Five ESE sites collectively contain substantial quantities of Category I special nuclear material. These include the following: the Savannah River Site near Aiken, South Carolina, and the Hanford Site in Richland, Washington, which are managed by EM; the Idaho National Engineering and Environmental Laboratory and the Argonne National Laboratory-West, which are located in Idaho Falls, Idaho, and are managed by NE; and the Oak Ridge National Laboratory in Oak Ridge, Tennessee, which is managed by SC. Contractors operate each site for ESE. The ESE program offices that oversee these sites—EM, NE, and SC—have requested about $397 million in fiscal year 2005 for security.

Two other organizations are important contributors to DOE’s security program. The Office of Security in DOE’s Office of Security and Safety Performance Assurance develops and promulgates orders and policies, such as the DBT, to guide the department’s safeguards and security programs. The Office of Independent Oversight and Performance Assurance, also in DOE’s Office of Security and Safety Performance Assurance, supports the department by, among other things, independently evaluating the effectiveness of contractors’ performance in safeguards and security. It also performs follow-up reviews to ensure that contractors have taken effective corrective actions and appropriately addressed weaknesses in safeguards and security.

The risks associated with Category I special nuclear materials vary but include the creation of improvised nuclear devices capable of producing a nuclear yield, theft for use in an illegal nuclear weapon, and the potential for sabotage in the form of radioactive dispersal. Because of these risks, DOE has long employed risk-based security practices. The key component of DOE’s well-established, risk-based security practices is the DBT, a classified document that identifies the characteristics of the potential threats to DOE assets.
The DBT traditionally has been based on a classified, multiagency intelligence community assessment of potential terrorist threats, known as the Postulated Threat. The DBT considers a variety of threats in addition to the terrorist threat. Other adversaries considered in the DBT include criminals, psychotics, disgruntled employees, violent activists, and spies. The DBT also considers the threat posed by insiders, those individuals who have authorized, unescorted access to any part of DOE facilities and programs. Insiders may operate alone or may assist an adversary group; they are routinely assumed to provide assistance to the terrorist groups found in the DBT. The threat from terrorist groups is generally the most demanding threat contained in the DBT.

DOE counters the terrorist threat specified in the DBT with a multifaceted protective system. While specific measures vary from site to site, all protective systems at DOE’s most sensitive sites employ a defense-in-depth concept that includes sensors, physical barriers, hardened facilities and vaults, and heavily armed paramilitary protective forces equipped with such items as automatic weapons, night vision equipment, body armor, and chemical protective gear. The effectiveness of the protective system is formally and regularly examined through vulnerability assessments. A vulnerability assessment is a systematic evaluation process in which qualitative and quantitative techniques are applied to detect vulnerabilities and arrive at effective protection of specific assets, such as special nuclear material. To conduct such assessments, DOE uses, among other things, subject matter experts, such as U.S. Special Forces; computer modeling to simulate attacks; and force-on-force performance testing, in which the site’s protective forces undergo simulated attacks by a group of mock terrorists.
The results of these assessments are documented at each site in a classified document known as the Site Safeguards and Security Plan. In addition to identifying known vulnerabilities, risks, and protection strategies for the site, the Site Safeguards and Security Plan formally acknowledges how much risk the contractor and DOE are willing to accept. Specifically, for more than a decade, DOE has employed a risk management approach that seeks to direct resources to its most critical assets—in this case Category I special nuclear material—and mitigate the risks to these assets to an acceptable level. Levels of risk—high, medium, and low—are assigned classified numerical values and are derived from a mathematical equation that compares a terrorist group’s capabilities with the overall effectiveness of the crucial elements of the site’s protective forces and systems. Historically, DOE has striven to keep its most critical assets at a low risk level and may insist on immediate compensatory measures should a significant vulnerability develop that increases risk above the low risk level. Compensatory measures could include deploying additional protective forces or curtailing operations until the asset can be better protected.

In September 2003, in response to a September 2000 DOE Inspector General’s report recommending that DOE establish a policy on what actions are required once a high or moderate risk is identified, DOE’s Office of Security issued a policy clarification stating that identified high risks at facilities must be formally reported to the Secretary of Energy or Deputy Secretary within 24 hours. In addition, under this policy clarification, identified high and moderate risks require corrective actions and regular reporting. Through a variety of complementary measures, DOE ensures that its safeguards and security policies are being complied with and are performing as intended.
Contractors perform regular self-assessments and are encouraged to uncover any problems themselves. DOE orders also require field offices to comprehensively survey contractors’ operations for safeguards and security every year. The Office of Independent Oversight and Performance Assurance in DOE’s Office of Security and Safety Performance Assurance provides yet another check through its comprehensive inspection program. All deficiencies identified during surveys and inspections require the contractors to take corrective action.

Reflecting the post-September 11 environment, the May 2003 DBT, among other things, identified a larger terrorist threat than did the 1999 DBT. It also expanded the range of terrorist objectives to include radiological, biological, and chemical sabotage. Key features of the 2003 DBT included the following:

Expanded terrorist characteristics and goals. The 2003 DBT assumes that terrorist groups are well armed and equipped; trained in paramilitary and guerrilla warfare skills and small unit tactics; highly motivated; willing to kill, risk death, or commit suicide; and capable of attacking without warning. Furthermore, according to the 2003 DBT, terrorists might attack a DOE facility for a variety of goals, including the theft of a nuclear weapon, nuclear test device, or special nuclear material; radiological, chemical, or biological sabotage; and the on-site detonation of a nuclear weapon, nuclear test device, or special nuclear material that results in a significant nuclear yield. DOE refers to such a detonation as an improvised nuclear device.

Increased the size of the terrorist group threat. The 2003 DBT increases the terrorist threat levels for the theft of the department’s highest value assets—Category I special nuclear materials—although not in a uniform way.
Previously, under the 1999 DBT, all DOE sites that possessed any type of Category I special nuclear material were required to defend against a uniform terrorist group composed of a relatively small number of individuals. Under the 2003 DBT, however, the department judged the theft of a nuclear weapon or test device to be more attractive to terrorists, and sites that have these assets are required to defend against a substantially higher number of terrorists than are other sites. For example, a DOE site that, among other things, assembles and disassembles nuclear weapons is required to defend against a larger terrorist group. Other DOE sites, such as an EM site that stores excess plutonium, have to defend against only a smaller group of terrorists, although the number of terrorists in the 2003 DBT is larger than the 1999 DBT number. DOE calls this a graded threat approach.

Mandated specific protection strategies. In line with the graded threat approach and depending on the type of materials they possess and the likely mission of the terrorist group, sites must now implement specific protection strategies for Category I special nuclear material. In addition, sites will have to develop, for the first time, specific protection strategies to defend facilities, such as radioactive waste storage areas, wastewater treatment plants, and science laboratories, against the threat of radiological, chemical, or biological sabotage.

Addressed improvised nuclear device concerns. The May 2003 DBT established a special team to report to the Secretary of Energy on each site’s potential for improvised nuclear devices. Based on the team’s advice, in April 2004 the Deputy Secretary of Energy designated whether a site had such a concern. This official designation was intended to help address the general dissatisfaction with previous DOE policies for improvised nuclear devices, knowledge of which was carefully controlled and not shared widely with security officials.
For example, some EM sites had no information at all on their potential for this risk.

When we testified before this Subcommittee in April 2004, we stated that while DOE had issued the final DBT in May 2003, it had only recently begun to resolve a number of significant issues that could affect the ability of its sites to fully meet the threat in the new DBT in a timely fashion. These issues involved issuing additional DBT implementation guidance, developing DBT implementation plans, and developing budgets to support these plans. We noted that fully resolving all of these issues might take several years and that the total cost of meeting the new threats was unknown. Consequently, we stated, full DBT implementation could occur anywhere from fiscal year 2005 to fiscal year 2008, well beyond the department’s goal of the end of fiscal year 2006. Because some sites would be unable to effectively counter the higher threat contained in the new DBT for up to several years, we stated that these sites should be considered to be at higher risk under the new DBT than they were under the old DBT.

After reviewing ESE’s efforts to implement the May 2003 DBT at sites containing Category I special nuclear material, we continue to be concerned about whether DOE can meet its fiscal year 2006 deadline for full DBT implementation. Specifically, while ESE sites that contain Category I special nuclear material have developed plans for implementing the May 2003 DBT, as directed by the Deputy Secretary of Energy, we believe there are four issues that will make it difficult to implement these plans in a timely fashion. First, ESE sites approved their implementation plans in February 2004, before the Deputy Secretary of Energy issued his guidance on which sites had improvised nuclear device vulnerabilities.
As noted previously, the May 2003 DBT created a special team, composed of weapons designers and security specialists, to report on each site’s improvised nuclear device vulnerabilities. The results of this report were briefed to senior DOE officials in March 2004, and the Deputy Secretary of Energy issued guidance, based on this report, to DOE sites in early April 2004. As a result, some sites may be required under the 2003 DBT to shift to enhanced protection strategies, which could be very costly. This special team’s report may most affect ESE sites because, in some cases, their improvised nuclear device potential had not previously been explored. In addition, ESE security officials told us that confusion exists about how, or whether, this guidance applies to their sites, and they stated that they are working with officials from DOE’s Office of Security to resolve this confusion. The Director of DOE’s Office of Security and Safety Performance Assurance agreed that additional guidance will be necessary to resolve this confusion. Consequently, because ESE sites developed their plans well before this guidance was issued, the assumptions in their plans may no longer be valid and the plans may need to be revised.

Second, the ESE site implementation plans are based on the May 2003 DBT; however, DOE is now reexamining the May 2003 DBT and may revise it. In our April 2004 report, we expressed several concerns about the May 2003 DBT. In particular, we noted that some DOE sites may have improvised nuclear device concerns that, if successfully exploited by terrorists, could result in a nuclear detonation. However, under the May 2003 DBT, DOE only required these sites to defend against a relatively small group of terrorists. Because we believed that DOE had not made a persuasive case for defending against a lower number of terrorists, we recommended that DOE reexamine how it applies the DBT to sites with improvised nuclear device concerns.
Subsequently, in May 2004, the Secretary of Energy announced that the department would reexamine the DBT. Originally, this reexamination was to be completed by June 30, 2004. However, according to the Director of DOE’s Office of Security and Safety Performance Assurance, this effort will not be completed until August 6, 2004. In addition, the Director stated that the end result of this effort may only be a plan on how to revise the DBT. Consequently, if the DBT is changed in a way that increases security requirements, some ESE offices may have to revise their implementation plans to reflect the need to provide for a more stringent defense.

Third, in one case ESE does not have adequate resources. Specifically, while ESE sites have developed implementation plans, the plan for one site was underfunded even under the old assumptions. NE security officials told us that for one site no DBT implementation funding had been requested for fiscal year 2005, even though the site recognized that it needed to substantially increase its protective forces to meet the new DBT.

Finally, ESE faces a number of complex organizational issues that could make DBT implementation more difficult. Specifically, EM’s Security Director told us that for EM to fully comply with the DBT requirements in fiscal year 2006 at one of its sites, it will have to close and de-inventory two facilities, consolidate excess materials into remaining special nuclear materials facilities, and move consolidated Category I special nuclear material, which the National Nuclear Security Administration’s Office of Secure Transportation will transport, to another site. Likewise, the EM Security Director told us that to meet the DBT requirements at another site, EM will have to accelerate the closure of one facility and transfer special nuclear material to another facility on the site.
Because the costs to close these facilities and to move materials within a site are borne by the EM program budget and not by the EM safeguards and security budget, obtaining adequate funding could be difficult. At an Office of Science site, a building that contains Category I special nuclear material is managed and protected by the Office of Science, while the material itself belongs to NE. NE is currently planning to remove the material and process it. After processing, the material will no longer have to meet the protection requirements for Category I special nuclear material. Accomplishing this task will require additional security measures, the planning and funding for which will have to be carefully coordinated between the Office of Science and NE. NE sites face similar issues. For example, the NE Security Director told us that EM currently owns all of the Category I special nuclear material stored at an NE site. EM is currently planning to have the National Nuclear Security Administration’s Office of Secure Transportation transport this material to several other locations by the end of January 2005. According to the NE site Security Director, NE is counting on the successful removal of this special nuclear material to meet the department’s fiscal year 2006 deadline for implementing the May 2003 DBT. To implement the May 2003 DBT, NE also needs to consolidate two of its sites into a single national laboratory, which will, among other things, ensure that it has an adequate number of protective forces. If the EM special nuclear materials are not moved and this consolidation is not achieved, the number of protective forces at this site may not be adequate. 
Because of the importance of successfully integrating multiple program activities with security requirements, we continue to believe, as we recommended in April 2004, that DOE needs to develop and implement a departmentwide, multiyear, fully resourced implementation plan for meeting the May 2003 DBT requirements that includes important programmatic activities such as the closure of facilities and the transportation of special nuclear materials. Mr. Chairman, this concludes our prepared statement. We would be happy to respond to any questions that you or Members of the Subcommittee may have. For further information on this testimony, please contact Robin M. Nazzaro at (202) 512-3841. James Noel and Jonathan Gill made key contributions to this testimony. Don Cowan and Preston Heard also made contributions to this testimony. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
A successful terrorist attack on Department of Energy (DOE) sites containing the material used in nuclear weapons, called special nuclear material, could have devastating consequences for the site and its surrounding communities. Because of these risks, DOE needs an effective safeguards and security program. A key component of an effective program is the design basis threat (DBT), a classified document that identifies, among other things, the potential size and capabilities of terrorist forces. The terrorist attacks of September 11, 2001, rendered the then-current DBT obsolete, resulting in DOE issuing a new version in May 2003. GAO examined the issues that could impede the ability of DOE's Office of Energy, Science and Environment to fully meet the threat contained in the May 2003 DBT by the department's fiscal year 2006 deadline.

Five Office of Energy, Science and Environment sites contain substantial quantities of Category I special nuclear material, which consists of specified quantities of plutonium and highly enriched uranium. These sites have all developed plans for implementing the May 2003 DBT. However, there are several issues that could make it difficult to implement these plans by DOE's deadline of the end of fiscal year 2006. The Office of Energy, Science and Environment sites approved their DBT implementation plans in February 2004, before the Deputy Secretary of Energy issued his April 2004 guidance on which sites had improvised nuclear device vulnerabilities. As a result, some sites may be required to shift to enhanced protection strategies, which could be very costly. Consequently, the assumptions in the Office of Energy, Science and Environment DBT implementation plans may no longer be valid, and the plans may need to be revised. The Office of Energy, Science and Environment site plans are based on the May 2003 DBT; however, DOE is now reexamining the May 2003 DBT and may revise it.
Consequently, if the DBT is changed in a way that increases security requirements, some Office of Energy, Science and Environment sites may have to revise their implementation plans to reflect the need to provide for a more stringent defense. The plan for one Office of Energy, Science and Environment site was underfunded. Specifically, officials in the Office of Nuclear Energy, Science and Technology, which is part of the Office of Energy, Science and Environment, told GAO that, for one site, no DBT implementation funding had been requested for fiscal year 2005. Finally, full implementation of these plans will require the successful resolution of complex organizational arrangements between various program and security offices. Consequently, GAO continues to believe, as it recommended in April 2004, that DOE needs to develop and implement a departmentwide, multiyear, fully resourced implementation plan for meeting the new DBT requirements that includes important programmatic activities such as the closure of facilities and the transportation of special nuclear materials.
In Bosnia, conflict raged from 1992 through 1995 and involved the Federal Republic of Yugoslavia, Croatia, and Bosnia’s three major ethnic groups. All were fighting for control of specific territories tied to each group’s definition of its own state. During this time an estimated 2.3 million people became refugees or were internally displaced. NATO forces intervened in the conflict to support international humanitarian and peacekeeping operations beginning in 1993, culminating in a month-long bombing campaign against Bosnian-Serb forces in July 1995. This pressure and U.S.-led negotiating efforts resulted in a cease-fire and negotiation of the Dayton Peace Agreement in December 1995. About 54,000 NATO-led troops were deployed beginning in late 1995 to enforce the military aspects of the agreement and provide security for humanitarian and other assistance activities. Currently, about 12,000 international troops remain in Bosnia to provide security, including 1,800 U.S. soldiers.

The conflict in and around the Serbian province of Kosovo between Yugoslav security forces and ethnic Albanian insurgents fighting for Kosovo’s independence took place from early 1998 through mid-1999. NATO initiated a bombing campaign against Yugoslavia in March 1999 to end Yugoslav aggression and subsequently deployed about 50,000 troops to enforce compliance with cease-fire and withdrawal agreements. Currently, there are about 25,000 NATO-led peacekeeping troops in Kosovo, including about 2,500 U.S. soldiers.

The conflict in Afghanistan extends back to the Soviet Union’s 10-year occupation of the country that began in 1979, during which various countries, including the United States, backed Afghan resistance efforts. Three years after Soviet forces withdrew, the communist regime fell to the Afghan resistance, but unrest continued.
The Taliban movement emerged in the mid-1990s but was removed by coalition forces in late 2001 for harboring al Qaeda terrorists who attacked the United States on September 11. In December 2001, the Bonn Agreement was signed, which provided for interim governance of the country. Currently, about 4,600 International Security Assistance Force troops provide security for the city of Kabul and the surrounding area, and approximately 11,000 U.S.-led coalition forces continue to fight remnants of the Taliban and al Qaeda.

GAO’s work over the past 10 years on Bosnia and Kosovo, and our recent work on Afghanistan, indicate that post-conflict assistance is a broad, long-term effort that requires humanitarian, security, economic, governance, and democracy-building measures. For Bosnia and Kosovo, forces led by the North Atlantic Treaty Organization provided overall security, and the international community developed country-specific and regional frameworks for rebuilding the country and province, respectively. Bosnia’s plan included the 3- to 4-year, $5.1 billion Priority Reconstruction Program, which provided humanitarian, economic, and other assistance based on needs assessments conducted by the World Bank and other international organizations. A number of international organizations involved in the Bosnia peace operation, including the Office of the High Representative, the United Nations, and the Organization for Security and Cooperation in Europe, helped develop government institutions and supported democracy-building measures and police training. In Kosovo, a U.N. peace operation oversaw assistance through (1) the United Nations and other donors for housing winterization, refugee relief, and other short-term needs; (2) the medium-term Reconstruction and Recovery Program devised by the European Commission and the World Bank; and (3) programs to build a judiciary, a police force, and government institutions.
The Bosnia- and Kosovo-specific programs were complemented in 1999 by the Stability Pact, which focused on encouraging democratization, human rights, economic reconstruction, and security throughout the region.

For Afghanistan, the World Food Program’s (WFP) food assistance effort constituted the largest portion of humanitarian assistance in the post-conflict period. To determine the needs of the Afghan people, WFP conducted and continues to undertake periodic rapid food needs assessments and longer-term food and crop supply assessments. Based on the results of these reviews, WFP designs short-term emergency operations focusing on free distribution of food, as well as longer-term recovery operations including health, education, training, and infrastructure projects. Owing to the size of WFP’s effort and its years of experience in Afghanistan, WFP provided much of the logistics support for other organizations operating in Afghanistan during 2002 and 2003. A range of humanitarian and longer-term development assistance is being provided through broad assistance programs developed by the United Nations and other multilateral, bilateral, and nongovernmental organizations. These programs include infrastructure rehabilitation, education, health, agriculture, and governance projects, among others.

Post-conflict assistance efforts differ in the extent of multilateral involvement. In Bosnia and Kosovo, the North Atlantic Treaty Organization is responsible for enforcing the military and security aspects of peace operations under the terms of U.N. Security Council Resolutions 1031 and 1244, respectively. The United Nations, the European Union, and other international organizations are responsible for rebuilding political and civic institutions and the region’s economies under U.N. resolutions and the Dayton Peace Agreement. In Afghanistan, the United States is one of many bilateral and multilateral donors of aid helping to implement the Bonn Agreement.
In contrast, in post-conflict Iraq, the United States and Britain are occupying powers under international law and are recognized as such in U.N. Security Council Resolution 1483. The obligations of occupying forces as enumerated in international conventions include respecting the human rights of the local population; ensuring public order, safety, and health; protecting property; and facilitating humanitarian relief operations, among others.

While the post-conflict situation in each location has varied, certain similarities are apparent, chief among them that assistance efforts continue to be provided in volatile and highly politicized environments where local parties have competing interests and differing degrees of support for the peace process. In Bosnia, the Bosnian Serb parties continue to oppose terms of the peace agreement, such as the freedom of ethnic minority refugees and internally displaced persons to return to their prewar homes. In Kosovo, groups of Kosovar Albanians and Serbs retain unauthorized weapons and commit acts of violence and intimidation against ethnic minorities in violation of the peace agreements. In Afghanistan, warlords control much of the country and foster an illegitimate economy fueled by the smuggling of arms, drugs, and other goods. They also withhold hundreds of millions of dollars in customs duties collected at border points in the regions they control, depriving the central government of revenue to fund the country’s reconstruction.

Our work has consistently shown that effective reconstruction assistance cannot be provided without three essential elements: a secure environment, a strategic vision for the overall effort, and strong leadership. In Bosnia and Kosovo, humanitarian and other civilian workers were generally able to perform their tasks because they were supported by large NATO-led forces.
In Bosnia, the NATO-led forces enforced the cease-fire, ensured the separation and progressive reduction of the three ethnically based armies from more than 400,000 soldiers and militia to 20,000 by 2003, and disbanded paramilitary police units. In Kosovo, the NATO-led force provided security by (1) ensuring that uniformed Yugoslav security forces withdrew from Kosovo as scheduled and remained outside the province and (2) monitoring the demilitarization and transformation of the Kosovo Liberation Army. Despite the relative security in these two locations, various paramilitaries continued to operate, and sporadic violent incidents occurred against international workers and the local population. From 1996 through 2002, eight humanitarian workers were killed in Bosnia, and from 1999 to 2002, two humanitarian workers were killed in Kosovo as a result of hostile action.

In contrast, throughout the post-conflict period in Afghanistan, humanitarian assistance workers have been at risk due to ongoing security problems caused by domestic terrorism, long-standing rivalries among warlords, and the national government’s lack of control over the majority of the country. The 4,600-troop International Security Assistance Force operates only in Kabul and surrounding areas, while the mission of the approximately 11,000-troop U.S.-led coalition force (9,000 U.S. and 2,000 non-U.S. troops) is to root out the remnants of the Taliban and terrorist groups, not to provide security. In 2002 and 2003, the deteriorating security situation was marked by terrorist attacks against the Afghan government, the Afghan people, and the international community, including humanitarian assistance workers. Among the incidents were attempted assassinations of the Minister of Defense and the President; rocket attacks on U.S. and international military installations; and bombings in the center of Kabul, at International Security Assistance Force headquarters, and at U.N. compounds.
On June 17, 2003, the U.N. Security Council expressed its concern over the increased number of attacks against humanitarian personnel, coalition forces, International Security Assistance Force troops, and Afghan Transitional Administration targets by Taliban and other rebel elements. These incidents have disrupted humanitarian assistance and the overall recovery effort. Since the signing of the Bonn Agreement in December 2001, four assistance workers and 10 International Security Assistance Force troops have been killed due to hostile action.

In our years of work on post-conflict situations, a key lesson learned is that a strategic vision is essential for providing assistance effectively. In Bosnia, the Dayton Agreement provided a framework for overall assistance efforts but lacked an overall vision for the operation. This hindered both the military and civilian components of the peace operation from implementing the peace agreement. For example, the Dayton Agreement determined that the military operation in Bosnia would accomplish its security objectives and withdraw in about 1 year but did not address the security problem for the ongoing reconstruction efforts after that time. Recognizing this deficiency, NATO, supported by the President of the United States, subsequently provided an overall vision for the mission by first extending the time frame by 18 months and then tying the withdrawal of the NATO-led forces to benchmarks, such as establishing functional national institutions and implementing democratic reforms.

In Afghanistan, the Bonn Agreement sets out a framework for establishing a new government. In addition, multilateral, bilateral, and nongovernmental organizations providing humanitarian assistance and longer-term development assistance have each developed independent strategies, which have resulted in a highly fragmented reconstruction effort. To bring coherence to the effort, the Afghan government developed a National Development Framework and Budget.
The framework provides a vision for a reconstructed Afghanistan and broadly establishes national goals and policy directions. The budget articulates development projects intended to achieve national goals. However, despite the development of these documents, donor governments and assistance agencies have continued to develop their own strategies, as well as fund and implement projects outside the Afghan government’s national budget. Our work also highlights the need for strong leadership in post-conflict assistance. In Bosnia, for example, the international community created the Office of the High Representative to assist the parties in implementing the Dayton Agreement and coordinate international assistance efforts, but initially limited the High Representative to an advisory role. Frustrated by the slow pace of the agreement’s implementation, the international community later strengthened the High Representative’s authority, which allowed him to annul laws that impeded the peace process and to remove Bosnian officials who were hindering progress. In Afghanistan, WFP recognized the need for strong leadership and created the position of Special Envoy of the Executive Director for the Afghan Region. The special envoy led and directed all WFP operations in Afghanistan and neighboring countries during the winter of 2001–2002, when the combination of weather and conflict was expected to increase the need for food assistance. WFP was thus able to consolidate control of all resources in the region, streamline its operations, and accelerate movement of assistance. WFP points to creation of the special envoy as one of the main reasons it was able to move record amounts of food into Afghanistan from November 2001 through January 2002. In December 2001 alone, WFP delivered 116,000 metric tons of food, the single largest monthly food delivery within a complex emergency operation in WFP’s history. 
Among the challenges to implementing post-conflict assistance operations that we have identified are ensuring sustained political and financial commitment, adequate human resources and funds to carry out operations, coordinated assistance efforts, and local support. Ensuring sustained political and financial commitment for post-conflict assistance efforts is a key challenge because these efforts take longer, are more complicated, and are more expensive than envisioned. In Bosnia, reconstruction continues after 8 years, and there is no end date for withdrawing international troops, despite the initial intent to withdraw them in 1 year. Corruption is difficult to overcome and threatens successful implementation of the Dayton Peace Agreement. In Kosovo, after 4 years, there is still no agreement on the final status of the territory—whether it will be a relatively autonomous province of Serbia or a sovereign entity. This makes it impossible to establish a time frame for a transition in assistance efforts. Moreover, providing this assistance costs more than anticipated. Total U.S. military, civilian, humanitarian, and reconstruction assistance in Bosnia and Kosovo from 1996 through 2002 was approximately $19.7 billion—a figure that significantly exceeded initial expectations. In Afghanistan, the preliminary needs assessment prepared by the international community estimated that between $11.4 billion and $18.1 billion in long-term development assistance would be needed over 10 years to rebuild infrastructure and the institutions of a stable Afghan state. Others have estimated that much more is required. For January 2002 through March 2003, donors pledged $2.1 billion. However, only 27 percent, or $499 million, was spent on major development projects such as roads and bridges; the remainder was spent on humanitarian assistance. Consequently, more than a year and a half of the 10-year reconstruction period has passed and little in the way of reconstruction has begun. 
For fiscal year 2002, U.S. assistance in Afghanistan totaled approximately $717 million. The Department of Defense estimates that military costs in Afghanistan are currently about $900 million per month, or $10.8 billion annually. Another challenge to effectively implementing assistance efforts is ensuring sufficient personnel to carry out operations and follow-through on pledged funds. In Bosnia and Kosovo, the international community has had difficulties providing civilian staff and the specialized police for security in the volatile post-conflict environment. For example, operations in Bosnia had a 40 percent shortfall in multinational special police trained to deal with civil disturbances from returns of refugees or from efforts to install elected officials. These shortfalls sometimes threatened security in potentially violent situations. In Kosovo, U.N. efforts to establish a civil administration, create municipal administrative structures, and foster democracy were hindered by the lack of qualified international administrators and staff. Delays in getting these staff on the ground and working allowed the Kosovo Liberation Army to temporarily run government institutions in an autocratic manner and made it difficult to regain international control. In Afghanistan, inadequate and untimely donor support disrupted WFP’s food assistance efforts. When the operation began in April 2002, WFP had received only $63.9 million, or 22 percent, of required resources. From April through June—the preharvest period when Afghan food supplies are traditionally at their lowest point—WFP was able to meet only 51 percent of the planned requirement for assistance. WFP’s actual deliveries were, on average, 33 percent below actual requirements for the April 2002 through January 2003 period. Lack of timely donor contributions forced WFP to reduce rations to returning refugees and internally displaced persons from 150 kilograms to 50 kilograms. 
Lack of donor support also forced WFP and its implementing partners to delay, in some cases for up to 10 weeks, compensation promised to Afghans who participated in the food-for-work and food-for-asset-creation projects. WFP lost credibility with Afghans and nongovernmental organizations as a result. Similarly, resource shortages forced WFP to delay for up to 8 weeks in-kind payments of food in its civil service support program, which aimed to help the new government establish itself. Coordinating and directing assistance activities between and among multiple international donors and military components has been a challenge. In Bosnia, 59 donor nations and international organizations—including NATO, the United Nations, the Organization for Security and Cooperation in Europe, the European Union, the World Bank, and nongovernmental organizations—had a role in assistance activities but did not always coordinate their actions. For example, the United Nations and NATO initially could not agree on who would control and reform the Bosnian special or paramilitary police units. For the first year of post-conflict operations, these special police forces impeded assistance activities. The NATO-led force finally agreed to define these special police forces as military units and disbanded them in 1997. In Kosovo, the need for overall coordination was recognized and addressed by giving the United Nations a central role in providing overall coordination for humanitarian affairs, civil administration activities, and institution building. In Afghanistan, coordination of international assistance in general, and agricultural assistance in particular, was weak in 2002. From the beginning of the assistance effort, donors were urged to defer to the Afghan government regarding coordination. 
According to the United Nations, Afghan government authorities were responsible for coordination, and the international community was to operate and relate to the Afghan government in a coherent manner rather than through a series of disparate relationships. The Afghan government’s attempt to exert leadership over the reconstruction process in 2002 was largely ineffective, primarily because the bilateral, multilateral, and nongovernmental assistance agencies—including the United Nations, the Food and Agriculture Organization, the Asian Development Bank, the World Bank, the U.S. Agency for International Development (USAID), and others—prepared individual reconstruction strategies, had their own mandates and funding sources, and pursued development efforts in Afghanistan independently. In addition, according to the international community, the Afghan government lacked the capacity and resources to be an effective coordinator, and thus these responsibilities could not be delegated to it. In December 2002, the Afghan government instituted a new coordination mechanism, but this mechanism has not surmounted conditions that prevented effective coordination throughout 2002. Another challenge is ensuring that local political leaders and influential groups support and participate in assistance activities. In Bosnia, the Bosnian-Serb leaders and their political parties opposed the Dayton Peace Agreement and blocked assistance efforts at every turn. For example, they tried to block the creation of a state border service to help all Bosnians move freely and obstructed efforts to combat crime and corruption, thus solidifying hard-line opposition and extremist views. In mid-1997, when donor nations and organizations started linking their economic assistance to compliance with the Dayton Agreement, some Bosnian-Serb leaders began implementing some of the agreement’s key provisions. 
Although Afghanistan’s central government is working in partnership with the international community to implement the Bonn Agreement and rebuild the country, warlords control much of the country and foster an illegitimate economy. They control private armies of tens of thousands of armed men, while the international community—led by the U.S. military—struggles to train a new Afghan national army. Meanwhile, the Taliban regime was not party to the Bonn Agreement, and remnants of the regime continue to engage in guerrilla attacks against the government and the international community. Over the course of our work, we found that the international community and the United States provide a number of mechanisms for accountability in and oversight of assistance operations. First, the international community has monitored the extent to which post-conflict assistance achieved its objectives through reports from the United Nations and the international coordinating mechanisms. Individual donors and agencies also have monitored their respective on-the-ground operations. For example, the United States monitors aid through the U.S. Agency for International Development and USAID’s inspector general. In Bosnia, the Peace Implementation Council (PIC)—a group of 59 countries and international organizations that sponsors and directs the peace implementation process—oversaw humanitarian and reconstruction programs, set objectives for the operation, monitored progress toward those goals, and established mission reconstruction and other benchmarks in the spring of 1998. The High Representative in Bosnia, whose many responsibilities include monitoring implementation of the Dayton Agreement, reports to the Peace Implementation Council on progress and obstacles in this area. 
In Kosovo, the High-Level Steering Group (composed of Canada, France, Germany, Italy, Japan, the United Kingdom, the United States, the European Union, the United Nations, the World Bank, the International Monetary Fund, and the European Bank for Reconstruction and Development) performed a similar guidance and oversight role. It set priorities for an action plan to rebuild Kosovo and to repair the economies of the neighboring countries through the Stability Pact. Moreover, the U.N. interim administration in Kosovo was responsible for monitoring and reporting on all aspects of the peace operation, including humanitarian and economic reconstruction efforts. In Afghanistan, WFP has used a number of real-time monitoring mechanisms to track the distribution of commodities. Our review of WFP data suggested that food distributions have been effective and losses minimal. WFP data indicated that in Afghanistan, an average of 2.4 monitoring visits were conducted on each food aid project implemented between April 2002 and November 2003. In addition to WFP monitors, private voluntary organization implementing partners who distribute food at the local beneficiary level make monitoring visits in areas where WFP staff cannot travel due to security concerns. During our visits to project and warehouse sites in Afghanistan, we observed orderly and efficient storage, handling, and distribution of food assistance. (Because of security restrictions, we were able to conduct only limited site visits in Afghanistan.) WFP’s internal auditor reviewed its monitoring operations in Afghanistan in August 2002 and found no material weaknesses. USAID has also conducted periodic monitoring of WFP activities and has not found any major flaws in its operations. Over the past 10 years, GAO has evaluated assistance efforts in 16 post-conflict emergencies, including those in Haiti, Cambodia, Bosnia, Kosovo, and Afghanistan. 
Specifically, these evaluations have focused on governance, democracy-building, rule of law, anticorruption, economic, military, food, agriculture, demining, refugee, and internally displaced person assistance projects. In broader terms, our work has examined the progress toward achieving the goals of the Dayton Peace Agreement and the military and political settlements for Kosovo, as well as the obstacles to achieving U.S. policy goals in Bosnia, Kosovo, and Afghanistan. Mr. Chairman, this concludes my prepared statement. I will be happy to respond to any questions you or other members may have. For future contacts regarding this testimony, please call Susan Westin at (202) 512-4128. Key contributors to this testimony were Phillip J. Thomas, David M. Bruno, Janey Cohen, B. Patrick Hickey, Judy McCloskey, Tetsuo Miyabara, and Alexandre Tiersky. Foreign Assistance: Lack of Strategic Focus and Obstacles to Agricultural Recovery Threaten Afghanistan’s Stability. GAO-03-607. Washington, D.C.: June 30, 2003. Rebuilding Iraq. GAO-03-792R. Washington, D.C.: May 15, 2003. Cambodia: Governance Reform Progressing, But Key Efforts Are Lagging. GAO-02-569. Washington, D.C.: June 13, 2002. Issues in Implementing International Peace Operations. GAO-02-707R. Washington, D.C.: May 24, 2002. U.N. Peacekeeping: Estimated U.S. Contributions, Fiscal Years 1996-2001. GAO-02-294. Washington, D.C.: February 11, 2002. Bosnia: Crime and Corruption Threaten Successful Implementation of the Dayton Peace Agreement. T-NSIAD-00-219. Washington, D.C.: July 19, 2000. Bosnia Peace Operation: Crime and Corruption Threaten Successful Implementation of the Dayton Peace Agreement. GAO/NSIAD-00-156. Washington, D.C.: July 7, 2000. Balkans Security: Current and Projected Factors Affecting Regional Stability. NSIAD-00-125BR. Washington, D.C.: April 24, 2000. Bosnia Peace Operation: Mission, Structure, and Transition Strategy of NATO's Stabilization Force. GAO/NSIAD-99-19. 
Washington, D.C.: October 8, 1998. Bosnia Peace Operation: Pace of Implementing Dayton Accelerated as International Involvement Increased. GAO/NSIAD-98-138. Washington, D.C.: June 5, 1998. Former Yugoslavia: War Crimes Tribunal’s Workload Exceeds Capacity. GAO/NSIAD-98-134. Washington, D.C.: June 2, 1998.
The circumstances of armed conflicts in Bosnia, Kosovo, and Afghanistan differed in many respects, but in all three cases the United States and the international community became involved in the wars and post-conflict assistance because of important national and international interests. Over the past 10 years, GAO has done extensive work assessing post-conflict assistance in Bosnia and Kosovo and, more recently, has evaluated such assistance to Afghanistan. GAO was asked to provide observations on assistance efforts in these countries that may be applicable to ongoing assistance in Iraq. Specifically, GAO assessed (1) the nature and extent of post-conflict assistance in Bosnia, Kosovo, and Afghanistan; (2) essential components for carrying out assistance effectively; (3) challenges to implementation; and (4) mechanisms used for accountability and oversight. Humanitarian assistance following armed conflict in Bosnia, Kosovo, and Afghanistan--as well as in Iraq--is part of a broader, long-term assistance effort comprising humanitarian, military, economic, governance, and democracy-building measures. While the post-conflict situations in these countries have varied, they have certain conditions in common--most notably the volatile and highly politicized environment in which assistance operations take place. During years of work on post-conflict situations, GAO found that three key components are needed for effective implementation of assistance efforts: a secure environment where humanitarian and other civilian workers are able to perform their tasks; a strategic vision that looks beyond the immediate situation and plans for ongoing efforts; and strong leadership with the authority to direct assistance operations. GAO also observed a number of challenges to implementing assistance operations, including the need for sustained political and financial commitment, adequate resources, coordinated assistance efforts, and support of the host government and civil society. 
Finally, GAO found that the international community and the United States provide a number of mechanisms for accountability in and oversight of assistance operations.
Savings bonds offer investors the ability to purchase securities with lower minimum denominations than those for marketable Treasury securities. In response to concerns raised regarding the cost-effectiveness of the savings bond program as a funding mechanism for federal government operations, Treasury created a cost-effectiveness model that is now used and maintained by BPD. The model was intended to compare the projected costs for $1 billion of new savings bond borrowing and comparable borrowing through marketable Treasury securities. The model is based on the characteristics of the Series EE and Series I savings bonds and is intended to compare these costs on a present value basis. Treasury is authorized to borrow money on the credit of the United States to fund federal government operations. Within Treasury, BPD is responsible for prescribing the debt instruments, limiting and restricting the amount and composition of the debt, paying interest to investors, and accounting for the resulting debt. However, Treasury sets the financial terms and conditions of savings bonds and marketable Treasury securities, including denomination and pricing changes. Savings bonds are an alternative for investors unable or unwilling to pay the minimum denomination of marketable Treasury securities. Table 1 describes several principal differences between Series EE and Series I savings bonds and selected marketable Treasury securities. In March 2002 the Treasury Assistant Secretary for Financial Markets testified before the House Appropriations Committee, Subcommittee on Treasury, Postal Service, and General Government that Treasury believes that the availability of a savings vehicle with the full faith and credit of the United States should not be limited to those who can afford the minimum $1,000 denominations available in auctions of marketable Treasury securities. 
The official also said that even though savings bonds are not the most efficient form of borrowing in operational terms, Treasury would continue to offer them to the public. Treasury is seeking to reduce the operational costs of savings bonds by offering the securities in paperless form. Treasury has started to offer savings bonds that are held in direct Treasury accounts instead of issuing paper certificates for the bonds. The Series EE and Series I savings bonds are available through the new TreasuryDirect system. A BPD planning document describes BPD’s objective as enabling Treasury to stop issuing paper savings bonds and thus begin to realize the long-term cost reductions expected from additional automation and more efficient processing. In response to concerns raised regarding the cost-effectiveness of the savings bond program as a funding mechanism for federal government operations, Treasury created a cost-effectiveness model. According to a Treasury report to the House Committee on Appropriations and Committee on Financial Services, the savings bond cost-effectiveness model has been used to assess potential changes in the financial terms and conditions for Series EE and Series I savings bonds. According to model documentation, BPD also uses model results to project and trace annual costs and recoveries for distinct cost centers over the life of a savings bond loan. What is collectively referred to as the savings bond cost-effectiveness model comprises two submodels, Series EE and Series I, with the differences between the two reflecting differences between the two series of savings bonds. The results of each submodel are averaged to estimate the overall cost-effectiveness of the savings bond program. According to a BPD official, the model calculates the value of a single savings bond and its costs to Treasury, and extends this to the total savings bond population in a given year. 
Subsequently, the model attempts to quantify the differences between the savings bonds and marketable Treasury securities (noted in table 1). The model was intended to compare the projected costs for $1 billion of new savings bond borrowing and those for $1 billion in marketable Treasury securities on a present value basis, that is, discounting the costs over time to permit a valid comparison. The savings bond cost-effectiveness model utilizes seven key parameters: administrative costs, historic redemption patterns, sales volume, savings bond yields, maturity period, equivalent marketable yield, and tax recovery. Table 2 describes the key parameters of the model in detail. OMB guidelines state that a cost-effectiveness analysis is appropriate to use in an analysis of government programs when the benefits of competing alternatives are the same or where a policy decision has been made that the benefits of a program must be provided. A program is cost effective if, on the basis of life cycle cost analysis of competing alternatives, it is determined to have the lowest costs expressed in present value terms for a given amount of benefits. The conceptual design underlying the savings bond cost-effectiveness model reflects this OMB guidance. However, the present value calculations in the model contain errors. As a result, the model’s estimated “present values” do not follow OMB guidance and common financial economics practice, and the model does not provide Treasury with the information it needs to determine whether savings bonds are cost-effective. The model’s conceptual design follows OMB guidelines for cost-effectiveness analysis. Figure 1 shows the conceptual design of the model. 
OMB Circular A-94, which is applicable to executive branch agencies, provides that the standard criterion for deciding whether a government program is cost-effective is net present value—a comparison of the discounted monetized value of the expected life cycle costs of alternative means of achieving the same stream of benefits. However, in its comments on this report BPD asserted that the model’s approach follows an alternative OMB method to determine cost-effectiveness. BPD stated that the model measures cost-effectiveness as the “relative financial benefit from two borrowing options whose overall costs are identical. Treasury’s benefit from each alternative is the amount of financing realized at the time borrowing occurs.” We have addressed this comment in the Agency Comments and Our Evaluation section. A key concept in finance is recognizing that the value associated with funds received or paid at different points changes over time. Funds have a time value because of the opportunity to invest them at different interest rates and in different financial alternatives. Investors demand some compensation for making funds available today in return for future repayment. For example, the interest paid on a loan is a measure of this compensation. Essentially, a present value calculation measures the value today that would be equivalent to a future payment, or stream of payments, by discounting the future payments (using an appropriate discount rate). For Treasury, this is the value today of the future payments to investors of securities offered for sale which, in the context of the model, is the redemption value of Series EE and Series I savings bonds and the repayment stream of the alternative marketable Treasury security (that is, any coupons plus maturity value). Calculating the present value for each alternative takes the monetary value of costs over time and discounts them at an appropriate discount rate. 
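The discounting arithmetic just described can be sketched in a few lines; the payment amounts and the 5 percent rate below are hypothetical, chosen only to show the mechanics rather than to reproduce any Treasury figure.

```python
# Illustrative sketch (not Treasury's model): discount future payments to
# today's dollars. The payment amounts and the 5% rate are hypothetical.

def present_value(payments, rate):
    """Discount a list of (years_from_now, amount) pairs and sum them."""
    return sum(amount / (1 + rate) ** t for t, amount in payments)

# A $1,000 payment due in 10 years is worth about $614 today at 5% per year.
pv = present_value([(10, 1000.0)], 0.05)
```

The same routine values a stream of payments: each coupon and the maturity value are discounted by the time remaining until they are paid, then summed.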
Discounting transforms costs occurring in different time periods to a common unit of measurement (app. II describes this in greater detail). As table 1 notes, there are several distinctions between savings bonds and marketable Treasury securities, some of which are relevant to the model. Most notably, the interest rates and the timing of the interest payments are different. Accurate implementation of the conceptual design requires that the model address these issues in order to construct comparable present values for the costs of savings bonds and marketable Treasury securities. The model attempts to address these distinctions by (1) creating an after-tax present value discount factor for the marketable Treasury security from a 6-month average of the constant maturity yield curve (commonly referred to as the “constant maturity Treasury,” or CMT), and (2) reducing the present value of the marketable Treasury security by subtracting its estimated (that is, not paid) “discounted” coupons. In general, the model is conceptually designed to create a marketable Treasury security comparable to a savings bond such that the repayment stream (that is, any coupons plus maturity value) to an investor is equal to the savings bond’s net cost to Treasury (that is, redemption value adjusted for administrative unit costs and tax revenue implications). The repayment stream of the created marketable Treasury security and the adjusted redemption value of the savings bonds represent costs to Treasury from offering these securities for sale. From this point, the model is intended to compare the costs of these two financing alternatives on a present value basis. According to model documentation, the present value of the marketable Treasury security is constructed by discounting the savings bond’s redemption value, adjusted for Treasury’s unit cost of redemption and tax revenue implications, at an equivalent after-tax rate for marketable Treasury securities of the same maturity. 
The model’s redemption value calculations for the Series EE and Series I savings bonds are similar; the differences reflect the different structures of the two series. Table 3 provides additional detail on the redemption value calculation and variables for both the Series EE and Series I savings bond. Appendix III provides examples of redemption value calculations for both the Series EE and Series I savings bond. The tax revenue implications are reflected in the model as a tax recovery rate. The model assumes that all savings bond tax recoveries are deferred until redemption. Tax recovery—the taxes collected on savings bond earnings that had been deferred until redemption—increases the revenues to Treasury. The model calculates the effect of tax recovery, in terms of life cycle costs for the model, by reducing the amount Treasury pays to an investor at redemption. However, the tax recovery rate is reduced in the model to reflect the education bond program. In general, as shown in table 1, savings bonds are eligible for tax benefits upon redemption when used for qualified education expenses. The administrative unit cost to Treasury from redeeming savings bonds reduces the revenues to Treasury. The model calculates the effect of administrative unit redemption cost, in terms of life cycle costs for the model, by increasing the amount Treasury pays to an investor at redemption. The model constructs an after-tax “discount factor” based on a 6-month average of the CMT. At the time of our review, the window was the 6-month period ending October 31, 2001. According to BPD officials, the model is intended to perform an additional step to calculate the “present value” of the marketable Treasury security. This additional step, according to model documentation, is intended to reflect the difference between savings bonds, which do not pay periodic interest, and marketable Treasury securities that do pay such interest in the form of coupons. 
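A minimal sketch of the net-cost-at-redemption idea described above, assuming semiannual compounding; the 4 percent rate, 25 percent tax recovery rate, and $0.50 unit redemption cost are hypothetical placeholders, not the model's actual parameter values.

```python
# Hedged sketch: a savings bond's redemption value grows by semiannual
# compounding; redemption raises Treasury's outlay by a unit redemption cost
# and lowers it by the tax recovered on the deferred interest. All parameter
# values below are hypothetical, not the model's actual inputs.

def net_cost_at_redemption(issue_price, annual_rate, years_held,
                           tax_recovery_rate, unit_redemption_cost):
    redemption_value = issue_price * (1 + annual_rate / 2) ** (2 * years_held)
    deferred_interest = redemption_value - issue_price
    tax_recovery = tax_recovery_rate * deferred_interest
    return redemption_value + unit_redemption_cost - tax_recovery

# A $25 bond held 10 years at a hypothetical 4% annual rate:
cost = net_cost_at_redemption(25.0, 0.04, 10, 0.25, 0.50)
```

Because tax recovery is assumed deferred until redemption, an earlier redemption shrinks both the deferred interest and the recovery on it, which is one reason the redemption pattern is a key model parameter.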
The model treats the coupon payments that the marketable Treasury security would pay as a separate security in which the tax recovery is simultaneous with the payment. First, the estimated coupons are created based on the savings bond’s redemption value, adjusted for Treasury’s unit cost of redemption and tax revenue estimates. Second, these coupons are “discounted” by an after-tax “discount factor” based on a 6-month average of the CMT. BPD believes that these coupons would reduce the benefit of the initial marketable Treasury security and therefore its “present value.” According to model documentation, these are subtracted from the “discounted” savings bond final payout, adjusted for Treasury’s unit cost of redemption and tax revenue implications. The model calculations for the “present value” of the marketable Treasury security, however, do not follow model documentation in that the savings bond final payout is not discounted. BPD officials confirmed that the model calculations actually construct the “present value” of the marketable Treasury security as equal to the redemption value of the savings bond, adjusted for Treasury’s unit cost of redemption and tax revenue implications, net of estimated “discounted” coupons. According to model documentation, the “present value” of the savings bond is its issue price less the unit cost to issue. The model’s calculation returns a value that is equal to Treasury receipts from a savings bond, net of the administrative unit cost of issuance. As previously mentioned, the model is intended to compare a savings bond and a marketable Treasury security on a present value basis. The difference, according to a BPD official, is then translated to the total savings bond population in a given year and converted to a ratio of millions of dollars in cost per $1 billion borrowed. 
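The two “present values” as just described can be sketched as follows. This is an illustration of the calculation the report describes, not the model itself, and every input is hypothetical: the marketable side starts from the savings bond's adjusted redemption value and subtracts only the discounted estimated coupons, while the savings bond side is simply issue price less unit issue cost.

```python
# Sketch of the calculations described above, with hypothetical inputs.

def model_marketable_pv(adjusted_redemption, annual_coupon, years,
                        after_tax_rate):
    # Only the estimated coupons are discounted; the final payout is not.
    discounted_coupons = sum(annual_coupon / (1 + after_tax_rate) ** t
                             for t in range(1, years + 1))
    return adjusted_redemption - discounted_coupons

def model_savings_bond_pv(issue_price, unit_issue_cost):
    return issue_price - unit_issue_cost

# Hypothetical $25 bond: $34.61 adjusted redemption value after 10 years,
# $0.50 estimated annual coupon, 3% after-tax "discount factor," $0.50 to issue.
per_bond_difference = (model_marketable_pv(34.61, 0.50, 10, 0.03)
                       - model_savings_bond_pv(25.0, 0.50))
```

The model then projects this per-bond difference across the prior fiscal year's sales volume and converts it to the ratio of millions of dollars per $1 billion borrowed.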
The model calculation takes the difference between the two “present values” described above, projects the difference across the sales volume for the prior fiscal year, and then converts the difference to a ratio that measures cost savings in millions per $1 billion borrowed. Appendix IV provides a more detailed discussion of the model calculations. Although Treasury has presented the model as measuring cost-effectiveness on a present value basis, most notably in a July 2002 report to Congress, the model does not construct a present value comparison in accordance with OMB guidance. Our review indicates that the model does not accurately incorporate all the life cycle costs in the present value calculations for either alternative, does not calculate and apply a true economic discount factor needed to derive present value that would be relevant to the time periods, and ultimately compares values that are not equivalent based on the time value of money. The result is that the model’s calculation of a cost-effectiveness ratio does not provide an accurate present value assessment of the alternatives. As previously discussed, to create comparable borrowing between the two alternatives, the model is intended to set the repayment stream (that is, any coupons plus maturity value) of the marketable Treasury security equal to the savings bond’s net cost to Treasury (that is, redemption value adjusted for administrative unit costs and tax revenue implications). However, the model calculation does not incorporate all the life cycle costs of the savings bond into the marketable Treasury security’s “present value” calculation — the initial administrative cost of issuing the savings bond (that is, unit cost to issue) is not included. 
In addition, the model calculation does not incorporate all the life cycle costs of the savings bond into the savings bond’s “present value” calculation — the redemption value paid to an investor and the final administrative cost of the savings bond (that is, unit cost to redeem) are not included. The present value of a bond (or bond price) is equal to the present value of its expected cash flows (that is, any coupons plus maturity value). As noted previously, BPD officials confirmed that the model measures what it terms the “present value” of the marketable Treasury security as equal to the redemption value of the savings bond, adjusted for Treasury’s unit cost of redemption and tax revenue implications, net of estimated “discounted” coupons. However, this calculation implicitly values current and future funds as the same. In addition, as previously discussed, the model treats coupons as if they reduce the benefit of the marketable Treasury security and therefore its “present value.” However, since the CMT already reflects the value of the coupons that Treasury is obligated to pay, reducing the benefit to Treasury essentially counts the coupons twice. Additionally, the construction of the discount factor in the model departs from OMB guidance since the model’s “discount factor” does not create an appropriate time value. Further, the model treats Treasury receipts from a savings bond, net of the administrative unit cost of issuance, as the “present value” of the savings bond. However, the model does not include or discount over time the savings bond’s redemption value in this calculation, and therefore the model does not reflect a time value associated with these funds, or present value as the term is used in OMB guidance or in general finance usage.
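The textbook relationship described here — a bond's present value equals its discounted expected cash flows — can be sketched with a constant discount rate. This is an illustrative function, not part of BPD's model:

```python
def bond_present_value(coupon, face, rate, periods):
    """Present value of a bond: each expected cash flow (any coupons
    plus maturity value) discounted at the per-period rate."""
    pv = sum(coupon / (1 + rate) ** t for t in range(1, periods + 1))
    return pv + face / (1 + rate) ** periods

# A bond whose coupon rate equals the discount rate prices at par:
print(round(bond_present_value(5, 100, 0.05, 2), 2))  # 100.0
```

A model that omits the redemption value from the savings bond side, or that subtracts coupons already reflected in the market rate, departs from this relationship.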
The savings bond cost-effectiveness model utilizes seven key parameters: administrative costs, historic redemption patterns, sales volume, savings bond yields, maturity period, equivalent marketable yield, and tax recovery. Since 1995, when BPD assumed responsibility for the model, it has made four model enhancements in an effort to better reflect changes in the savings bond program, three of which directly affect these parameters. Table 4 presents BPD’s changes to the three key model parameters since 1995. However, despite these enhancements, some of the data used to adjust the model’s parameters have not been updated and do not incorporate historical experience. In addition, the model contains other inaccuracies that could affect its reliability and accuracy. Finally, the model has not been subject to ongoing and periodic reviews by independent external reviewers, a common practice endorsed by OMB. The first enhancement affects a key calculation in the savings bond cost-effectiveness model: the redemption value, or future value, of the savings bond over time. These values form the basis for constructing a marketable Treasury security that is the alternative to savings bonds. The earlier bonds are redeemed, the less administrative costs are offset. Therefore, the accuracy of the model’s cost-effectiveness calculation depends heavily on the accuracy of predicted early redemptions. The key driver for the redemption values in the model is probability of redemption. The model accounts for redemption probabilities differently than in the original model transferred to BPD in 1995. According to BPD officials, the original model estimated redemption probabilities with the most recent 13 months of Series EE redemption data. When BPD assumed responsibility for the model, staff began estimating redemption probabilities for each denomination of Series EE savings bonds back to 1957.
The current model continues to incorporate the historical redemption patterns from 1957 to 1993. However, citing cost concerns, BPD has not updated the probabilities to reflect redemption patterns of the most recent 10 years. Since then, however, a wide variety of financial instruments have become available to investors, which could affect the patterns of redemption. Further, the redemptions do not reflect current interest rates. As previously discussed, the model also applies the redemption probabilities of Series EE bonds to similarly priced Series I bonds (that is, the redemption probabilities for a $100 Series EE bond and a $50 Series I bond are equal). BPD has not estimated the redemption probabilities of Series I bonds, introduced in 1998; therefore, the redemption probabilities applied in the Series I submodel have no direct relation to the Series I bond redemption patterns since 1998. The second model enhancement deals with the maturity period. The original model assumed the stated 30-year maturity date for both securities. BPD adjusted the current model to include an additional 20-year horizon beyond the stated maturities of savings bonds and marketable Treasury securities to account for those investors who hold on to the securities past the maturity date. However, according to a statement made by Treasury, the regulations governing savings bonds provide that bonds for which no claims have been filed within 10 years of the maturity date will be presumed to have been properly paid. The 20-year horizon enhancement appears to be inconsistent with this Treasury statement. Further, adding the 20 years appears to be inconsistent with OMB guidance for the alternatives to be compared over their stated life cycles, which for both alternatives should be the 30-year maturity period. Finally, the third enhancement to the model adjusts the tax recovery rate for savings bonds by 10 percent to account for the education bond program.
The 10 percent adjustment was an estimate since there was no program experience to guide the adjustment. The education bond benefit, which allows for the exclusion of interest earned subject to certain rules and limits, applies only to savings bonds. Adjusting the tax recovery rate for savings bonds to reflect this program is appropriate. However, the education bond program was introduced in 1990, providing program experience of at least 12 years. BPD has not analyzed whether the historical experience is consistent with the 10 percent adjustment and thus does not know whether the adjustment improves the model’s accuracy. A BPD official told us that the equivalent marketable yield, or CMT, has the strongest impact on model results. The CMT is the basis for BPD’s “discount factor” used to derive the “present value” of the alternative marketable Treasury security. In October 2001, however, Treasury discontinued the issuance of the 30-year Treasury bond, directly affecting how the discount rate is calculated. Beginning on February 18, 2002, Treasury ceased publication of the 30-year constant maturity series. Instead, Treasury publishes a Long-Term Average Rate and a linear extrapolation factor that can be added to the Long-Term Average Rate to allow interested parties to compute an estimated 30-year rate. BPD staff told us that BPD is still considering how to reflect this change in the model’s “discount factor.” The model coding contains additional inaccuracies that, in comparison to the present value inaccuracy, appear to have a minor impact. Also, the model’s use of older software and lack of controls over changes may allow additional errors to remain undetected. A coding error in the bonds redeemed calculation occurs in some denominations for both submodels. From the time of issuance through 4 months outstanding, the formula uses the probability of redemption for the month following the correct month.
BPD staff told us this error occurred during model maintenance by BPD staff. Additionally, BPD staff told us that this error would be corrected (app. IV describes the correction for this calculation). Another inaccuracy involves the redemption value calculation in the Series I submodel. The calculation there does not match the savings bond regulations since the redemption value does not reflect the accrued value at the beginning of each semiannual period. In addition, the savings bond cost-effectiveness model is maintained in older software. Use of the older software can be appropriate, but does increase the difficulty in maintaining and updating the model without introducing errors. As noted above, BPD acknowledged one such error. BPD staff told us that they have not moved the model into current software because of concerns that the software’s features used to calculate the scenario effects of program changes will not function properly. During the course of our review, we also found that the model contained unlabeled and undocumented data fields. BPD staff told us that these fields were remnants of scenarios that staff had run on the model in the past, which were accidentally left in the model when it was sent to us for review. One aspect of an effective general control and application environment is the protection of data, files, and programs from unauthorized access, modification, and destruction. BPD staff could, by saving scenario data in the master model file, inadvertently add, alter, or delete sensitive data or coding. While OMB guidance calls for an independent external review of cost-effectiveness models, as well as assessments of their accuracy and reliability, BPD has not commissioned such analysis. As a result, BPD cannot assess the accuracy and reliability of the model. OMB guidelines provide elements for a cost-effectiveness analysis and promote subjecting such analyses to independent external reviews.
Verification and explicit assumptions are two of the four elements OMB identified for a cost-effectiveness analysis. OMB states that verification through retrospective studies, to determine whether anticipated benefits and costs have been realized, is potentially valuable. Such studies can be used to determine necessary corrections in existing programs, and to improve future estimates of benefits and costs in these programs. Agencies should have a plan for periodic, results-oriented evaluation of program effectiveness. OMB adds that a cost-effectiveness analysis should be explicit about the underlying assumptions used to arrive at estimates of future benefits and costs and include a statement of the assumptions, the rationale behind them, and a review of their strengths and weaknesses. Key data and results should be reported to promote independent analysis and review. OMB guidance also acknowledges that estimates are typically uncertain because of imprecision in both underlying data and modeling assumptions and states that analyses should attempt to characterize the sources and nature of uncertainty. In analyzing uncertain data, objective estimates of probabilities, such as those derived from market data, should be used whenever possible. Any limitations of the analysis because of uncertainty or biases surrounding the data or assumptions should be discussed. In addition, major assumptions should be varied and net present value and other outcomes recomputed to determine how sensitive outcomes are to changes in the assumptions. In general, sensitivity analysis should be considered for estimates of benefits and costs, the discount rate, the general inflation rate, and distributional assumptions of probabilities. Models used in the analysis should be well documented and, where possible, available to facilitate independent review. BPD has not had the cost-effectiveness model independently verified.
According to BPD officials, a survey and investigations team from the House Committee on Appropriations, which visited BPD’s Parkersburg, West Virginia, location in 1996, conducted the only review of the savings bond cost-effectiveness model. Since the team did not initiate further inquiry, BPD officials said they assumed that the team had found no issues requiring further review and discussion. Although BPD and Treasury officials have maintained in congressional testimonies and in a recent report that the model results are accurate, to date neither BPD nor Treasury has requested an independent external review to validate the savings bond cost-effectiveness model. Further, while BPD has used the model to estimate the potential effects of changes in the savings bond program, it has not sought to conduct any sensitivity analysis that could reveal the model’s limitations. Our review of BPD’s savings bond cost-effectiveness model indicates that the model’s results do not provide BPD, Treasury, OMB, or Congress appropriate information to assess the relative costs of the savings bond program versus marketable Treasury securities as a source of raising funds. Although the model was intended to compare savings bonds and marketable Treasury securities on a present value basis, the model’s comparison is not based on present values and thus does not follow OMB guidance and common financial economics practice. As previously discussed, a discount factor brings future costs and revenues into present value terms to permit comparisons. While the model calculates a value that BPD terms a “discount factor,” the calculation is incorrect and, as a result, the model does not correctly calculate the present value of the alternatives. In addition, this calculated value is not applied consistent with the model’s conceptual design and OMB guidance. 
Therefore the cost-effectiveness ratio that the model creates does not provide BPD with the information it needs to assess the relative costs of the savings bond program and marketable Treasury securities to determine which financing approach offers a greater financial benefit to Treasury. The model also uses data that may not be reliable. In particular, the probabilities of redemption for the Series EE bond are 10 years out of date and BPD has not estimated any probabilities of redemption that have any direct relation to the Series I bond redemption patterns since 1998 when these bonds were first introduced. The model also incorporates a time horizon that extends beyond the life cycles of either security and distorts the cost-effectiveness analysis, allowing a longer time period for the administrative costs of savings bonds to be offset. Finally, the reduction in the tax recovery rate to reflect the education bond program is not based on actual program experience and may be over- or underestimating the financial impact of the program to Treasury. Given that the model uses data that may not be reliable and BPD has not decided how to reflect the discontinuance of the 30-year constant maturity series in the model’s “discount factor,” we did not make corrections to the model. As a result, we do not know to what degree the present value errors and these data affect the model’s cost-effectiveness ratio. BPD’s ability to assess the impact of policy changes to the savings bond program, project cost centers for the savings bond program, and determine cost-effectiveness in relation to marketable Treasury securities is hampered by fundamental errors in the present value calculations. When combined with model data that may not be reliable, the need for independent reviews of cost-effectiveness and sensitivity analyses, which are called for in OMB Circular A-94, becomes particularly important. 
Because of the importance of measuring the cost-effectiveness of financing mechanisms used to fund the operations of the federal government, we recommend that the Secretary of the Treasury direct that the Commissioner of the Public Debt, in conjunction with Treasury’s Office of Domestic Finance, revise the savings bond cost-effectiveness model to estimate the relative (or net) present value of the life cycle costs of issuing savings bonds versus marketable Treasury securities. As part of that revision, the Commissioner should do the following:

- Update the Series EE probabilities of redemption to capture any changes in redemption patterns caused by the proliferation of financial products or interest rate changes in the last 10 years. At a minimum, Treasury and BPD should collect data for a sample of the more recent time period to test the validity of the 1957-93 data.
- Base Series I bond redemption patterns on actual experience with those bonds.
- Validate the cost estimate of education bond program participation based on the historical, 12-year data to date.
- Replace the 30-year equivalent marketable rate.
- Update the software used for the model to enhance BPD’s ability to maintain the model and protect against unauthorized modification.
- Put in place a process for ongoing verification, sensitivity analysis, and independent external review of the model.

In a June 4, 2003, letter commenting on a draft of this report, the Commissioner of the Public Debt wrote that the cost-effectiveness model conformed to OMB Circular A-94, sec. 5b, since it “measures Treasury’s relative financial benefit from two borrowing options whose overall costs are identical.
Treasury’s benefit from each alternative is the amount of financing realized at the time borrowing occurs.” Noting that the model “is not intended to be a classic present value exercise,” the Commissioner explained that the model “compares the present value of a projected stream of payments associated with the sale of savings bonds with the amount realized from the sale.” BPD suspects it may have inadvertently misused the terms “discount factor” and “present value” in internal Treasury discussions. Further, BPD said that we did not understand the model’s life-cycle duration, the minimum holding period, and the Series I redemption values. The Commissioner noted that while BPD disagreed with our conclusion that the model’s comparisons were invalid, BPD generally agreed with our recommendations for updating the model. However, the Commissioner also noted that, with Treasury’s goal of moving toward a totally electronic environment for the savings bond program, “we think it's appropriate for us to shelve the existing model, which is based on paper bonds, and focus our attention on the transition to a fully electronic program.” The model compares the amount of funds raised by selling a given amount of Series EE and I bonds in various denominations and the present value of the future costs to Treasury in connection with issuing the bonds. If the amount raised is greater than the present value of the future costs associated with the bonds, taking into account administrative costs and tax benefits, then the program is deemed to be cost-effective. While we agree that an approach comparing the benefits of two approaches having identical costs would be a valid alternative to the present value approach that we described in our report, the model’s calculations do not support this analysis. 
Marketable Treasury securities and savings bonds can be compared by either comparing the costs of raising identical sums using two alternative debt instruments or by comparing the funds raised when the costs of the two instruments are identical. However, if BPD were to base the cost-effectiveness model on a comparison of the “financial benefit from two borrowing options whose overall costs are identical,” the key issue that this approach would have to address is that marketable Treasury securities and savings bonds do not necessarily have identical overall costs. The challenge of this modeling approach would be to appropriately measure costs over time of the two options in a way that would permit treating the costs as identical. Since the costs vary over time, accurately calculating the present value of the costs of the two options would be an essential step. However, we did not find, for reasons noted in the report, that the model accurately or reliably calculates the present value of the stream of costs associated with the sale of savings bonds. In particular, the report explains that the discount rate used to calculate the present value of the projected stream of costs is inappropriate. We have changed the report to recognize that BPD allocates a small number of redemptions prior to 6 months to reflect hardship waivers; these waivers are not discussed in BPD’s model documentation. We have not, however, changed the report in response to the Commissioner’s statements regarding life cycle costs and Series I redemption values. As the report notes, the 20-year extension on the life cycle appears to be inconsistent with regulations that provide that bonds for which no claims have been filed within 10 years of the maturity date are presumed to have been properly paid. Our analysis of the Series I savings bond redemption value found that the formula does not correctly recognize the savings bond’s accrued value at the beginning of each semiannual period. 
The Commissioner’s letter noted that BPD generally agrees with the recommendations for updating the model, but that BPD believes it appropriate to “shelve” the existing model and focus on the transition to a fully electronic retail securities program. We agree that the importance of many administrative costs will decline if BPD successfully transforms the current paper-based savings bond program to an electronic environment. Further, there may be changes in investors’ purchases and redemptions of savings bonds in an electronic environment. However, important differences will continue to exist between the costs of savings bonds and marketable Treasury securities, particularly the payment of coupons and the tax treatment of the two debt instruments. A model that accurately recognizes these differences will continue to be as crucial to understanding the relative cost-effectiveness of the two debt instruments as it is in the current paper-based environment. That model, furthermore, will have to be updated regularly to reflect the effect of any changes in investor preferences and behavior on the savings bond program as it moves into this new electronic environment. Our recommendations will remain appropriate for assessing the cost-effectiveness of the savings bond program managed in an electronic environment. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from its date. At that time, we will send copies to the Secretary of the Treasury, the Treasury Under Secretary for Domestic Finance, and the Commissioner of the Public Debt. Copies will be made available to others upon request. In addition, this report is also available at no charge on GAO’s Web site at http://www.gao.gov. If you have any further questions, please call me at (202) 512-8678 or [email protected] or James M. McDermott, Assistant Director, at (202) 512- 5373 or [email protected]. 
To assess the savings bond cost-effectiveness model, we obtained an electronic copy of the model as of fiscal year 2001 in addition to hard-copy background and supporting documentation. Given that the model is maintained in older software, Lotus 1-2-3 version 5, we reviewed the model using the same program and version to avoid corruption or translation errors. We then identified and reviewed the various regulations regarding the Series EE and Series I savings bond structure on which the cost-effectiveness model is based. In addition, we reviewed relevant portions of Internal Revenue Service publications regarding the tax implications of the savings bond program. We compared these information sources with the model coding to verify that the calculations reflect the structure of the savings bond program. To determine if the model constructed a present value comparison, we analyzed the model coding and supplied documentation to determine if (1) the model’s design matched Office of Management and Budget and conventional approaches and (2) the model’s calculations accurately implement the model’s design to arrive at a present value comparison. Given that the model calculations did not result in a present value comparison, we did not assess the accuracy or completeness of the data input used in the various model parameters and assumptions. As a result, we do not know what effect such data had on the model’s cost-effectiveness calculation. We conducted our work in Washington, D.C., from September 2002 through April 2003 in accordance with generally accepted government auditing standards. The present value of a bond (or bond price) is equal to the present value of its expected cash flows (any coupons plus maturity value). Each cash flow must be discounted at the relevant rate for the time period:

PV = CF_1 / (1 + r_1) + CF_2 / [(1 + r_1)(1 + r_2)] + ... + CF_n / [(1 + r_1)(1 + r_2) ... (1 + r_n)]

In many explanations of present value, the discount rate is held constant, removing the need for r_t to vary over time.
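The discounting described here, in which each cash flow is discounted at the rates relevant to its time period, can be sketched as follows. This is an illustration of the general formula with assumed cash flows and rates, not the model's coding:

```python
from math import prod

def present_value(cash_flows, rates):
    """Discount cash flow CF_t by the compounded per-period rates
    (1 + r_1)(1 + r_2)...(1 + r_t); cash_flows[t] is paid at the end
    of period t+1 and rates[t] applies to that period."""
    return sum(cf / prod(1 + r for r in rates[:t + 1])
               for t, cf in enumerate(cash_flows))

# Two annual cash flows of 100, discounted at 4% in year 1 and 6% in year 2
pv = present_value([100, 100], [0.04, 0.06])
print(round(pv, 2))  # 186.87

# With a constant rate, this reduces to the familiar sum of CF_t / (1+r)^t
assert abs(present_value([100, 100], [0.05, 0.05])
           - (100 / 1.05 + 100 / 1.05**2)) < 1e-9
```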
The price of a coupon bond at issuance can then be written as

P_0 = coupon / (1 + r_1) + coupon / [(1 + r_1)(1 + r_2)] + ... + (coupon + face) / [(1 + r_1)(1 + r_2) ... (1 + r_n)],

in which each cash flow is discounted by the product of the per-period rates through its payment date, (1 + r_1)(1 + r_2) ... (1 + r_t) for some t > 2, for example. Based on the above table, the following illustrations assume that the periods correspond to years (for example, t = 1 represents the end of the first year). In the illustrations above, the rate or rates used to discount each component are relevant to the time period.

Model Calculation for the “Present Value” of the Marketable Treasury

Based on model documentation, the coupon payments are subtracted to reflect that the coupon payments on the Treasury alternative would reduce the benefit Treasury would receive from the initial security. The model estimates each period’s coupon as Redemption Value_t * r_t and “discounts” these amounts using terms of the form (1 + r_1), (1 + r_2), and (1 + r_3). The discounting is incorrect in the model, and is carried through for periods two and greater. Appendix IV provides a more detailed discussion of the model calculations, including the monthly after-tax “discount rate” mentioned above. Series EE – 31 C.F.R. § 351.2(k)(1)(iii); Series I – 31 C.F.R. § 359.2(e)(1)(vi). Example: An I composite rate of 5.07 percent will result in a newly purchased hypothetical $25 bond increasing in value after 6 months to $25.63, when rounded to the nearest cent.
At the beginning of the first semiannual rate period, the PV is equal to $25, such that

Month 1: FV = 25 * (1 + 0.0507/2)^(1/6) = 25.10
Month 2: FV = 25 * (1 + 0.0507/2)^(2/6) = 25.21
Month 3: FV = 25 * (1 + 0.0507/2)^(3/6) = 25.31
Month 4: FV = 25 * (1 + 0.0507/2)^(4/6) = 25.42
Month 5: FV = 25 * (1 + 0.0507/2)^(5/6) = 25.53
Month 6: FV = 25 * (1 + 0.0507/2)^(6/6) = 25.63

Thus, a $5,000 bond purchased at the same time as the hypothetical $25 bond will be worth $5,126 after 6 months (200 * $25.63 = $5,126). The PV variable changes in months 7 through 12 such that the PV is equal to the redemption value at the beginning of the semiannual rate period, which in the above example is equal to the month 6 FV, such that

Month 7: FV = 25.63 * (1 + 0.0507/2)^(1/6) = 25.74
Month 8: FV = 25.63 * (1 + 0.0507/2)^(2/6) = 25.84
Month 9: FV = 25.63 * (1 + 0.0507/2)^(3/6) = 25.95
Month 10: FV = 25.63 * (1 + 0.0507/2)^(4/6) = 26.06
Month 11: FV = 25.63 * (1 + 0.0507/2)^(5/6) = 26.17
Month 12: FV = 25.63 * (1 + 0.0507/2)^(6/6) = 26.28

The PV variable changes in months 13 through 18 such that the PV is equal to the redemption value at the beginning of the semiannual rate period, which in the above example is equal to the month 12 FV, and so on. The cost-effectiveness calculations in the Series EE and Series I submodels, as well as preceding steps, are similar; coding changes are due to the different structures of the two series, as noted in table 3, and modeling errors, as previously discussed. Though not shown here, both submodel calculations for savings bond redemption value incorporate the 3-month interest penalty for bonds redeemed before 5 years from issue.

Calculations for Marketable Treasury “Present Value” — Five Steps

Where n = months outstanding:

Step 1: Savings bond redemption value respective to denomination (as previously shown in table 3).
Step 2: Savings bond redemption value after unit cost to redeem and tax revenue implications = Adjusted Redemption Value:

Adjusted Redemption Value = Redemption Value – ((Redemption Value – savings bond issue price) * savings bond tax recovery rate) + unit cost to redeem

Step 3: Algebraic after-tax “present value discount factor” (comprising several component calculations):

3a. after-tax semiannual CMT = (6-month average of CMT / 2) * (1 – tax recovery rate)
3b. after-tax semiannual CMT expressed as a monthly rate: Y = ((after-tax semiannual CMT + 1) ^ (1/6)) – 1
3c. Y algebraic conversion leading to the after-tax “present value discount factor”:
3c.1. S = 1 / (1 + Y)
3c.2. Month 1 FAC (that is, FAC_1) = S; for all other n, FAC_n = S * (FAC_n-1 + 1)
3d. After-tax “present value discount factor” = FACT = 1 – (Y * FAC)

Step 4: Algebraic after-tax “present value discount factor” expressed as a monthly rate: Ie = ((1/FACT) ^ (1/n)) – 1

Step 5: Marketable Treasury “present value” = Adjusted Redemption Value / (1 + Ie)^n; the resulting marketable Treasury “present value” equals Adjusted Redemption Value * FACT, which returns the following:

Month 1 marketable Treasury “present value” = Adjusted Redemption Value – (Adjusted Redemption Value * Y) / (1 + Y), and so on.

The monthly comparison is then: Savings bond “present value” – marketable Treasury “present value”.

The redemption projections feeding these calculations are:

Month 0 bonds redeemed = probability of redemption * savings bond sales volume respective to denomination
Month n bonds redeemed = probability of redemption * (savings bond sales volume respective to denomination – bonds redeemed through month n-1)
Month 0 bonds outstanding = sales volume respective to denomination – bonds redeemed_0
Month n bonds outstanding = bonds outstanding_n-1 – bonds redeemed_n

Step 6: Cost-effectiveness ratio for each denomination (comprising several component calculations):

6a. (cumulative projected “present value” difference over 50 years – pieces outstanding at 50 years * savings bond unit cost to issue)
6b. (sales volume respective to denomination * savings bond issue price)
6c. (result of 6a / result of 6b) * 1,000

The resulting cost-effectiveness calculation returns a ratio of millions saved per $1 billion borrowed.
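Under our reading of steps 3 through 5 above, the "discount factor" algebra can be reproduced and checked numerically. This is a sketch with an assumed monthly rate Y, not the Lotus 1-2-3 coding itself:

```python
def fact(Y, n):
    """Steps 3c-3d as sketched above: S = 1/(1+Y); FAC_1 = S,
    FAC_k = S * (FAC_{k-1} + 1); FACT_n = 1 - Y * FAC_n."""
    S = 1 / (1 + Y)
    fac = S
    for _ in range(n - 1):
        fac = S * (fac + 1)
    return 1 - Y * fac

Y = 0.002   # assumed after-tax monthly rate, for illustration only
n = 360     # 30 years of monthly periods

f = fact(Y, n)

# Step 4 re-expresses FACT as a monthly rate Ie; step 5 then divides by
# (1 + Ie)^n, which simply recovers FACT -- the two steps cancel out.
Ie = (1 / f) ** (1 / n) - 1
assert abs(1 / (1 + Ie) ** n - f) < 1e-9

# Numerically, the recursion collapses to ordinary compound discounting at Y.
assert abs(f - 1 / (1 + Y) ** n) < 1e-9
```

The last assertion suggests the constructed factor behaves like 1/(1+Y)^n; whether that is an appropriate factor for the model to apply is a separate question addressed in the report.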
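The semiannual accrual pattern illustrated earlier, in which the base value resets to the rounded redemption value at the start of each semiannual rate period, can also be sketched in code. This is an illustrative function built from the worked $25 example, not BPD's submodel code:

```python
def series_i_value(composite_rate, month, base=25.00):
    """Value of the hypothetical $25 Series I bond after `month` months:
    the base resets to the rounded redemption value at the start of each
    semiannual rate period, then accrues within the current period."""
    semiannual = 1 + composite_rate / 2
    for _ in range(month // 6):          # completed semiannual periods
        base = round(base * semiannual, 2)
    return round(base * semiannual ** ((month % 6) / 6), 2)

# Reproduces the worked example at a 5.07 percent composite rate:
print(series_i_value(0.0507, 6))    # 25.63
print(series_i_value(0.0507, 7))    # 25.74
print(series_i_value(0.0507, 12))   # 26.28
```

A formula that accrues from the original $25 base throughout, rather than resetting at each semiannual period, would diverge slightly from these values, which is the nature of the Series I submodel inaccuracy noted in the report.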
Additional Calculations That Are Not Relevant to the Model’s Cost-effectiveness Calculation

Though not detailed above, the model includes five calculations that do not produce output relevant to the cost-effectiveness calculation. In addition, the model performs an additional step in the after-tax “present value discount factor” calculation that is not necessary. The model creates an after-tax “present value discount factor” expressed as a monthly rate, shown above as step 4 in the calculations for the marketable Treasury “present value.” Step 5, as shown above in the calculations for the marketable Treasury “present value,” reverses this calculation through the 30-year life cycle of the marketable Treasury alternative. The following are GAO's comments on the Bureau of the Public Debt's (BPD) letter dated June 4, 2003. 1. As we note in this report's Agency Comments and Our Evaluation section, the approach that BPD outlines here would be an appropriate alternative to the cost-effectiveness model based on a present value analysis described in the report. The description in this report is based on documentation for the cost-effectiveness model that BPD provided in an October 16, 2002, meeting; on a July 2002 report that BPD prepared for the House Committees on Appropriations and on Financial Services; and on March 2002 testimony by the Commissioner of the Public Debt before the House Appropriations Subcommittee on Treasury, Postal Service, and General Government. 2. As we note in the report, the 20-year extension is not consistent with a statement previously made by the Department of the Treasury regarding the presumption of payment 10 years beyond the maturity date. We agree that all administrative costs are included in the model. As the report notes, however, the model calculation does not accurately incorporate these costs in computing the present value of the marketable Treasury security and the savings bond. 3.
Based on BPD's explanation that redemptions within 6 months of a savings bond's issuance are sometimes granted in hardship cases, we have deleted discussion of their inclusion in the model from the report. 4. As we note in the report, the model's compound interest formula for the Series I bonds does not recognize the bond's accrued value at the beginning of each semiannual period. When calculated out over 30 years, the difference between the formula in the regulation and the model's calculation is minor but still exists. In addition to those named above, Heather T. Dignan, Mitchell B. Rachlis, and Barbara M. Roesmann made key contributions to this report.
To have GAO e-mail this list to you every afternoon, go to www.gao.gov and select “Subscribe to GAO Mailing Lists” under the “Order GAO Products” heading.
While the Treasury generally pays lower interest rates on U.S. Savings Bonds than it does on other forms of borrowing from the public, it also incurs substantially higher administrative costs to issue and redeem the paper savings bond certificates. To determine whether these higher administrative costs exceed its interest rate savings, Treasury's Bureau of the Public Debt uses a spreadsheet model to compare the costs of issuing Series EE and Series I savings bonds with those of issuing marketable Treasury securities. GAO was asked to review this model to judge its reliability in measuring the relative costs of Treasury's borrowing alternatives. Treasury has several alternative vehicles for issuing debt to the public. A substantial majority of that debt is issued in the form of marketable Treasury securities. U.S. Savings Bonds today account for about 3 percent of total Treasury securities outstanding. A majority of these bonds have lower minimum denominations or face amounts than marketable Treasury securities and generally pay lower interest rates as well, but they carry the same full faith and credit of the United States, making them an alternative for investors unable or unwilling to pay the minimum denominations of marketable Treasury securities. Savings bonds continue to be issued as paper certificates, rather than through the "book entry" system used for marketable Treasury securities; this increases the administrative costs of issuing, servicing, and redeeming savings bonds relative to marketable securities. The cost-effectiveness of the savings bond program depends on whether Treasury's savings--in terms of the generally lower interest payments on savings bonds relative to marketable Treasury securities--exceed the costs that Treasury incurs in processing the paper savings bond certificates.
The question is complicated by the fact that the interest savings occur over the life of a savings bond, and that Treasury pays costs upfront at issuance and in the future when the savings bond is redeemed. As prescribed by the Office of Management and Budget and common financial practice, in dealing with savings or costs over time, the value of future savings or costs must be discounted to present value. Treasury has reported that its cost-effectiveness model does calculate the present values of the relative costs of savings bonds and marketable Treasury securities. However, because of flaws in the design and implementation of the spreadsheet used to calculate these present values, the cost-effectiveness model's results do not provide the Bureau of the Public Debt, Treasury, or Congress with accurate information that is needed to assess the relative costs of issuing debt through savings bonds or marketable Treasury securities, or to manage the savings bond program. Further, the bureau has not updated some key data elements in the cost-effectiveness model. In particular, citing budget considerations, the bureau uses data on the redemption patterns for savings bonds that date back to 1993, which do not reflect the effects of the wide variety of financial instruments now available to investors.
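The present value comparison at the heart of this analysis can be sketched in a few lines of code. The interest rates, discount rate, holding period, and administrative cost figures below are illustrative assumptions chosen for the example, not inputs from BPD's actual model.

```python
# Illustrative present value comparison of Treasury's costs for a savings
# bond versus a marketable security. All rates and cost figures below are
# hypothetical assumptions, not figures from BPD's cost-effectiveness model.

def present_value(cash_flows, discount_rate):
    """Discount a list of (year, amount) cash flows back to issuance."""
    return sum(amount / (1 + discount_rate) ** year for year, amount in cash_flows)

discount_rate = 0.04   # assumed annual discount rate
face_amount = 1000.0
years_held = 30

# Savings bond: admin cost at issuance and at redemption, plus interest
# (assumed 3%, compounded annually) paid in full at redemption.
bond_interest = face_amount * ((1 + 0.03) ** years_held - 1)
bond_flows = [(0, 5.00), (years_held, 5.00), (years_held, bond_interest)]

# Marketable security: lower admin cost at issuance, but higher interest
# (assumed 4%) paid as annual coupons over the same horizon.
marketable_flows = [(0, 0.50)] + [
    (t, face_amount * 0.04) for t in range(1, years_held + 1)
]

pv_bond = present_value(bond_flows, discount_rate)
pv_marketable = present_value(marketable_flows, discount_rate)

print(f"PV of savings bond costs: {pv_bond:,.2f}")
print(f"PV of marketable security costs: {pv_marketable:,.2f}")
```

A sound model discounts every future cash flow back to issuance before comparing the alternatives; flaws in the discount factor of the kind described above distort exactly this comparison.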
Critical infrastructures are physical or virtual systems and assets so vital to the nation that their incapacitation or destruction would have a debilitating impact on national and economic security and on public health and safety. These systems and assets—such as the electric power grid, chemical plants, and water treatment facilities—are essential to the operations of the economy and the government. Recent terrorist attacks and threats have underscored the need to protect our nation’s critical infrastructures. If vulnerabilities in these infrastructures are exploited, our nation’s critical infrastructures could be disrupted or disabled, possibly causing loss of life, physical damage, and economic losses. Although the vast majority of our nation’s critical infrastructures are owned by the private sector, the federal government owns and operates key facilities that use control systems, including oil, gas, water, energy, and nuclear facilities. Control systems are computer-based systems that are used within many infrastructures and industries to monitor and control sensitive processes and physical functions. Typically, control systems collect sensor measurements and operational data from the field, process and display this information, and relay control commands to local or remote equipment. Control systems perform functions that range from simple to complex. They can be used to simply monitor processes—for example, the environmental conditions in a small office building—or to manage the complex activities of a municipal water system or a nuclear power plant. In the electric power industry, control systems can be used to manage and control the generation, transmission, and distribution of electric power. For example, control systems can open and close circuit breakers and set thresholds for preventive shutdowns. 
The oil and gas industry uses integrated control systems to manage refining operations at plant sites, remotely monitor the pressure and flow of gas pipelines, and control the flow and pathways of gas transmission. Water utilities can remotely monitor well levels and control the wells’ pumps; monitor flows, tank levels, or pressure in storage tanks; monitor water quality characteristics such as pH, turbidity, and chlorine residual; and control the addition of chemicals to the water. Installing and maintaining control systems requires a substantial financial investment. DOE cites research estimating the value of the control systems used to monitor and control the electric grid and the oil and natural gas infrastructure at $3 billion to $4 billion. The thousands of remote field devices represent an additional investment of $1.5 billion to $2.5 billion. Each year, the energy sector alone spends over $200 million for control systems, networks, equipment, and related components and at least that amount in personnel costs. There are two primary types of control systems: distributed control systems and supervisory control and data acquisition (SCADA) systems. Distributed control systems typically are used within a single processing or generating plant or over a small geographic area, while SCADA systems typically are used for large, geographically dispersed operations. For example, a utility company may use a distributed control system to manage power generation and a SCADA system to manage its distribution. 
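The kind of threshold-based supervision described above, such as raising an alarm when pipeline pressure or chlorine residual leaves a safe band, can be illustrated with a minimal sketch. The sensor names and safe ranges here are hypothetical, not drawn from any actual utility's system.

```python
# Minimal sketch of supervisory threshold logic of the kind described above.
# Sensor names and safe ranges are hypothetical examples.

SAFE_RANGES = {
    "pipeline_pressure_psi": (200.0, 800.0),
    "tank_level_pct": (10.0, 95.0),
    "chlorine_residual_mg_l": (0.2, 4.0),
}

def evaluate(readings):
    """Compare field readings to their safe ranges and return alarm actions."""
    actions = []
    for sensor, value in readings.items():
        low, high = SAFE_RANGES[sensor]
        if value < low:
            actions.append((sensor, "ALARM_LOW"))
        elif value > high:
            actions.append((sensor, "ALARM_HIGH"))
    return actions

readings = {
    "pipeline_pressure_psi": 850.0,   # above the safe range
    "tank_level_pct": 60.0,           # within the safe range
    "chlorine_residual_mg_l": 0.1,    # below the safe range
}

for sensor, action in evaluate(readings):
    print(f"{action}: {sensor}")
```

In an actual SCADA deployment, the readings would arrive from remote field devices, and the host could relay control commands, such as a preventive pump shutdown or a breaker trip, in response to the alarms rather than simply printing them.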
A SCADA system is generally composed of six components: (1) instruments, which sense conditions such as pH, temperature, pressure, power level, and flow rate; (2) operating equipment, which includes pumps, valves, conveyors, and substation breakers; (3) local processors, which communicate with the site’s instruments and operating equipment, collect instrument data, and identify alarm conditions; (4) short-range communications, which carry analog and discrete signals between the local processors and the instruments and operating equipment; (5) host computers, where a human operator can supervise the process, receive alarms, review data, and exercise control; and (6) long-range communications, which connect local processors and host computers using, for example, leased phone lines, satellite, and cellular packet data. Several key federal plans focus on securing critical infrastructure control systems. The National Strategy to Secure Cyberspace calls for DHS and DOE to work in partnership with industry to develop best practices and new technology to increase the security of critical infrastructure control systems, to determine the most critical control systems-related sites, and to develop a prioritized plan for short-term cyber security improvements for those sites. In addition, DHS’s National Infrastructure Protection Plan specifically identifies control systems as part of the cyber infrastructure, establishes an objective of reducing vulnerabilities and minimizing the severity of attacks on these systems, and identifies programs directed at protecting control systems. Further, in May 2007, the critical infrastructure sectors issued sector-specific plans to supplement the National Infrastructure Protection Plan. Twelve sectors, including the chemical, energy, water, information technology, postal, emergency services, and telecommunications sectors, identified control systems within their respective sectors.
Of these, most identified control systems as critical to their sector and listed efforts under way to help secure them. Cyber threats can be intentional or unintentional, targeted or nontargeted, and can come from a variety of sources. Intentional threats include both targeted and nontargeted attacks, while unintentional threats can be caused by software upgrades or maintenance procedures that inadvertently disrupt systems. A targeted attack occurs when a group or individual specifically attacks a critical infrastructure system; a nontargeted attack occurs when the intended target of the attack is uncertain, such as when a virus, worm, or malware is released on the Internet with no specific target. There is increasing concern among both government officials and industry experts regarding the potential for a cyber attack on a national critical infrastructure, including the infrastructure’s control systems. The Federal Bureau of Investigation has identified multiple sources of threats to our nation’s critical infrastructures, including foreign nation states engaged in information warfare, domestic criminals, hackers, and virus writers, and disgruntled employees working within an organization. Control systems are vulnerable to flaws or weaknesses in system security procedures, design, implementation, and internal controls. When these weaknesses are accidentally triggered or intentionally exploited, they could result in a security breach. Vulnerabilities could occur in control systems’ policies, platform (including hardware, operating systems, and control system applications), or networks. Federal and industry experts believe that critical infrastructure control systems are more vulnerable today than in the past due to the increased standardization of technologies, the increased connectivity of control systems to other computer networks and the Internet, insecure connections, and the widespread availability of technical information about control systems.
Further, it is not uncommon for control systems to be configured with remote access through either a dial-up modem or over the Internet to allow remote maintenance or around-the-clock monitoring. If control systems are not properly secured, individuals and organizations may eavesdrop on or interfere with these operations from remote locations. Reported attacks and unintentional incidents involving critical infrastructure control systems demonstrate that a serious attack could be devastating. Although there is not a comprehensive source for incident reporting, the following examples, reported in government and media sources, demonstrate the potential impact of an attack. Bellingham, Washington, gasoline pipeline failure. In June 1999, 237,000 gallons of gasoline leaked from a 16-inch pipeline and ignited an hour and a half later, causing three deaths, eight injuries, and extensive property damage. The pipeline failure was exacerbated by poorly performing control systems that limited the ability of the pipeline controllers to see and react to the situation. Maroochy Shire sewage spill. In the spring of 2000, a former employee of an Australian software manufacturing organization applied for a job with the local government, but was rejected. Over a 2-month period, this individual reportedly used a radio transmitter on as many as 46 occasions to remotely break into the controls of a sewage treatment system. He altered electronic data for particular sewerage pumping stations and caused malfunctions in their operations, ultimately releasing about 264,000 gallons of raw sewage into nearby rivers and parks. CSX train signaling system. In August 2003, the Sobig computer virus shut down train signaling systems throughout the East Coast of the United States. The virus infected the computer system at CSX Corporation’s Jacksonville, Florida, headquarters, shutting down signaling, dispatching, and other systems. According to an Amtrak spokesman, 10 Amtrak trains were affected. 
Train service was either shut down or delayed up to 6 hours. Los Angeles traffic lights. According to several published reports, in August 2006, two Los Angeles city employees hacked into computers controlling the city’s traffic lights and disrupted signal lights at four intersections, causing substantial backups and delays. The attacks were launched prior to an anticipated labor protest by the employees. Harrisburg, Pennsylvania, water system. In October 2006, a foreign hacker penetrated security at a water filtering plant. The intruder planted malicious software that was capable of affecting the plant’s water treatment operations. The infection occurred through the Internet and did not seem to be a direct attack on the control system. Browns Ferry power plant. In August 2006, two circulation pumps at Unit 3 of the Browns Ferry, Alabama, nuclear power plant failed, forcing the unit to be shut down manually. The failure of the pumps was traced to excessive traffic on the control system network, possibly caused by the failure of another control system device. As control systems become increasingly interconnected with other networks and the Internet, and as the system capabilities continue to increase, so do the threats, potential vulnerabilities, types of attacks, and consequences of compromising these critical systems. Industry-specific organizations in various sectors, including the electricity, oil and gas, and water sectors, have initiatives under way to help improve control system security, including developing standards and publishing guidance. Our report being released today provides a detailed list of industry initiatives; several of these initiatives are described below. Electricity. 
In 2007, the North American Electric Reliability Corporation began implementing cyber security reliability standards that apply to control systems and the Institute of Electrical and Electronics Engineers has several standards working groups addressing issues related to control systems security in the industry. Oil and gas. The American Gas Association supported development of a report that would recommend how to apply encryption to protect gas utility control systems; and, over the past three years, the American Petroleum Institute has published two standards related to pipeline control systems integrity and security and the design and implementation of control systems displays. Water. The water sector includes about 150,000 water, wastewater, and storm water organizations at all levels of government and has worked with the Environmental Protection Agency on development of the Water Sector-Specific Plan, which includes some efforts on control systems security. In addition, the Awwa Research Foundation is currently working on two research projects related to the cyber security of water utility SCADA systems. Over the past few years, federal agencies— including DHS, DOE, and others—have initiated efforts to improve the security of critical infrastructure control systems. For example, DHS is sponsoring multiple control systems security initiatives, including the Control System Cyber Security Self Assessment Tool, an effort to improve control systems’ cyber security using vulnerability evaluation and response tools, and the Process Control System Forum, to build relationships with control systems’ vendors and infrastructure asset owners. Additionally, DOE sponsors control systems security efforts within the electric, oil, and natural gas industries. These efforts include the National SCADA Test Bed Program, which funds testing, assessments, and training in control systems security, and the development of a road map for securing control systems in the energy sector. 
Our report being released today provides a more detailed list of initiatives being led by federal agencies. DHS, however, has not yet established a strategy to coordinate the various control systems activities across federal agencies and the private sector. In 2004, we recommended that DHS develop and implement a strategy for coordinating control systems security efforts among government agencies and the private sector. DHS agreed and issued a strategy that focused primarily on DHS’s initiatives. The strategy does not include ongoing work by DOE, the Federal Energy Regulatory Commission, NIST, and others. Further, it does not include the various agencies’ responsibilities, goals, milestones, or performance measures. Until DHS develops an overarching strategy that delineates various public and private entities’ roles and responsibilities and uses it to guide and coordinate control systems security activities, the federal government and private sector risk investing in duplicative activities and missing opportunities to learn from other organizations’ activities. Further, DHS is responsible for sharing information with critical infrastructure owners on control systems vulnerabilities, but lacks a rapid, efficient process for disseminating sensitive information to private industry owners and operators of critical infrastructures. An agency official noted that sharing information with the private sector can be slowed by staff turnover and vacancies at DHS, the need to brief agency and executive branch officials and congressional staff before briefing the private sector, and difficulties in determining the appropriate classification level for the information. 
Until the agency establishes an approach for rapidly assessing the sensitivity of vulnerability information and disseminating it—and thereby demonstrates the value it can provide to critical infrastructure owners—DHS’s ability to effectively serve as a focal point in the collection and dissemination of sensitive vulnerability information will continue to be limited. Without a trusted focal point for sharing sensitive information on vulnerabilities, there is an increased risk that attacks on control systems could cause a significant disruption to our nation’s critical infrastructures. Control systems are an essential component of our nation’s critical infrastructure and their disruption could have a significant impact on public health and safety. Given the importance of control systems, in our report being released today, we are recommending that the Secretary of the Department of Homeland Security implement the following two actions: develop a strategy to guide efforts for securing control systems, including agencies’ responsibilities, as well as overall goals, milestones, and performance measures and establish a rapid and secure process for sharing sensitive control system vulnerability information with critical infrastructure control system stakeholders, including vendors, owners, and operators. In its comments on our report, DHS neither agreed nor disagreed with these recommendations, but stated that it would take them under advisement. The agency also discussed new initiatives to develop plans and processes that are consistent with our recommendations. In summary, past incidents involving control systems, system vulnerabilities, and growing threats from a wide variety of sources highlight the risks facing control systems. The public and private sectors have begun numerous activities to improve the cyber security of control systems. However, the federal government lacks an overall strategy for coordinating public and private sector efforts. 
DHS also lacks an efficient process for sharing sensitive information on vulnerabilities with private sector critical infrastructure owners. Until DHS completes the comprehensive strategy, the public and private sectors risk undertaking duplicative efforts. Further, without a streamlined process for advising private sector infrastructure owners of vulnerabilities, DHS is unable to fulfill its responsibility as a focal point for disseminating this information. If key vulnerability information is not in the hands of those who can mitigate its potentially severe consequences, there is an increased risk that attacks on control systems could cause a significant disruption to our nation’s critical infrastructures. Mr. Chairman, this concludes my statement. I would be happy to answer any questions that you or members of the subcommittee may have at this time. If you have any questions on matters discussed in this testimony, please contact me at (202) 512-6244, or by e-mail at [email protected]. Other key contributors to this testimony include Scott Borre, Heather A. Collins, Neil J. Doherty, Vijay D’Souza, Nancy Glover, Sairah Ijaz, Patrick Morton, and Colleen M. Phillips (Assistant Director). This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Control systems--computer-based systems that monitor and control sensitive processes--perform vital functions in many of our nation's critical infrastructures such as electric power generation, transmission, and distribution; oil and gas refining; and water treatment and distribution. The disruption of control systems could have a significant impact on public health and safety, which makes securing them a national priority. GAO was asked to testify on portions of its report on control systems security being released today. This testimony summarizes the cyber threats, vulnerabilities, and the potential impact of attacks on control systems; identifies private sector initiatives; and assesses the adequacy of public sector initiatives to strengthen the cyber security of control systems. To address these objectives, GAO met with federal and private sector officials to identify risks, initiatives, and challenges. GAO also compared agency plans to best practices for securing critical infrastructures. Critical infrastructure control systems face increasing risks due to cyber threats, system vulnerabilities, and the serious potential impact of attacks as demonstrated by reported incidents. Threats can be intentional or unintentional, targeted or nontargeted, and can come from a variety of sources. Control systems are more vulnerable to cyber attacks than in the past for several reasons, including their increased connectivity to other systems and the Internet. Further, as demonstrated by past attacks and incidents involving control systems, the impact on a critical infrastructure could be substantial. For example, in 2006, a foreign hacker was reported to have planted malicious software capable of affecting a water filtering plant's water treatment operations. Also in 2006, excessive traffic on a nuclear power plant's control system network caused two circulation pumps to fail, forcing the unit to be shut down manually. 
Multiple private sector entities such as trade associations and standards setting organizations are working to help secure control systems. Their efforts include developing standards and providing guidance to members. For example, the electricity industry has recently developed standards for cyber security of control systems and a gas trade association is developing guidance for members to use encryption to secure control systems. Federal agencies also have multiple initiatives under way to help secure critical infrastructure control systems, but more remains to be done to coordinate these efforts and to address specific shortfalls. Over the past few years, federal agencies have initiated efforts to improve the security of critical infrastructure control systems. However, there is as yet no overall strategy to coordinate the various activities across federal agencies and the private sector. Further, the Department of Homeland Security (DHS) lacks processes needed to address specific weaknesses in sharing information on control system vulnerabilities. Until public and private sector security efforts are coordinated by an overarching strategy, there is an increased risk that multiple organizations will conduct duplicative work. In addition, until information-sharing weaknesses are addressed, DHS risks not being able to effectively carry out its responsibility for sharing information on vulnerabilities with the private and public sectors.
SARS is a severe viral infection that is sometimes fatal. The disease first emerged in China in 2002 and then spread through Asia to 26 countries around the world. Although national governments are responsible for responding to infectious disease outbreaks such as SARS, WHO plays an important role in coordinating the response to the global spread of infectious diseases and assisting countries with their public health response to outbreaks. The U.S. government plays a role during international outbreaks in assisting WHO and affected countries and protecting U.S. citizens and interests at home and abroad. The virus that causes SARS is a member of a family of viruses known as coronaviruses, which are thought to cause about 10 percent to 15 percent of common colds. Within 2 to 10 days after infection with the SARS virus, an individual may begin to develop symptoms—including cough, fever, and body aches—that are difficult to distinguish from those of other respiratory illnesses. The primary mode of transmission appears to be direct or indirect contact with respiratory secretions or contaminated objects. Another feature of the disease is the occurrence of “superspreading events,” where evidence suggests that the disease is transmitted at a high rate due to a combination of patient, environmental, and other factors. According to WHO, the global case fatality rate for SARS is approximately 11 percent and may be more than 50 percent for individuals over age 65. The management of a SARS outbreak relies on the use of established public health measures for the control of infectious diseases—including case identification and contact tracing, transmission control, and exposure management, defined as follows: Case identification and contact tracing: defining what symptoms, laboratory results, and medical histories constitute a positive case in a patient and tracing and tracking individuals who may have been exposed to these patients. 
Transmission control: controlling the transmission of disease-producing microorganisms through use of proper hand hygiene and personal protective equipment, such as masks, gowns, and gloves. Exposure management: separating infected and noninfected individuals. Quarantine is a type of exposure management that refers to the separation or restriction of movement of individuals who are not yet ill but were exposed to an infectious agent and are potentially infectious. The emergence of SARS in China can be traced to reports of cases of atypical pneumonia in several cities throughout Guangdong Province in November 2002. (See fig. 1 for a timeline of the emergence of SARS cases and WHO and U.S. government actions.) Because atypical pneumonia is not unusual in this region and the cases did not appear to be connected, many of these early cases were not recognized as a new disease. However, physicians were alarmed because of the unusual number of health care workers who became severely ill after treating patients with a diagnosis of atypical pneumonia. The international outbreak began in February 2003 when an infected physician who had treated some of these patients in China traveled to Hong Kong and stayed at a local hotel. Some individuals who visited the hotel acquired the infection and subsequently traveled to Vietnam, Singapore, and Toronto and seeded secondary outbreaks. Throughout spring 2003, the disease continued to spread through Asia to 26 countries around the world, and at its peak—in early May—hundreds of new SARS cases were reported every week. (See app. I for a map of total SARS cases and deaths.) In July 2003, WHO announced that the outbreak had been contained. (See app. II for a detailed chronology of the SARS outbreak.) Although national governments bear primary responsibility for disease surveillance and response, WHO, an agency of the United Nations, plays a central role in global infectious disease control.
WHO provides support, information, and recommendations to governments and the international community during outbreaks of infectious disease that threaten global health or trade. The International Health Regulations outline WHO’s authority and member states’ obligations in preventing the global spread of infectious diseases. Adopted in 1951 and last modified in 1981, the International Health Regulations are designed to ensure maximum security against the international spread of diseases with a minimum of interference with world traffic (that is, trade and travel). The current regulations require that member states report the incidence of three diseases within their borders—cholera, plague, and yellow fever—and WHO can investigate an outbreak only after receiving the consent of the government involved. Efforts to revise the regulations began in 1995, and the revised regulations are scheduled to be ready for submission to the World Health Assembly, the governing body of WHO, in May 2005. While the International Health Regulations provide the legal framework for global infectious disease control, WHO’s Global Outbreak Alert and Response Network (GOARN), established in April 2000, is the primary mechanism by which WHO mobilizes technical resources for the investigation of, and response to, disease outbreaks of international importance. Because WHO does not have the human and financial resources to respond to all disease outbreaks, GOARN relies on the resources of its partners, including scientific and public health institutions in member states, surveillance and laboratory networks (e.g., WHO’s Global Influenza Surveillance Network), other U.N. organizations, the International Committee of the Red Cross, and international humanitarian nongovernmental organizations. WHO collects intelligence about outbreaks through various sources, including formal reports from governments and WHO officials in the field as well as informal reports from the media and the Internet. 
When WHO receives a formal request for assistance from a national government, it responds primarily through GOARN. GOARN’s key response objectives are to ensure that appropriate technical assistance rapidly reaches affected areas during an outbreak and to strengthen public health response capacity within countries for future outbreaks. Its response activities may include providing technical advice or support (e.g., public health experts and laboratory services), logistical aid (e.g., supplies and vaccines), and financial assistance (e.g., emergency funds). In addition to the support provided through GOARN, technical assistance and deployments are also arranged through WHO’s regional offices. Two departments of the U.S. government, the Department of Health and Human Services (HHS) and State, play major roles in responding to infectious disease outbreaks overseas. Within HHS, the Office of Global Health Affairs and CDC work closely with WHO and foreign governments in response efforts. CDC also works with other federal agencies, state and local health departments, and the travel industry to limit the introduction of communicable diseases into the United States. State’s roles include protecting U.S. government employees working overseas and disseminating information about situations that may pose a threat to U.S. citizens living and traveling abroad. In addition, State may coordinate the provision of technical assistance by various U.S. government agencies and use its diplomatic contacts to engage foreign governments on policy issues related to infectious disease response. In recent years, Asia has become increasingly vulnerable to emerging infectious disease outbreaks, and governments have had to deal with diseases such as avian influenza and dengue fever. 
In China, Hong Kong, and Taiwan, such infectious disease outbreaks are managed through the public health authorities of these governments:

China: The Ministry of Health maintains lead authority over health policy at the national level, although provincial governments exercise significant authority over local health matters. In January 2002, the national Center for Disease Control and Prevention was established, along with centers at the provincial and local levels, and charged with matters ranging from infectious disease control to chronic disease management.

Hong Kong: The Health, Welfare, and Food Bureau has overall policy responsibility for health care delivery and other human services in Hong Kong. Within the bureau, the Department of Health and its Disease Prevention and Control Division, which was established in July 2000, are responsible for formulating strategies and implementing measures in the surveillance, prevention, and control of communicable diseases. The Hospital Authority is responsible for the management of 43 public hospitals in Hong Kong.

Taiwan: The Department of Health is responsible for national health matters and for guiding, supervising, and coordinating local health bureaus. A division of the department, the Taiwan Center for Disease Control, was established in 1999 and consolidated the disease prevention work of several national public health agencies involved in infectious disease control.

WHO's actions to respond to the SARS outbreak were extensive, but its response was delayed by an initial lack of cooperation from officials in China and challenged by limited resources. WHO's actions included direct technical assistance to affected areas and broad international actions such as alerting the international community about this serious disease and issuing information, guidance, and recommendations to government officials, health professionals, the general public, and the media. (See fig. 1 for key WHO actions during the SARS outbreak.)
However, an initial lack of cooperation on the part of China limited WHO's access to information about the outbreak, and WHO had to stretch its resources for infectious disease control to capacity. WHO's response to SARS was coordinated jointly by WHO headquarters and its Western Pacific Regional Office (WPRO). At headquarters, WHO activated GOARN. Although GOARN had been used before to respond to isolated outbreaks of Ebola, meningitis, viral hemorrhagic fever, and cholera in African countries and elsewhere, the SARS outbreak was the first time the network was activated on such a large scale for an international outbreak of an unknown emerging infectious disease. There were two primary aspects to WHO's activities during the SARS outbreak: one was the direct deployment of public health specialists from around the world to affected Asian governments to provide technical assistance; the other was the formation of three virtual networks of laboratory specialists, clinicians, and epidemiologists who pooled their knowledge, expertise, and resources to collect and develop the information WHO needed to issue its guidance and communications about SARS. Under GOARN's auspices, WHO rapidly deployed 115 specialists from 26 institutions in 17 countries to provide direct technical assistance to SARS-affected areas. WPRO also facilitated the deployment of an additional 80 public health specialists to SARS-affected areas. Asian governments identified their needs for technical assistance—consisting primarily of more senior, experienced staff—and then WHO issued a request for staff from its partners. WHO officials at headquarters and at WPRO worked jointly to quickly process contracts and send teams into the field within 48 hours of the request. The work of the teams varied, depending on local need. For example, a team of 5 public health experts sent to China reviewed clinical and epidemiologic data to improve the detection and surveillance of SARS cases in Guangdong.
A team of 4 public health experts sent to Hong Kong included environmental engineers to help investigate the spread of SARS in a housing complex. WHO also formed several international networks of researchers and clinicians, including a laboratory network, a clinical network, and an epidemiologic network. These networks operated “virtually,” communicating through a secure Web site and teleconferences. The SARS laboratory network, based on the model of WHO’s global influenza surveillance network and using some of the same laboratories, consisted of 13 laboratories in 9 countries. Within one month of its creation, participants in this network had identified the SARS coronavirus and shortly afterward sequenced its genome. The SARS clinical network consisted of more than 50 clinicians in 14 countries. Clinicians in this network helped to develop the SARS case definition and wrote infection control guidelines. The SARS epidemiologic network, which consisted of 32 epidemiologists from 11 institutions, collected data and conducted studies on the characteristics of SARS, including its transmission and control. WHO and other public health experts noted that there was a high level of collaboration and cooperation in these scientific networks. During the SARS outbreak, WHO played a key role in alerting the world about the disease and issuing information, guidance, and recommendations to government officials, health professionals, the general public, and the media that helped raise awareness and control the outbreak. When WHO became concerned about outbreaks of atypical pneumonia in China, Hong Kong, and Vietnam, it issued a global alert on March 12, 2003, warning the world about the appearance of a severe respiratory illness of undetermined cause that was rapidly spreading among health care workers. 
Three days later, on March 15, WHO issued a second, higher-level global alert in which it identified the disease as SARS and first published a definition of suspect and probable cases. At the same time, WHO also issued its first emergency travel advisory to international travelers, calling on all travelers to be aware of the main symptoms of SARS. When it became clear to WHO on March 27 that 27 cases of SARS were linked to exposure on five airline flights, the organization recommended the screening of air passengers on flights departing from areas where there was local transmission of SARS. On April 2, WHO began issuing travel advisories—recommendations that travelers should consider postponing all but essential travel to designated areas where the risk of exposure to SARS was considered high. The first designated areas were Hong Kong and Guangdong Province, China; later, the list was expanded to include other parts of China, Toronto, and Taiwan. During the SARS outbreak, WHO also publicized a list of areas with recent local transmission of SARS. In addition to travel recommendations, WHO developed more than 20 other guidelines and recommendations for responding to SARS during the outbreak. These included advice on the detection and management of cases, laboratory diagnosis of SARS, hospital infection control, and how to handle mass gatherings of persons arriving from an area of recent local transmission of SARS. These guidelines and recommendations were disseminated through WHO's SARS Web site, which was updated regularly and received 6 million to 10 million hits per day. In issuing guidance and recommendations about SARS, WHO had to respond immediately while making the best use of limited scientific knowledge about the disease (e.g., its cause, mode of transmission, and treatment), and it had to communicate effectively to public health professionals and the general public. This situation posed challenges, and WHO's efforts came under some criticism.
For example, officials in Canada, Taiwan, and Hong Kong—areas that were directly affected by the travel recommendations—criticized WHO for not being more transparent in the process it used to issue and lift the recommendations. They also stated that the evidentiary foundation for issuing the recommendations was weak and that the process did not allow countries enough time to prepare (e.g., to develop press releases and inform the tourism industry). WHO officials and others also acknowledged that communicating effectively about the risks of transmitting SARS and recommending appropriate action were major challenges for the organization. For example, even though WHO believed that the use of face masks by the general public was ineffective in preventing SARS, the organization had a difficult time communicating this fact and educating the general public about appropriate preventive measures. In addition, WHO recommended screening of airline passengers before departure, but the recommendation was vague and allowed countries to execute it in different ways. Although WHO officials at headquarters and in the field received various informal reports of a serious outbreak of atypical pneumonia in China's Guangdong Province early in the SARS outbreak, WHO did not issue its global alerts until mid-March 2003. This delay occurred both because there was scientific uncertainty about the disease and because of an initial lack of cooperation by China, which limited WHO's access to information and its ability to assist in investigating and managing the outbreak. As detailed in appendix II, WHO first received informal reports about a serious disease outbreak in Guangdong Province in November 2002. At the time, influenza was suspected as the primary cause of this outbreak. When WHO requested further information from Chinese authorities, it was told that influenza activity in China was normal and that there were no unusual strains of the virus.
Despite WHO's repeated requests, Chinese authorities did not grant it permission to go to Guangdong Province and investigate the outbreak until April 2, 2003. WHO lacked authority under the International Health Regulations to compel China to report the SARS outbreak and to allow WHO to assist in investigating and managing it. WHO officials told us that, in general, the organization tries to play a neutral, coordinating role and relies on government cooperation to investigate problems and ensure that appropriate control measures are being implemented. Vietnam, for example, cooperated with WHO early in the outbreak, which may have contributed to a less severe outbreak in that country. In the case of China, WHO exerted some pressure, as did the U.S. government and the international media; this pressure eventually helped persuade China to become more open about the situation and to allow WHO to assist in investigating and managing the outbreak. While extensive, WHO's response to SARS in Asia was challenged by limited resources devoted to infectious disease control and, in particular, to GOARN. WHO's ability to respond in a timely and appropriate manner to outbreaks such as SARS depends on the participation of WHO's partners and on adequate financial support. During the SARS outbreak, GOARN's human resources were stretched to capacity. GOARN experienced difficulty in sustaining the response to SARS over time and getting the appropriate experts out into the field. WHO officials in China told us that they could not obtain experienced epidemiologists and hospital infection control experts and that ultimately they had to look outside the network to find assistance. GOARN was largely dependent on CDC staff to deploy to Asia to manage the epidemic response. According to a senior CDC official, if the United States had experienced many SARS cases during the global outbreak, CDC might not have been able to make as many of these staff available.
Furthermore, some GOARN partners told us that the staffing requests that they received from GOARN, WPRO, and WHO country offices were not well coordinated. This issue was raised at a GOARN Steering Committee meeting in June 2003, where it was suggested that a stronger regional capacity for coordination could help ensure that the necessary public health experts are mobilized and deployed to the field. The SARS outbreak also highlighted the limitations in GOARN's financial resources. Historically, the network has received limited financial support from WHO's core budget, which consists of assessed contributions from members. The network tries to make up for shortfalls by soliciting additional contributions from member states, foundations, and other donors. Limited funds are available to pay for headquarters staff and technical resources, such as computer mapping software, and to support management initiatives such as strategic planning and evaluation activities. While acknowledging that planning and evaluation are important both for responding to future outbreaks and for ensuring epidemic preparedness and capacity building, WHO officials told us that GOARN is usually focused on the response to an immediate emergency and thus lacks the time and resources to retrospectively review what worked well and what did not. CDC, as part of HHS, and State played major roles in responding to the SARS outbreak, but their actions revealed limits in their ability to address emerging infectious diseases. CDC worked with WHO and Asian governments to identify and respond to the disease and helped limit its spread into the United States. However, CDC encountered obstacles that made it unable to trace international travelers because of airline concerns over CDC's authority and the privacy of passenger information, as well as procedural issues. State applied diplomatic pressure to governments, helped facilitate U.S. government efforts to respond to SARS in Asia, and supported U.S.
government employees and citizens in the region. However, State encountered multiple difficulties in helping to arrange medical evacuations for U.S. citizens infected with SARS overseas. Based in part on this experience, State ultimately authorized departure of all nonessential U.S. government employees at several Asian posts. Throughout the SARS outbreak, CDC was the foremost participant in WHO’s multilateral efforts to recognize and respond to SARS in Asia, with CDC officials constituting about two-thirds of the 115 public health experts deployed to the region under the umbrella of GOARN. CDC also contributed its expertise and resources to epidemiological, laboratory, and clinical research on SARS. According to CDC, its involvement in recognizing the disease began in February 2003, when CDC officials joined WHO efforts to identify the cause of atypical pneumonia outbreaks in southern China, Vietnam, and Hong Kong. In March 2003, CDC set up an emergency operations center to coordinate sharing of information with WHO’s epidemiology, clinical, and laboratory networks (see fig. 1). Under GOARN’s auspices, CDC also assigned epidemiologists, laboratory scientists, hospital infection control specialists, and environmental engineers to provide technical assistance in Asia. For example, CDC assigned senior epidemiologists to help a WHO team investigate the outbreak in China. The team met with public health officials and health care workers in affected provinces to determine how they were responding to SARS. It also recommended steps to bring the outbreak under control, such as hospital infection control measures, quarantine strategies, and free health care for individuals with suspected SARS. In addition, because Taiwan is not a member of WHO, CDC gave direct assistance to support Taiwan’s response to SARS, serving as a link between Taiwanese health authorities and WHO and providing technical information and expertise that enabled Taiwan to control the outbreak. 
Shortly after Taiwan identified its first case of SARS imported from China in March 2003, Taiwanese authorities asked WHO for assistance. WHO officials transmitted the request to CDC and asked it to respond. Between March and July 2003, 30 CDC experts traveled to Taiwan and advised health authorities on various aspects of the SARS response. CDC epidemiologists recommended changes in Taiwan’s approach to classifying SARS cases, which was time consuming and resulted in a large backlog of cases awaiting review as the outbreak expanded. They advised Taiwanese health authorities to replace their case classification system with a two-tiered approach that would categorize patients with SARS-like symptoms as either “suspect” or “probable” SARS. This strategy enabled public health authorities to institute precautionary control measures, such as isolation, for suspected SARS patients, and according to senior CDC and Taiwanese officials, it helped reduce transmission, including within medical facilities, and stop the outbreak. When WHO issued its global SARS alert on March 12, 2003, CDC officials attempted to limit the disease’s spread into the United States by (1) providing information for people traveling to or from SARS-affected areas and (2) ensuring that travelers arriving at U.S. borders with SARS-like symptoms received proper medical treatment. Beginning in mid-March 2003, CDC posted regular SARS updates on its Web site for people traveling to SARS-affected countries. At the same time, CDC’s Division of Global Migration and Quarantine deployed quarantine officers to U.S. airports, seaports, and land crossings where travelers entered the United States from SARS-affected areas. The officers distributed health alert notices to all arriving travelers and crew (see fig. 2). 
The notices, printed in eight languages and describing SARS symptoms, incubation period, and what to do if symptoms developed, also contained a message to physicians to contact a public health officer or CDC if they treated a patient who might have SARS. CDC staff distributed close to 3 million health alert notices over a 3-month period. Department of Homeland Security staff assisted CDC by passing out the notices at land crossings between the United States and Canada. CDC’s quarantine officers also responded to dozens of reports of passengers with SARS-like symptoms on airplanes and ships arriving in the United States from overseas. The officers boarded the airplane or ship, assessed the ill individuals to determine if they might have SARS and, if necessary, arranged the individuals’ transport to a medical facility. CDC officials wanted to advise passengers who had traveled on an airplane or ship with a suspected SARS case to monitor themselves for SARS symptoms during the virus’s 10-day incubation period, but due to airline concerns over authority and privacy, as well as procedural constraints, CDC was unable to obtain the passenger contact information it needed to trace travelers. Although HHS has statutory authority to prevent the introduction, transmission, or spread of communicable diseases from foreign countries into the United States, HHS regulations implementing the statute do not specifically provide for HHS to obtain passenger manifests or other passenger contact information from airlines and shipping companies for disease outbreak control purposes. CDC officials told us that some airlines failed to provide necessary contact information to CDC, which may be attributable to the lack of specific regulations in this area. Moreover, CDC officials said that in response to their requests, some airlines refused to give CDC passenger contact information from frequent flier databases or credit card receipts because of privacy concerns. 
Even when CDC was able to obtain passenger information, CDC staff responsible for contacting travelers found passenger data untimely (because some airlines provided it after SARS's 10-day incubation period), insufficient (because some airlines could provide only passenger names but no contact information), or difficult to use (because it was available on paper rather than electronically). According to senior CDC officials, the inability to trace travelers who might have been exposed to SARS could have hampered their ability to limit the disease's spread into the United States. The obstacles to tracing travelers remain unresolved, and senior CDC officials are concerned they will encounter difficulties in limiting the spread of infectious diseases into the United States during future global infectious disease outbreaks. CDC officials told us they are exploring several options to overcome the problems they encountered during the SARS outbreak. CDC may adopt one or more of these options, including:

clarifying CDC's authority by promulgating regulations specifically to obtain passenger contact information;

coordinating with the Department of Homeland Security and other federal agencies for this purpose;

developing a memorandum of understanding with airlines on sharing passenger information; and

creating a system for obtaining passenger contact information in an electronic format.

However, CDC officials said they have already faced obstacles in pursuing some of these options. For example, both CDC and Department of Homeland Security officials told us that Homeland Security's computer-based passenger information system could not be used for purposes other than national security. State also played an important role in the U.S. response to SARS, primarily by applying diplomatic pressure, helping facilitate government efforts overseas, and disseminating information. In March 2003, the U.S.
Ambassador to China communicated with Chinese government officials to encourage China to be more transparent in reporting SARS cases and to grant WHO and CDC officials access to southern China. State also established two working groups to facilitate the U.S. government response to SARS in Asia. The first working group, comprising various State offices and bureaus, issued daily reports on the status of the outbreak to U.S. embassies and consulates. The second working group, established in May 2003, convened various U.S. government agencies, including State, HHS, and the Departments of Defense and Homeland Security, to address policy and response issues. U.S. government officials agreed that State’s efforts helped provide valuable information during an uncertain period and allowed for a unified response to the outbreak. U.S. embassies and consulates in Asia also disseminated information to U.S. government employees and U.S. citizens living and traveling abroad. For example, they publicized CDC updates on SARS through e-mail alerts and on their Web sites and informed U.S. citizens about medical care available in-country. During the outbreak, even the strongest local health care systems were overwhelmed, and State was concerned that U.S. government employees might receive treatment that did not meet U.S. standards. For example, in Hong Kong and China, U.S. consular staff told us they were concerned about sending U.S. government employees to local hospitals because of inadequate infection control practices, limited availability of health care workers with English language skills, and controversial treatment protocols such as administering steroids to SARS patients. In a few cases, State worked with private medical evacuation companies to help arrange medical evacuations for U.S. citizens with suspected SARS. 
However, early in the outbreak, CDC had not yet developed guidelines to prevent transmission during flight, and medical evacuation companies could not obtain aircraft and crew willing to transport SARS patients because of the perceived health risks. Even after CDC developed guidelines, medical evacuation companies still had difficulty finding aircraft because only about 5 percent of existing air ambulances could comply with the stringent guidelines, according to a private air medical evacuation official. Furthermore, a U.S. state and some medical facilities in the United States refused to accept SARS patients brought from Asia. For example, the state of Hawaii initially said it would accept medically evacuated SARS patients but later reneged and prevented one air ambulance company from bringing a U.S. citizen with suspected SARS to a medical facility in Honolulu. Although the Department of Defense (Defense) performed one medical evacuation for a U.S. civilian under special circumstances, officials at State and Defense told us that military priorities and scarce resources are likely to prevent Defense from performing civilian evacuations in the future. Ultimately, State concluded that inadequate local health care and difficulties arranging medical evacuations put U.S. government employees at risk, and, in turn, State authorized departure for nonessential employees and their dependents at several posts. State has not developed a strategy to overcome the challenges that staff encountered in arranging international medical evacuations during the SARS outbreak, but it is working with other U.S. government agencies to develop guidance on this issue. Officials at State, CDC, Defense, and medical evacuation companies told us that the same obstacles could resurface during a new outbreak of SARS or another unknown infectious disease with airborne transmission. 
State officials said the medical evacuation companies that provide State’s medical evacuation services have agreed to evacuate SARS patients, and the companies with whom we spoke confirmed that since the SARS outbreak, they have identified sufficient aircraft and crew to transport a limited number of patients. The exact number would depend on the nature of the disease, the patient’s condition, and the type of medical care required. State officials said they have not investigated how many SARS patients private medical evacuation companies or Defense could transport; they also do not know which U.S. states and medical facilities would accept patients with SARS or another emerging infectious disease. State officials are concerned about a scenario in which dozens of staff at a U.S. embassy or consulate contract SARS or another infectious disease, in which case medical evacuation would probably not be feasible given the current constraints. This would also pose a problem if many U.S. citizens living or traveling overseas contracted such a disease. Private medical evacuation companies acknowledged that they might not be able to transport large numbers of patients; furthermore, they are unsure which destinations in the United States would accept patients with an infectious disease such as SARS. State officials said they are working with other U.S. government agencies to develop guidelines for consular staff to arrange international medical evacuations. However, it is not clear that this guidance will resolve some of the obstacles encountered during the SARS outbreak. For example, a CDC official said the agency is working with medical facilities near international ports of entry to identify treatment destinations for medically evacuated patients with quarantinable infectious diseases such as SARS, but no agreements have been reached yet. The Asian governments we studied initially struggled to respond to SARS but ultimately brought the outbreak under control. 
As acknowledged by Asian government officials, poor communication within China, as well as between China and Hong Kong, Taiwan, and WHO, obscured the severity of the outbreak during its initial stages. As the extent of the outbreak was recognized, the large-scale response to SARS in China, Hong Kong, and Taiwan was hindered by an initial lack of leadership and coordination. Further, weaknesses in disease surveillance systems, public health capacity, and hospital infection control limited the ability of Asian governments to track the number of cases of SARS and implement an effective response. Improved screening, rapid isolation of suspected cases, enhanced hospital infection control, and quarantine of close contacts ultimately helped end the outbreak. In the aftermath of SARS, efforts are under way to improve public health capacity in Asia to better deal with SARS and other infectious disease outbreaks. The Chinese government's poor communication within the country, with Hong Kong and Taiwan, and with WHO limited the flow of information about the severity of the SARS outbreak in its initial stages. For example, the Ministry of Health did not widely circulate a report concerning the spread of atypical pneumonia (later determined to be SARS) in Guangdong Province. The report was produced by health officials in Guangdong Province on January 23, 2003—more than 2 weeks before the Ministry of Health's first official public announcement on the outbreak. The report warned all hospitals in the province about the disease and provided advice to control its spread. Officials in Hong Kong, which directly borders the province, were not aware of the report, and a senior official in Taiwan, which maintains significant travel and commercial ties with Guangdong Province, said Taiwan did not receive the report or any official communication about the outbreak. In addition, WHO did not receive this information.
Officials in Guangdong Province told us they could not share this information outside of China because this is the responsibility of the Ministry of Health. Further, according to Chinese regulations on state secrets, information on widespread epidemics is considered highly classified. Chinese scientists also did not effectively communicate their findings about the cause of SARS early in the outbreak because of government restrictions. For example, as reported in a scientific journal and later confirmed in our own fieldwork, Chinese military researchers successfully identified the coronavirus as a potential cause of SARS in early March 2003, several weeks before a network of WHO researchers proved it was the cause of SARS. One Chinese scientist directly involved in the effort told us that these researchers were instructed to defer to scientists at the Chinese Center for Disease Control and Prevention, who announced erroneously that Chlamydia pneumoniae, a type of bacteria, was responsible for the atypical pneumonia outbreak. In addition, we were told that these researchers were not permitted to communicate their findings on the coronavirus directly to WHO officials because only the Ministry of Health could communicate directly with WHO. Communication problems persisted as late as April 2003, 5 months after the first cases occurred. On April 3, the Minister of Health announced that the outbreak was under effective control and that only 12 cases of SARS had been reported in Beijing. However, a physician working at a military hospital in Beijing wrote a letter to an Asian news magazine claiming that there were significantly more SARS cases in military hospitals and that hospital officials were told not to disclose information about SARS to the public. 
On April 15, in response to rumors of underreporting, WHO officials leading an investigation into the outbreak were granted permission to visit military hospitals but stated that they were not authorized to report their findings. By April 20, the Ministry of Health announced the existence of 339 previously undisclosed cases of SARS in Beijing. As acknowledged by government officials, a lack of effective leadership and coordination within the governments of China, Hong Kong, and Taiwan early in the outbreak hindered attempts to organize an effective response to SARS. In China, provincial and local authorities maintained significant responsibility and autonomy in conducting epidemiological investigations of SARS but failed to coordinate with one another and national authorities early in the outbreak. However, as SARS spread into Beijing, the highest political leaders of the Chinese Communist Party, citing an increased number of cases and the impact on travel and trade, advised officials to be more forthcoming about SARS cases. The Ministry of Health also acknowledged the ministry’s failure to introduce a unified mechanism for collecting information about the outbreak and setting guidance and requirements across the country. Soon after those announcements, the Minister of Health and Mayor of Beijing were dismissed from their posts for downplaying the extent of the outbreak, and the public health response was brought under stronger central control. A vice premier of the central government assumed control of the Ministry of Health and convened ministerial level officers to take the lead in the nationwide SARS control effort. In Hong Kong, an expert committee convened after the outbreak to investigate the government’s response questioned the leadership and coordination of the public health system. 
For example, the committee found that responsibility for managing infectious disease outbreaks was spread throughout different departments within the Health, Welfare, and Food Bureau, with no single authority designated as the central decision-making body during outbreaks. The committee also stated that poor coordination between the hospital and public health system further complicated the response. For example, the Hospital Authority responded to an outbreak within a hospital without informing the Department of Health, which learned of the outbreak through media reports. Further, the Hospital Authority and Department of Health used separate databases during the initial stages of the outbreak and could not communicate information on new cases in real time. In Taiwan, a report by WHO stated that the initial response to SARS was managed by senior political figures who sometimes did not heed the advice of technical experts. Furthermore, WHO noted that the failure to follow the advice of public health experts delayed the decision-making process and slowed the response to the outbreak in Taiwan. Taiwanese government officials noted that the leadership of the public health system was weak during the outbreak. In addition, the process officials used to classify SARS cases was too slow to permit timely isolation of suspected or probable cases. As the outbreak worsened and spread into hospitals throughout Taiwan, the Minister of Health and the director of the Taiwan Center for Disease Control resigned over criticisms about failing to control the spread of SARS. As Asian governments monitored the spread of SARS, weaknesses in disease surveillance systems, public health capacity, and hospital infection control caused delays and gaps in disease reporting, which further constrained the response. In China, health officials at the provincial level and WHO advisers working in the country noted that data gathering systems established in the epicenter of the outbreak in Guangdong Province were strong. 
However, Chinese officials also found that the effectiveness of a national disease surveillance system established in 1998 was limited. For example, disease prevention staff below the county level did not have access to computer terminals to report the number of SARS cases and had to relay disease reports to central authorities by fax or mail. In addition, the computer-based system did not permit the reporting of suspect cases that were not yet confirmed. Further, protocols for reporting were time consuming, since information was sent through multiple levels of the public health system. For example, during the outbreak, reports from doctors of suspect SARS cases could take up to 7 days to reach local public health authorities. In Beijing, an executive vice minister stated that the large number of undetected SARS cases occurred because authorities could not collect information on cases that were spread across 70 hospitals in the city. In Taiwan, duplicative reporting between municipal and federal levels led to unclear data on the total number of cases throughout the island. A WHO official reported that the surveillance data were entered into formats that were difficult to analyze and could not inform the public health response. In Hong Kong, a quickly established atypical pneumonia surveillance system detected early cases of severe pneumonia admitted into hospitals. However, the expert committee reviewing the response noted that the limited access to data from private sector health care providers and a lack of comprehensive laboratory surveillance made it difficult for public health authorities to gain accurate information about the full extent of the outbreak and implement necessary control measures. In China, officials told us that a lack of funding and a reliance on market forces to finance public health services have weakened the country’s ability to respond to outbreaks. 
For example, the newly established Center for Disease Control and Prevention system in China derives more than 50 percent of its revenue from user fees for immunizations and other services. WHO noted that the dependence on user fees has drawn attention and resources away from nonrevenue producing activities, such as disease surveillance, that are important for responding to infectious disease outbreaks. Furthermore, China did not have enough public health workers skilled in investigating diseases, and thus staff who had never been involved in disease investigations were used to trace SARS contacts and did not always collect the correct data on these cases. In Hong Kong, the expert committee noted that there was a shortage of expertise in field epidemiology and inadequate support for information systems. In addition, the committee found disproportionate funding of public health services compared with the public hospital system, which receives 10 times more government funds. Taiwanese officials cited problems in public health infrastructure, including the lack of equipment to deal with infectious patients in hospitals and underfunded laboratories. Another major weakness in public health capacity cited by health officials in China, Hong Kong, and Taiwan was a lack of expertise in hospital infection control. In many SARS-affected areas, transmission of SARS to health care workers and other hospital patients was a significant factor sustaining the outbreak. In some instances, hundreds of hospital-acquired infections were due to inadequate isolation of individual patients and limited availability and use of personal protective equipment (masks, gowns, and gloves) for hospital workers. 
For example, in Taiwan, health officials reported that after initial success in rapidly identifying and isolating cases arriving from other SARS-affected areas, hospitals failed to recognize SARS cases occurring within Taiwan, resulting in a secondary, and much larger, outbreak in hospitals throughout the island. WHO, U.S. CDC, and Taiwanese officials told us that the number of physicians trained in infection control practices was inadequate and that infection control was not a priority for hospital management. In Hong Kong, the expert committee noted that there was no clear leadership from infection control doctors and that there were insufficient numbers of nurses trained in hospital infection control. In China, WHO officials noted in field reports that infection control procedures were rudimentary and relied on a range of measures, including disinfection of health care facilities, instead of the recommended isolation measures needed to limit spread to patients and health care workers. The SARS outbreak was ultimately brought under control by a more coordinated response that included the implementation of basic public health strategies. Measures such as improved screening and reporting of cases, rapid isolation of SARS patients, enhanced hospital infection control practices, and quarantine of close contacts were the most effective ways to break the chain of person-to-person transmission. Screening of patients with symptoms of SARS permitted the early identification of suspect cases during the early phase of illness. Furthermore, because SARS is transmitted when individuals have symptoms of the disease, detecting symptomatic patients was considered critical to stopping its spread. For example, in Beijing, fever clinics were established to screen people with fevers before presentation to hospitals or other health care providers to limit exposure to SARS. Between May 7 and June 9, 2003, there were 65,321 fever clinic visits. 
Through this effort, 47 probable SARS cases were identified, representing only 0.1 percent of all fever clinic visits but 84 percent of all probable cases hospitalized during that period. In addition, policies were implemented requiring daily reports from all areas regardless of whether any SARS cases were found. In Hong Kong, designated medical centers were established to conduct medical monitoring of close contacts of SARS patients to ensure early detection of secondary cases. In Taiwan, hospital staff and other individuals who had contact with SARS patients in hospitals were monitored on a daily basis to detect SARS symptoms. The identification of patients with suspect and probable cases of SARS and their close contacts reduced the rate of contact between SARS patients and healthy individuals in both community and hospital settings. For example, toward the end of the outbreak, one Chinese province decreased the average time from onset of SARS symptoms to hospitalization from 4 days to 1, and the time to trace contacts of these patients from 1 day to less than half a day. These declines in the time for hospitalization and contact tracing generally coincided with a decrease in the number of new cases. In Hong Kong, officials facilitated tracing by linking a SARS database used by public health officials with police databases to track and verify the addresses of relatives and other close contacts of SARS patients. To limit the spread of SARS in the hospital system, specific hospitals were designated to treat suspected SARS patients in all SARS-affected areas. Another strategy in SARS-affected areas was the cancellation of school, large public gatherings, and holiday activities. For example, in China the weeklong May Day celebration was shortened. The widespread use of personal protective equipment helped contain the spread of SARS in hospitals. 
For example, in China, when hospital infection control measures were instituted toward the end of the outbreak in a 1,000-bed hospital constructed exclusively for SARS patients, there were no further cases of SARS transmission in health care workers. Similarly, in Hong Kong and Taiwan, these measures led to a decline in the number of infections in health care workers. In addition, in all these affected areas, guidelines were ultimately established for the use of personal protective equipment in outbreak situations. China, Taiwan, and Hong Kong implemented quarantine measures to isolate potentially infected individuals from the larger community, which, when restricted to close contacts of SARS patients, proved to be an efficient and effective public health strategy. In Hong Kong, for example, close contacts of SARS patients and people in high-risk areas were isolated for 10 days in designated medical centers or at home to ensure early detection of secondary cases. However, more wide-scale quarantine took place in Taiwan, where 131,000 individuals who had any form of contact with a SARS patient or traveled to SARS-affected areas were placed under quarantine, and in Beijing, where more than 30,000 people were quarantined. Analysis of data from these areas indicated that the quarantine of individuals with no close contact to SARS patients was not an effective use of resources. For example, among the 133 probable and suspect cases identified in Taiwan, most were found to have had direct contact with a SARS patient. Similarly, researchers found that in Beijing, limiting quarantine to close contacts of actively ill patients would have been a more efficient strategy and a better use of resources. Following the SARS epidemic, Asian governments have attempted to improve public health capacity, revise their legal frameworks for infectious disease control, increase regional communication and cooperation, and utilize international aid to improve preparedness. 
During our fieldwork, we met with public health representatives at various levels—from senior health ministry officials to local hospital health care workers—who provided information on efforts to improve public health capacity. For example, after the SARS outbreak the Chinese government provided additional budgetary support and expanded authority to improve coordination and communication. The government also devised a plan to build capacity in its weak rural health care system. In Hong Kong, the government focused its efforts on early detection and response to infectious disease outbreaks and is developing a Center for Health Protection focused on infectious disease control. Several drills were conducted to test the system, and the government has identified protecting populations in senior citizen homes, schools, and hospitals as a priority. In Taiwan, the government responded to public health management shortcomings by establishing a new public health command structure with centralized authority and decision-making power and making numerous changes in health leadership positions. The government invested public funds to upgrade its health infrastructure—for example, to construct fever wards, isolation rooms with negative pressure relative to the surrounding area, and other improvements in hospitals. The SARS outbreak also led to legal reforms specific to SARS control and the function of public health systems in SARS-affected areas. For example, China, Hong Kong, and Taiwan passed legislation or regulations during the outbreak that required clinicians and public health authorities to report cases of SARS. In China, regulations on the prevention of SARS were passed that, among other things, were intended to improve communication with the public and outline administrative or criminal penalties for officials who do not report SARS cases. 
A broader set of regulations that may have a long-term impact was also passed that requires the creation of a unified command during public health emergencies, reporting of such emergencies within 2 hours, and improved public health capacity at all levels of the government. In Hong Kong, the law was revised to enhance the power of public health authorities to isolate cases and control the spread of SARS through international travel. Senior government officials have taken steps to improve public health communication and coordination in the region. Health officials in Hong Kong and Taiwan stated it is critical that information on disease outbreaks in mainland China be quickly reported so that neighboring governments can take preventive actions. A post-SARS agreement among Guangdong Province, Hong Kong, and Macau has thus far led to monthly sharing of information on a list of 30 diseases. A senior Chinese health official stated that the SARS outbreak taught the Chinese government the need for international cooperation in fighting infectious disease outbreaks. According to WHO officials, since the 2002-2003 SARS outbreak, they have experienced increased transparency and willingness on the part of the Chinese government to work with WHO health experts. The international community and the United States have committed financial and human resources to support the recent financial investments in public health capacity made by the Chinese government. For example, in July 2003 the World Bank announced a multidonor-supported program to strengthen disease surveillance and reporting and improve the skills of clinicians in China. The program is funded by US$11.5 million in loans from the World Bank, a 3 million British pound grant from the United Kingdom’s Department for International Development, a Can$5 million grant from the Canadian International Development Agency, and a US$2 million regional grant from the Japan Social Development Fund. 
HHS is in the process of finalizing a multiyear, multimillion-dollar program of cooperation between HHS and the Chinese Ministry of Health aimed at strengthening China’s public health management, epidemiology, and laboratory capacity. As part of the initiative, CDC staff members will be stationed in China to help strengthen the epidemiology workforce. During the SARS outbreak, consumer confidence temporarily declined as a result of consumer fears about SARS and precautions taken to avoid contracting the disease. This decline in consumer confidence in turn led to economic losses in Asian economies estimated in the billions of dollars. Service sectors were hit the hardest due to declines in travel and tourism to areas with SARS outbreaks and declines in retail sales involving face-to-face exchanges. Additionally, to counter economic losses associated with SARS, many Asian governments implemented costly economic stimulus programs. While the number of cases and associated medical costs for the SARS outbreak were relatively low compared with those for other major historical epidemics, the economic costs of SARS were significant because they derived primarily from fears about the disease and precautions to avoid the disease, rather than the disease itself. As shown in table 1, one industry estimate and one official estimate of the economic cost of SARS in Asia put the net loss in total output at roughly $11 billion and $18 billion, respectively. (These estimates reflect changes in growth forecasts that were calculated concurrent with the outbreak. See app. III for a discussion of methodologies and varied assumptions used to obtain these estimates.) For example, the Far Eastern Economic Review estimates SARS’s economic costs in Asia at around $11 billion, with the largest losses in China, Hong Kong, and Singapore. The Asian Development Bank also shows the largest losses in these three economies, although it estimates the total cost at around $18 billion. 
As the Asian Development Bank reported, using its cost estimate, the cost per person infected with SARS was roughly $2 million. While economic costs associated with a general loss in consumer confidence are difficult to quantify exactly, they illustrate how emerging diseases and fears associated with those diseases can have widespread ramifications for a large number of economies. The economic cost of SARS in terms of a percentage loss in each selected Asian economy’s GDP has also been estimated by the Asian Development Bank and industry organizations at roughly 0.5 percent to 2 percent, with some variation among economies depending upon the importance of affected sectors in total output (see app. III for a more detailed discussion of these models’ assumptions and their GDP loss estimates per country). Figure 3 shows quarterly GDP growth for four Asian economies most affected by SARS—China, Hong Kong, Singapore, and Taiwan—and illustrates that GDP weakened in the second quarter of 2003, concurrent with the height of the SARS outbreak. However, given that the outbreak was brought under control by July 2003, the economic impacts were concentrated primarily in this second quarter. In fact, when WHO declared that the SARS outbreak was over in July 2003, pent-up demand during the outbreak likely contributed to an economic rebound in the third and fourth quarters. The SARS outbreak produced negative impacts on Asian economies through a variety of mechanisms. The most important channel through which SARS affected these economies was by temporarily dampening consumer confidence, particularly in the travel and tourism industry. In addition, decreased consumer confidence likely reduced retail sales and, to a lesser extent, some foreign trade and investment. Due to reduced demand, employment in affected economies fell. 
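The Asian Development Bank’s per-case figure cited above can be reproduced with back-of-the-envelope arithmetic. Note that the worldwide case count used below (roughly 8,100 probable SARS cases reported to WHO) is an outside figure assumed here for illustration, not a number drawn from this report:

```python
# Back-of-the-envelope check of the Asian Development Bank's per-case cost.
# The $18 billion regional cost estimate is from the report; the case count
# (~8,100 probable SARS cases reported worldwide to WHO) is an assumption
# supplied for this illustration.
total_cost = 18e9   # ADB estimate of SARS's economic cost in Asia, in dollars
cases = 8_100       # assumed worldwide probable case count

cost_per_case = total_cost / cases
print(f"${cost_per_case / 1e6:.1f} million per case")  # about $2 million
```

Even with a somewhat smaller, Asia-only case count, the result stays in the same rough range, consistent with the bank’s characterization of roughly $2 million per person infected.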
Some businesses also reported an increase in costs as business operations were disrupted, international shipments of goods and trade were hampered, and disease prevention costs rose. The most severe economic impacts from SARS occurred in the travel and tourism industry, with airlines being particularly hard hit. As shown in figure 4, declines in regional airline traffic reached 40 percent to 50 percent in April and May, two months in which WHO travel advisories for Asia Pacific were in effect. The estimated percentage decline in overall tourism earnings amounted to 15 percent in Vietnam, 25 percent in China, and more than 40 percent in Hong Kong and Singapore, according to the World Travel and Tourism Council. Estimated job losses resulting from these SARS-related impacts were also significant. For example, the World Travel and Tourism Council estimated tourism sector job losses of around 27,000 in Hong Kong and 18,000 in Singapore, while the World Bank estimated airline job losses in the region at around 36,000. Dampened consumer confidence from SARS also had negative impacts on retail sales and foreign trade and investment, according to anecdotal evidence. The retail sector was negatively affected by the SARS outbreak as consumers curbed shopping trips and visits to restaurants in fear of contracting SARS. For example, China shortened the weeklong May Day celebration that it introduced in 1999 to stimulate private consumption. As shown in figure 5, retail sales fell concurrent with the SARS outbreak in China, Hong Kong, Singapore, and Taiwan, a decline particularly important for Hong Kong and Taiwan due to their large retail sectors. However, the rebound in consumer confidence is also illustrated by an increase in retail sales in the third quarter of 2003. Regarding foreign trade and investment, trends in these variables indicate less distinct SARS-related declines. 
Nonetheless, there is some indication of the impact of SARS on these activities, such as the reduced sales at the major Guangzhou Trade Fair in China, which totaled only 26 percent of the previous year’s amount, or the lagged effect of a decrease in foreign direct investment into China in July 2003. In response to SARS, governments in Asia implemented economic stimulus packages that also cost billions of dollars. Asian governments provided spending for medical and public health sectors to prevent and control the spread of SARS as well as for fiscal policy programs to more generally stimulate the economy. As shown in table 2, the Asian Development Bank estimates that the cost of these stimulus packages in the region could total nearly $9 billion. While many of the spending and tax measures are designed to improve GDP growth, they can also be considered an economic cost of SARS due to the diversion of government expenditures away from investments in needed public services. The SARS epidemic elevated the importance of the International Health Regulations’ revision process. The proposed revisions, currently in draft form and scheduled for completion in May 2005, would expand the regulations’ coverage and encourage better cooperation between member states and WHO. Member states will have to resolve at least five important issues, regarding (1) scope of coverage, (2) WHO’s authority to conduct investigations in countries absent their consent, (3) the public health capacity of developing country members, (4) an enforcement mechanism to resolve compliance issues, and (5) how to ensure public health security without unnecessary interference with travel and trade. The draft regulations expand the scope of reporting beyond the current three diseases to include all events potentially constituting a public health emergency of international concern, such as SARS. They also promote enhanced member state cooperation with WHO and other countries. 
Additional changes under consideration include (1) designating national focal points with WHO for notification of public health emergencies and (2) requiring minimum core surveillance and response capacities at the national level to implement a global health security strategy. The overall goal of the revision process is to create a framework under which WHO and others can actively assist states in responding to international public health risks by directly linking the revised regulations to the work of GOARN. Nevertheless, the draft regulations contain several provisions that have been the subject of ongoing debate, including: Scope of coverage. As part of the revision process, WHO has developed criteria to determine whether an outbreak is serious, unexpected, and likely to spread internationally. Furthermore, the draft regulations broaden the definition of a reportable disease to include significant illness caused by biological, chemical, or radionuclear sources. In its initial comments to WHO on the draft regulations, the U.S. government supported the use of criteria for determining what would be a public health emergency of international concern. Nevertheless, the U.S. government strongly believed that the draft should also require reporting of a defined list of certain known, serious, communicable diseases that have the potential for creating such a concern. Authority to conduct investigations. Member states are considering the appropriate level of authority for the regulations. Specifically, an unresolved issue is the degree to which the regulations will require binding international commitments or more voluntary standards. To address this issue, member states are examining whether the benefits that would result from agreeing to more rigorous, comprehensive, and mandatory regulations would outweigh losses in sovereignty. 
For example, the draft regulations eliminate the language in the current regulations that specifically requires WHO to first obtain consent from the member state involved before conducting on-the-spot investigations of disease outbreaks. However, the draft regulations are still somewhat ambiguous about whether consent is necessary. According to a senior WHO official, the proposed regulations were intentionally left vague about consent because it is a subject that members will want to debate thoroughly. Public health capacity of developing countries. The draft regulations provide member states with direction regarding the minimum core surveillance and response capacities required at the national level, including at airports, ports, and other points of entry. However, U.S. and WHO officials note that many developing countries currently lack even the most rudimentary public health capacity and will be dependent on significant international assistance to reach minimum surveillance and response capabilities. HHS officials have expressed caution about developing more comprehensive and demanding requirements that will be difficult for many countries with limited resources to implement. WHO officials acknowledge that, while WHO is able to provide technical assistance through GOARN, multilateral institutions, such as the World Bank, and donor countries will have to provide significant resources for developing countries to meet minimum surveillance and response requirements. A WHO official also indicated that while the proposed revisions to the regulations do not have specific provisions on technical assistance, developing countries are likely to raise the issue of adding such a provision during the revision process. Enforcement mechanism. The members will have to address what kind of enforcement mechanism they want included in the regulations to resolve compliance issues and to deal with violations of the regulations. 
According to WHO officials, failure to comply with WHO public health requirements is often a problem. The draft regulations, like the current regulations, include a nonbinding mechanism for resolving disputes. Thus, the WHO Director-General is directed either to (1) make every effort to resolve disputes or (2) refer disputes to a WHO Review Committee, which is tasked to forward its views and advice to the parties involved. Although WHO would continue to be dependent on the voluntary compliance of member states, WHO officials believe that if key countries (such as the United States) and neighboring trade partners are sufficiently concerned about the dangers of emerging diseases to press for compliance with the revised regulations, other countries are likely to fulfill their obligations. Furthermore, though it is too early to predict how China’s response to SARS in 2003 will affect future compliance, WHO officials say the negative political, economic, and public health effects China suffered from its initial response to SARS served as a warning to countries that ignore their international public health responsibilities. International traffic. The stated purpose of the draft regulations, which is similar to the current regulations, is to provide security against the international spread of disease while avoiding unnecessary interference with international traffic. Although the term international traffic appears to refer to international travel and trade, neither the proposed nor the current regulations define the term. Furthermore, the draft regulations do not include detailed criteria for determining what constitutes interference with international trade and travel. A WHO official indicated that it was preferable not to include detailed criteria and to allow this issue to be decided on a case-by-case basis because of the very broad range of situations that could ultimately cause such interference. 
This issue could receive a good deal of attention in the revision process as member states try to balance medical and economic concerns. According to WHO officials, in past epidemics, concerns about economic loss and restrictions on trade and travel caused some countries not to report outbreaks within their borders and to refuse international assistance. Furthermore, for certain outbreaks—for example, those involving cholera in Peru in 1991 and plague in India in 1994—some experts reported that the international response may have exceeded the level of threat and led to unwarranted trade and travel losses in those countries. The process for revising the International Health Regulations was intensified by a WHO World Health Assembly resolution passed in May 2003, during the SARS outbreak, urging members to give high priority to the revision process and to provide the resources and cooperation to facilitate this work. The resolution also requested that the WHO Director-General consider informal sources of information to respond to outbreaks such as SARS; collaborate with national authorities in assessing the severity of infectious disease threats and the adequacy of control measures; and, when necessary, send a WHO team to conduct on-the-spot studies in places experiencing infectious disease outbreaks. Although the resolution did not impose legally binding obligations on members, according to WHO officials and some observers it did lay the political groundwork for improved international cooperation on infectious disease control. In January 2004, WHO distributed to its member states an interim draft of the revisions proposed by the WHO Secretariat. Composed of 55 articles and 10 technical annexes, the draft will be discussed in a series of regional consultations throughout 2004. The degree of consensus on the draft’s technical and political issues will then determine the need for subsequent meetings at the global level. 
The goal is to convene an intergovernmental working group at the end of 2004 to finalize revisions to the draft regulations. It is hoped the regulations will then be ready for submission to the 58th World Health Assembly in May 2005. However, according to WHO and HHS officials, reaching both technical and political consensus on the regulations will be a difficult task, and they expect the revision process to extend beyond its target date. While the 2002-2003 SARS outbreak had an impact on health and commerce in Asia, the extensive response by WHO and Asian governments, supported in large measure by the U.S. government, was ultimately effective in controlling the outbreak. This event highlighted a number of important issues, including the limited resources to support WHO’s global infectious disease network and deficiencies in Asian governments’ public health systems. It also revealed limitations in the International Health Regulations. In the aftermath of SARS, WHO and member states have recognized the importance of strengthening international collaboration and cooperation to respond to global infectious disease outbreaks. To be successful, this effort will require a greater commitment of resources for global infectious disease control and a concerted effort to revise the International Health Regulations to make them more relevant and useful in future outbreaks. As the regulations are revised, WHO and member states face the challenge of improving the management of disease outbreaks while mitigating adverse economic impacts. The content, manner of acceptance, and means of enacting the final revisions are not certain, and much work remains to be done to resolve outstanding issues. As of April 2004, SARS has not re-emerged to cause another major international outbreak, but outbreaks of other infectious diseases can be expected in the future. Therefore, strengthening public health capacity will be essential for responding to future infectious disease outbreaks. 
The SARS outbreak also revealed gaps in U.S. government protective measures, including difficulties in arranging medical evacuations from overseas and the inability to trace and contact individuals exposed to SARS during travel. In regard to tracing international travelers who may have been exposed to an infectious disease, we believe that amending HHS regulations to specify that the agency has authority to obtain this information would assist this effort. This action would facilitate HHS's ability to obtain necessary contact information (1) from airlines or shipping companies that may have concerns about sharing passenger information with HHS, or (2) in the event that issues involving coordination with other federal agencies cannot be effectively resolved.

This report makes three recommendations to improve the response to infectious disease outbreaks. First, to strengthen the international response, we recommend that the Secretary of Health and Human Services, in collaboration with the Secretary of State, work with WHO and official representatives from other WHO member states to strengthen WHO's global infectious disease network capacity to respond to disease outbreaks, for example, by expanding the available pool of public health experts. Second, to help Health and Human Services prevent the introduction, transmission, or spread of infectious diseases into the United States, we recommend that the Secretary of HHS complete the necessary steps to ensure that the agency can obtain passenger contact information in a timely and comprehensive manner, including, if necessary, the promulgation of regulations specifically for this purpose. Third, to protect U.S. government employees and their families working overseas and to better support other U.S. 
citizens living or traveling overseas, we recommend that the Secretary of State continue to work with the Secretaries of Health and Human Services and Defense to identify public and private sector resources for medical evacuations during infectious disease outbreaks and develop procedures for arranging these evacuations. Such efforts could include working with private air ambulance companies and the Department of Defense to determine their capacity for transporting patients with an emerging infectious disease such as SARS, and working to develop agreements under which U.S. medical facilities near international ports of entry will accept medically evacuated patients with infectious diseases such as SARS. HHS, State, and WHO provided written comments on a draft of this report (see apps. IV, V, and VI for a reprint of HHS’s, State’s, and WHO’s comments). They also provided technical and clarifying comments that we have incorporated where appropriate. HHS said the report is a good summary of the SARS outbreak in Asia and the actions taken by WHO, affected countries, and U.S. agencies. HHS stated that the report’s recommendations are appropriate and emphasized the national and international interagency collaboration that will be required to implement them in preparation for the next epidemic. HHS also noted that to carry out some of the recommendations, sensitive legal and privacy issues and diplomatic concerns must be carefully addressed. HHS also noted that the report contains a useful overview of WHO’s efforts to revise its International Health Regulations and correctly ties WHO’s increased effort to the impact of SARS and lessons learned. In that regard, HHS provided additional information on coordination and collaboration efforts it took during the outbreak. State indicated that the report is a useful summary of the SARS outbreak and its impact and documents important lessons for other infectious disease outbreaks beyond the 2003 SARS epidemic. 
Regarding our first recommendation, State said it is committed to working with WHO and its member states to strengthen the response capacity of WHO's global infectious disease network. Regarding our recommendation on contact tracing of arriving passengers infected with or exposed to an infectious disease, State noted that it has been working on this issue with its interagency partners since the SARS outbreak but underscored that serious legal issues still exist for both the United States and other governments. State also agreed with our recommendation on developing procedures for arranging medical evacuations during an airborne infectious disease outbreak. State indicated that it is working with CDC to develop protocols on how to handle medical evacuations for quarantinable diseases but noted that capacity for such medical evacuations will be limited, as will the capacity of U.S. medical facilities to handle a large influx of patients. WHO stated that, overall, the report provides a factual analysis of the events surrounding the emergence of SARS and addresses the major weaknesses in national and international control efforts. WHO noted, however, that the report presents major criticisms of the response by China, Hong Kong, and Taiwan to SARS but does not reflect these governments' actions throughout the SARS epidemic or the depth and intensity of their control efforts later on. WHO also stated that the report puts little emphasis on other countries that experienced problems—Canada, for example. We disagree that the report does not adequately balance the governments' shortcomings with accomplishments, as the report includes specific sections on improved screening and reporting of SARS cases, rapid isolation and contact tracing, enhanced hospital infection control practices, and quarantine measures. The report details steps Asian governments have taken in response to SARS to build capacity for future outbreaks. 
The preponderance of our evidence on Asian governments' response was provided directly by Chinese, Hong Kong, and Taiwan government and public health officials and from post-SARS evaluation reports produced by these governments and WHO-sponsored conferences. We focused our report on the response of China, Hong Kong, and Taiwan since 95 percent of the SARS cases occurred there. The response of other countries, such as Canada, was outside the scope of our examination. Regarding our discussion of WHO's global infectious disease network, WHO stated that GOARN is one of the mechanisms by which WHO mobilizes technical resources for outbreak investigation and response and provided further information about the role of the Western Pacific Regional Office (WPRO) in the SARS response. We clarified the role of GOARN and expanded our discussion on the activities of WPRO. WHO also said that its response was challenged, but not constrained, by limited resources. While we agree with this more general characterization, we believe that not being able to obtain the appropriate multidisciplinary staff and sustain a response over time were significant constraints that warrant serious attention in preparing for future emerging infectious diseases. WHO also noted that the world's dependence on a fragile process and on the personal commitment and sacrifice of WHO and GOARN staff is a concern.

To assess WHO's actions to respond to SARS in Asia, we analyzed WHO policy, program, and budget documents, including WHO's Web-based situation updates and guidelines that served as the primary instrument for disseminating information on SARS. We interviewed WHO officials responsible for managing the international response at WHO headquarters in Geneva and public health specialists who served on country teams that were deployed to Asia. We examined WHO's GOARN, including its guiding principles and how it operated during the SARS outbreak. 
We also interviewed Asian government officials in Beijing, Guangdong Province, Hong Kong, and Taipei who received WHO’s technical advice and support; U.S. government officials; and recognized experts within the public health community. To assess the role of the U.S. government in responding to SARS in Asia and limiting its spread into the United States, we analyzed program documents and interviewed officials from the Departments of Health and Human Services, State, Defense, and Homeland Security, and the U.S. Centers for Disease Control and Prevention (CDC). To examine CDC’s ability to trace travelers who may have been exposed to an infectious disease, we interviewed officials from the Air Transport Association and the Department of Transportation and reviewed applicable legislation and regulations. To assess State’s ability to provide medical evacuation of U.S. citizens, we examined CDC guidelines on air transport of SARS patients and interviewed officials from major private medical evacuation companies. We also interviewed U.S. embassy (Beijing), consulate (Hong Kong and Guangzhou), and American Institute in Taiwan officials responsible for managing the U.S. government response at the country level. To describe how governments in Asia responded to the SARS outbreak, we focused on those parts of Asia most affected by SARS in the 2002-2003 outbreak, including China, Hong Kong, and Taiwan. While in the region, we met with public health officials at various levels responsible for managing their governments’ public health response, including senior ministry of health and provincial and municipal government officials, as well as hospital administrators and health care workers. We also examined government documents on public health programs and post-SARS evaluations, and reviewed applicable China, Hong Kong, and Taiwan laws and regulations. 
To describe the economic impact of SARS in Asia, we reviewed impact estimates provided by (1) the Asian Development Bank’s Economic and Research Department, which used a simulation model from Oxford Economic Forecasting; (2) a simulation model using data from the Global Trade Analysis Project Consortium; and (3) a simulation model by Global Insight, a leading U.S. economic data and forecasting firm. Specifics of each of these models are discussed in appendix III. Another organization, the Far Eastern Economic Review, a regional economic business weekly, gathered studies and data on SARS and reported a summary cost estimate that we also reviewed. To supplement our analysis of these impact estimates, we examined trends in official macroeconomic data as reported by the countries’ central banks or departments of statistics, the Asian Development Bank, the Organization for Economic Cooperation and Development, and the World Travel and Tourism Association. Trends in international airline traffic were obtained from the International Air Transport Association. We corroborated our findings with information provided by the U.S. National Intelligence Council and interviews with government officials in Asia. Finally, to examine the status of efforts to update the International Health Regulations, we reviewed the current International Health Regulations, a draft of WHO’s proposed revision of the regulations, the initial U.S. government response to the proposed revisions, and the WHO constitution. We also interviewed WHO and U.S. government officials who are actively engaged in the revision process and other legal experts to determine the potential impacts of the revised rules. We performed our work from July 2003 to April 2004 in accordance with generally accepted government auditing standards. We are sending copies of this report to the Secretaries of Health and Human Services, State, and Defense; appropriate congressional committees; and other interested parties. 
We will also make copies available to others upon request. In addition, the report will be available at no charge on GAO's Web site at http://www.gao.gov. If you or your staff have any questions, please contact one of us. Other contacts and key contributors are listed in appendix VII.

Appendix II lists key worldwide events during the SARS outbreak, from November 2002, when the disease first emerged, to the most recent reported cases in January 2004. First known case of atypical pneumonia, later determined to be SARS. World Health Organization (WHO) influenza expert attends workshop in Beijing and learns from a participant from Guangdong Province of a “serious outbreak with high mortality and involvement of health care staff.” Global Public Health Intelligence Network (GPHIN) picks up reports of a “flu outbreak” in China. WHO requests further information from China on the influenza outbreak. Chinese government replies that influenza activity in Beijing and Guangdong is normal and that its surveillance system detected no unusual strains of the virus. Infection in second city in Guangdong Province. Guangdong’s provincial health authorities produce a report about the outbreak detailing the nature of transmission, clinical features, and suggested preventive measures. The report is circulated to hospitals in the province, but is not shared with WHO or Hong Kong. In multiple locations, the WHO Beijing office, Global Outbreak Alert and Response Network (GOARN) partners, and the U.S. Centers for Disease Control and Prevention (CDC) receive reports of a “strange contagious disease” and “pneumonic plague” causing deaths in Guangdong Province. In China and Hong Kong, the Chinese Center for Disease Control and Prevention erroneously announces that the probable causative agent of the atypical pneumonia is Chlamydia. At the same time, cases of avian influenza in a family that traveled between Hong Kong and China result in two deaths. 
This leads to speculation that the atypical pneumonia outbreak is caused by avian influenza. WHO activates its global influenza laboratory network and calls for heightened global surveillance. First superspreader event in Hong Kong: A physician from Guangdong Province stays at the Metropole Hotel in Hong Kong and is soon hospitalized with respiratory failure. While at the hotel, he transmits the disease to at least 16 other people. A team of WHO experts, including CDC staff, arrives in Beijing but is given limited access to information; Chinese authorities deny WHO’s repeated requests for permission to travel to Guangdong Province. GPHIN detects Chinese newspaper report that more than 50 hospital staff in Guangzhou are infected with “mysterious pneumonia.” Chinese-American businessman admitted to the French Hospital in Hanoi with fever and respiratory symptoms. WHO official Dr. Carlo Urbani notifies WHO office in Manila of an unusual disease. WHO headquarters moves to heightened state of alert. State Department establishes an intradepartmental working group to deal with impact of outbreak. Woman who stayed at the Metropole Hotel in Hong Kong is hospitalized with respiratory symptoms. Second superspreader event in Hong Kong: a resident who had visited the Metropole Hotel is admitted to hospital with respiratory symptoms; within a week, at least 25 hospital staff, all linked to the patient’s ward, develop respiratory illness. Toronto woman who also stayed at the Metropole Hotel in Hong Kong dies at home. Shortly after, her son becomes ill, is admitted to Scarborough Grace Hospital, and dies. His admission triggers an outbreak at the hospital. Businessman with travel history to Guangdong Province is hospitalized with respiratory symptoms. Chinese Health Ministry asks WHO for technical and laboratory support to clarify cause of the Guangdong outbreak of atypical pneumonia. 
WHO issues global alert about cases of severe atypical pneumonia following mounting reports of spread among hospital staff in Hong Kong and Hanoi. CDC offers assistance to WHO. WHO sends emergency alert to GOARN partners. CDC activates Emergency Operations Center. WHO issues rare global travel advisory, names the mysterious illness “severe acute respiratory syndrome” (SARS), and declares it a “worldwide health threat.” WHO issues its first definitions of suspect and probable cases, calls on travelers to be aware of symptoms, and issues advice to airlines. CDC issues travel advisory suggesting postponement of nonessential travel to Hong Kong, Guangdong Province, and Hanoi. CDC issues preliminary case definition for suspected SARS and initiates domestic surveillance for SARS. First suspected U.S. case is identified. CDC begins distributing health alert cards to passengers arriving from Hong Kong at four international airports. CDC team arrives in Taiwan to assist in SARS response. WHO sets up worldwide network of laboratories to expedite detection of causative agent and to develop a robust and reliable diagnostic test. A similar network is set up to pool clinical knowledge on symptoms, diagnosis, and management. A third network is set up to study SARS epidemiology. China joins WHO’s collaborative networks, initially set up on March 17. Third superspreader event in Hong Kong: Health authorities announce that 213 residents of Amoy Gardens housing estate have been hospitalized with SARS. WHO issues most stringent travel advisory in its 55-year history, recommending that people postpone all but essential travel to Hong Kong and Guangdong Province until further notice. WHO team arrives in Guangdong. President Bush signs executive order adding SARS to the list of quarantinable communicable diseases. This order provides CDC, through its Division of Global Migration and Quarantine, with the legal authority to implement isolation and quarantine measures. 
WHO laboratory network announces conclusive identification of SARS causative agent: a new coronavirus. Change in political stance by Chinese leadership. Top leaders advise officials not to cover up cases of SARS; mayor of Beijing and Health Minister, both of whom downplayed the SARS threat, are removed from their posts. First country to successfully contain its outbreak of SARS. State Department holds interagency meeting on SARS. WHO sends officials to Taiwan to assist CDC team. First global consultation on SARS epidemiology concludes its work, confirming that available evidence supports the control measures recommended by WHO. World Health Assembly resolution recognizes the severity of the threat that SARS poses and calls on all countries to report cases promptly and transparently. A second resolution strengthens WHO’s capacity to respond to disease outbreaks. WHO holds Global Conference on SARS to review scientific findings on SARS and examine public health interventions to contain it. WHO announces that the global SARS outbreak has been contained. Singapore announces that a medical researcher is infected with SARS. Based on an investigation of this incident, WHO concludes that the patient was accidentally infected in the laboratory. Taiwan announces that a researcher is infected with SARS. Public health authorities conclude that the infection was acquired in a laboratory. A man in Guangdong Province is hospitalized with SARS-like symptoms on December 20. Chinese authorities inform WHO on December 26. After initial diagnostic tests are inconclusive, authorities send the samples to two WHO-designated reference laboratories in Hong Kong. On January 5, the laboratories confirm that the patient has SARS. None of the patient’s contacts contracted SARS. A woman in Guangdong Province is hospitalized with SARS-like symptoms on December 31. Chinese authorities inform WHO and samples are submitted to two WHO-designated reference laboratories in Hong Kong. 
On January 17, Chinese authorities announce that the patient has SARS. None of the patient’s contacts contracted SARS. A man in Guangdong Province is hospitalized with SARS-like symptoms on January 6. Chinese authorities inform WHO and samples are submitted to WHO-designated reference laboratories in Hong Kong. On January 27, WHO announces that the patient has probable SARS. A doctor in Guangdong Province becomes ill with SARS-like symptoms and is diagnosed with pneumonia on January 14. However, he was not properly isolated in hospital until January 16, was not reported to China’s Ministry of Health as a suspected SARS case until January 26, and WHO was not informed until January 30. A team of international experts from WHO conducts a joint investigative mission in Guangdong Province with colleagues from China’s Ministry of Health, Ministry of Agriculture, the Chinese Center for Disease Control and Prevention, and the Guangdong Center for Disease Control and Prevention to identify the sources of infection of the most recent SARS cases. The team finds no definitive source of infection for any of the cases.

Estimates of the economic impact of SARS have been produced by multiple sources and vary due to the inexact nature of estimating the impact of a recent event such as SARS. When the SARS outbreak first emerged, a number of institutions began estimating the potential economic impact of the disease. These institutions included private investment banks, industry organizations, academics, consulting firms, and international financial institutions such as the Asian Development Bank. To produce their estimates, these institutions had to incorporate assumptions regarding the expected duration of SARS, the number of sectors affected, and country-specific macroeconomic conditions. As such, estimates of economic impact have been broad in nature, have varied depending on model assumptions, and were often revised when actual data were received. 
For example, some of the initial economic impact estimates were revised downward once data emerged showing China’s strong economic growth during the first 4 months of 2003. To describe the economic impact of SARS in Asia, we primarily relied on impact estimates generated from institutions using simulation models. Table 3 provides information on the models we reviewed. As the table shows, each of these models was used to analyze a low scenario case and a high scenario case, which differed based on assumptions regarding the expected duration of the SARS outbreak and hence the expected duration of the shock to the economy resulting from SARS. To accord with the shorter duration of the actual outbreak, the low scenario results estimated the economic impact of SARS at roughly 0.5 percent to 2 percent of gross domestic product (GDP). All three models show that the largest economic impacts as a percentage of GDP were estimated for Hong Kong and Singapore, which is due to their previously lowered consumption demand and high share of tourism and retail. In addition to the model estimates provided in table 3, we also reviewed SARS cost estimates provided by the Far Eastern Economic Review. The Far Eastern Economic Review’s estimate of $11 billion was generated by calculating an average estimated percentage loss in GDP using reports from various governments and financial institutions and applying that average to the nominal GDP figures provided by the International Monetary Fund.

In addition to the persons named above, Janey Cohen, Patrick Dickriede, Anne Dievler, Suzanne Dove, Sharif Idris, Roseanne Price, Kendall Schaefer, and Richard Seldin made key contributions to this report.

The General Accounting Office, the audit, evaluation and investigative arm of Congress, exists to support Congress in meeting its constitutional responsibilities and to help improve the performance and accountability of the federal government for the American people. 
GAO examines the use of public funds; evaluates federal programs and policies; and provides analyses, recommendations, and other assistance to help Congress make informed oversight, policy, and funding decisions. GAO’s commitment to good government is reflected in its core values of accountability, integrity, and reliability. The fastest and easiest way to obtain copies of GAO documents at no cost is through the Internet. GAO’s Web site (www.gao.gov) contains abstracts and full-text files of current reports and testimony and an expanding archive of older products. The Web site features a search engine to help you locate documents using key words and phrases. You can print these documents in their entirety, including charts and other graphics. Each day, GAO issues a list of newly released reports, testimony, and correspondence. GAO posts this list, known as “Today’s Reports,” on its Web site daily. The list contains links to the full-text document files. To have GAO e-mail this list to you every afternoon, go to www.gao.gov and select “Subscribe to e-mail alerts” under the “Order GAO Products” heading.
Severe acute respiratory syndrome (SARS) emerged in southern China in November 2002 and spread rapidly along international air routes in early 2003. Asian countries had the most cases (7,782) and deaths (729). SARS challenged Asian health care systems, disrupted Asian economies, and tested the effectiveness of the International Health Regulations. GAO was asked to examine the roles of the World Health Organization (WHO), the U.S. government, and Asian governments (China, Hong Kong, and Taiwan) in responding to SARS; the estimated economic impact of SARS in Asia; and efforts to update the International Health Regulations. WHO implemented extensive actions to respond to SARS, but its response was delayed by an initial lack of cooperation from officials in China and challenged by limited resources for infectious disease control. WHO activated its global infectious disease network and deployed public health specialists to affected areas in Asia to provide technical assistance. WHO also established international teams to identify the cause of SARS and provide guidance for managing the outbreak. WHO's ability to respond to SARS in Asia was limited by its authority under the current International Health Regulations and dependent on cooperation from affected areas. U.S. government agencies played key roles in responding to SARS in Asia and controlling its spread into the United States, but these efforts revealed limitations. The Centers for Disease Control and Prevention supplied public health experts to WHO for deployment to Asia and gave direct assistance to Taiwan. It also tried to contact passengers from flights and ships on which a traveler was diagnosed with SARS after arriving in the United States. However, these efforts were hampered by airline concerns and procedural issues. The State Department helped facilitate the U.S. government's response to SARS but encountered multiple difficulties when it tried to arrange medical evacuations for U.S. 
citizens infected with SARS overseas. Although the Asian governments we studied initially struggled to recognize the SARS emergency and organize an appropriate response, they ultimately established control. As the governments have acknowledged, their initial response to SARS was hindered by poor communication, ineffective leadership, inadequate disease surveillance systems, and insufficient public health capacity. Improved screening, rapid isolation of suspected cases, enhanced hospital infection control, and quarantine of close contacts ultimately helped end the outbreak. The SARS crisis temporarily dampened consumer confidence in Asia, costing Asian economies $11 billion to $18 billion and resulting in estimated losses of 0.5 percent to 2 percent of total output, according to official and academic estimates. SARS had significant, but temporary, negative impacts on a variety of economic activities, especially travel and tourism. The SARS outbreak added impetus to the revision of the International Health Regulations. WHO and its member states are considering expanding the scope of required disease reporting to include all public health emergencies of international concern and devising a system for better cooperation with WHO and other countries. Some questions are not yet resolved, including WHO's authority to conduct investigations in countries absent their consent, the enforcement mechanism to resolve compliance issues, and how to ensure public health security without unduly interfering with travel and trade.
The Food Stamp Program is designed to promote the general welfare and to safeguard the health and well-being of the nation’s population by raising the nutrition levels of low-income families. Recipients use their food stamp benefits to purchase allowable food products from authorized retail food merchants. Eligibility for food stamp benefits is determined on a household basis. A household can be either an individual or a family or other group that lives together and customarily purchases and prepares food in common. The value of food stamp benefits for a household is determined by the number of eligible household members and their income, adjusted for assets and such costs as shelter and utilities. The household’s monthly food stamp allotment increases with each additional member, provided income limits are not exceeded. Household members who are incarcerated and fed by a correctional facility are not eligible for food stamp benefits and are not to be included in the household for purposes of calculating the food stamp benefit. Households that receive food stamps are required to report changes in household membership, such as a member’s incarceration, to the administering state or local agency. Within USDA, the Food and Consumer Service (FCS) administers the Food Stamp Program through agreements with state agencies. FCS is responsible for approving state plans for operation and ensuring that the states are administering the program in accordance with regulations. States are required to establish a performance reporting system to monitor the program, including a quality control review process to help ensure that benefits are issued only to qualifying households and that the benefit amounts are correct. State agencies are responsible for imposing penalties for violations of program requirements and for recovering food stamp overpayments. The program is administered at the local level by either a state agency or a local welfare agency, depending on the state. 
In California, county agencies operate the program at the local level, while in New York State, districts operate the program. The state agency supervises operations in both states. In Florida and Texas, state agencies operate the program through district and regional offices, respectively. Whatever the administering authority, local service centers work directly with clients to certify household eligibility and determine benefit amounts at the time of application and at least annually thereafter. To identify prisoner participation, we performed a computer match comparing 1995 food stamp rolls with inmate rolls. To ensure that our analyses resulted in valid matches, we (1) verified the prisoners’ social security numbers through the Social Security Administration’s verification system, (2) used only those matches showing dates of incarceration that coincided with the dates that food stamp benefits were issued to the household, and (3) used only those matches showing that the prisoner had been incarcerated for at least a full month and that sufficient time had elapsed for the household to notify the state of the change and for the state to take action. The food stamp rolls covered three large states (Florida, New York, and Texas) and one large county (Los Angeles, California). (See app. I.) The inmate rolls covered the state prison population in the four states and the jail population in large metropolitan areas of each state, that is, Los Angeles County, California; Dade County, Florida; New York City, New York; and Harris County, Texas. Our detailed methodology is discussed in appendix II. During calendar year 1995, about $3.5 million in food stamp benefits were issued on behalf of state prison and county jail inmates claimed as household members in the locations we examined. (See table 1.) Of this total, nearly 9,500 state prison inmates included as household members accounted for an estimated $2.6 million in benefits. 
About 2,700 county jail inmates accounted for over $900,000 in benefits. The inmate participants that we identified in our match were members of households of varying sizes, some with multiple members and some with a single member—the prisoner was the household. For households with multiple members, the household continued to receive its monthly benefits, which were calculated on the presumption that the prisoner was present in the home. For single-member households, someone other than the prisoner was issued the benefits. The stamps could have been issued either to a person designated as the prisoner’s authorized representative or to someone who fraudulently represented himself or herself as the prisoner to receive the benefits. Food stamp benefits are issued either as coupons or via electronic benefit transfer systems. For coupons, issuance procedures require that the client present various items of identification, such as Food Stamp Program cards bearing the client’s signature, in order to pick up food stamps from a service center or other outlet. A small number of clients receive their coupons through the mail. Under electronic benefit transfer systems, the state agency issues access cards (similar to credit cards) and personal identification numbers to clients, who obtain benefits through point-of-sale terminals in stores. However, the effectiveness of the issuance procedures in ensuring that only eligible participants receive benefits depends on how rigorously the procedures are implemented by the responsible staff.

Prisoners are able to participate in the Food Stamp Program because local welfare agencies seldom verify the composition of a household. Instead, most agencies rely on food stamp applicants to provide accurate household information and to report subsequent changes, such as the incarceration of a household member. Most agencies do not, for example, routinely compare lists of prison or jail inmates with lists of household members. 
In general, the Food Stamp Program has to balance the issues of client convenience, administrative simplicity, and payment accuracy; consequently, controls over such eligibility factors as household composition are not rigorous. A household that wishes to receive benefits must present an application listing members and provide information about their income and other eligibility factors. Caseworkers review this information, interview a household representative, and certify eligibility. In addition, they recertify the household at least annually. However, at no time are all household members required to appear and present identification. Furthermore, clients are responsible for identifying changes in household composition. According to FCS' 1995 quality control review, which identified error rates for each state by reviewing a random sample of cases, client errors or misrepresentations contributed significantly to incorrect benefits, particularly when an overpayment occurred. FCS reported that overpayments occurred in about 15 percent of the cases reviewed nationwide and that 62 percent of the dollar value of overpayments was attributable to inaccuracies in client-provided information. Nevertheless, FCS' regulations do not require verification of client-provided information on household composition unless the caseworker deems the information "questionable." The regulations allow each state agency to develop guidance for identifying questionable information. In the states we visited, the guidance defined questionable information as applicants' statements that were contradictory or did not agree with information that was in the case record or otherwise available to the caseworker. When the caseworkers in the states we visited suspected fraudulent information, they could refer the application to investigators before granting aid. 
Investigators in each state told us that they attempted to verify questionable information on household composition by visiting homes and making collateral contacts to confirm information with friends, neighbors, or landlords. According to the investigators, these techniques were hit-or-miss, time-consuming, and costly undertakings, and provided information that was only as reliable as its source. Furthermore, investigative resources were generally very limited; for example, the Miami area, which contains about 26 percent of Florida’s food stamp recipients, had just one field investigator to conduct household visits. Some agencies have employed computer matching as a means of identifying ineligible recipients, such as prisoners, but the practice does not appear to be widespread. According to FCS, four states (Florida, Massachusetts, Missouri, and New York) currently perform a monthly computer match between state prisons’ inmate records and food stamp rolls; two states were in the process of developing such a match; and one state performed an annual match. FCS’ regional offices identified only one local agency that compared food stamp recipients with county jail inmates. However, our discussions with officials in the states we visited indicated that the actual number of local agencies conducting such matches was larger. For example, in California, the state agency reported that 14 of the state’s 58 county agencies collected and reviewed data on local jails’ inmates at least once a week. Of the states we visited, Florida and New York operated matching programs, Texas was in the process of establishing a program, and California had plans to implement a program at some future date. 
While Florida and New York conduct routine matching programs, we identified prisoner participation in the Food Stamp Program in these states because (1) our matches covered a time period not covered by the states’ matches and (2) we used prisoners’ social security numbers, which were verified by the Social Security Administration, a step the states had not taken. Although computer matching of inmate data is not used often, our test in four states demonstrates that it can be a useful technique for identifying households that improperly include prisoners as members. A study by an FCS contractor of other computerized information verification processes in place at state agencies demonstrated that such matches are cost-effective, particularly when properly targeted. Ongoing and developing state matching programs could benefit from use of targeted matching and from sharing experiences. Officials in the four states we visited viewed the matching of prisoner data with food stamp data as a fairly straightforward, effective process. These officials said that they did not encounter or foresee any privacy issues that precluded such matching. Furthermore, while they were unable to provide detailed cost or savings information regarding their prison match programs, the two states we visited that had implemented such programs believed that they were beneficial. New York State did not track implementation costs but calculated savings in the Food Stamp Program of over $900,000 from August 1995 to April 1996. Because Florida was legislatively mandated to implement computer prison matches, the cost of implementation was not a major concern and therefore was not tracked. Florida has yet to calculate savings in the Food Stamp Program. Although detailed data supporting the cost-effectiveness of a computer prison match is not available from the states we visited, strong evidence exists that such a match, particularly when properly targeted, is cost-effective. 
The Income and Eligibility Verification System (IEVS) compares wage, benefit, and other payment information reported by food stamp clients with records in six databases, including those maintained by the Social Security Administration, the Internal Revenue Service, and state unemployment insurance agencies. After this matching program was implemented, some caseworkers charged that much of the information provided in the IEVS matches did not lead to savings in the Food Stamp Program. The problems most often cited were (1) out-of-date information, (2) lack of agreement in the time periods covered by data sources, and (3) duplicate data. In response, in 1991, FCS engaged a contractor to evaluate the cost-effectiveness of the IEVS system in two sample states, Arizona and Michigan. Various targeting criteria, such as beneficiaries over a specific age or matches when specific dollar thresholds were exceeded, were used to select cases for follow-up. All of the targeted IEVS matching programs reviewed in the study were found to be cost-effective. The study determined that the largest cost of the IEVS matching program is the time spent by caseworkers on follow-ups, approximately $5 to $7 per follow-up. Data-processing costs averaged 2 cents per case, and Arizona spent approximately $104,000 to develop its software. Every match had a cost-effectiveness ratio (program savings compared with the costs of the match, targeting, follow-up and claims collection) greater than 1, indicating that every dollar spent on IEVS returned more than a dollar in savings to the program. In addition, each match was found to have positive net savings for the program, with the more narrowly targeted matches yielding the largest net savings, since they focused follow-up actions on the more egregious situations. The states we visited were implementing their prison matches in a manner that was very similar to that reported in the study. 
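The study's cost-effectiveness test reduces to a simple ratio of program savings to the combined costs of the match, targeting, follow-up, and claims collection. In the sketch below, only the $5-to-$7 per-follow-up cost comes from the study; every other figure is a hypothetical illustration.

```python
def cost_effectiveness(savings, match_cost, targeting_cost, followup_cost, collection_cost):
    """Ratio of program savings to total matching costs; a ratio above 1 means net savings."""
    return savings / (match_cost + targeting_cost + followup_cost + collection_cost)

# Hypothetical targeted match: 1,000 follow-ups at $6 each (midpoint of the
# study's $5-$7 range); the other cost and savings figures are assumed.
costs = {"match_cost": 500, "targeting_cost": 300,
         "followup_cost": 1_000 * 6, "collection_cost": 2_000}
ratio = cost_effectiveness(savings=25_000, **costs)
net_savings = 25_000 - sum(costs.values())   # positive when the match pays for itself
```

Because follow-up labor dominates the cost side, narrowing the targeting criteria (fewer, more egregious cases) raises both the ratio and the net savings, which is consistent with the study's finding.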
Matches were sent to local offices, where caseworkers, specialists, clerical staff, and fraud investigators could participate in the process. The case file information was reviewed, the client was contacted, and the discrepancy was verified or refuted. If the discrepancy was verified, the client's eligibility and benefits were redetermined and, as appropriate, overpayments could be recovered and fraud investigations conducted. Our test showed that developing the computer programs to identify prisoner participation did not require a large investment of a programmer's time. Our programmer required an average of about 20 days to develop a series of substantially different programs for each state. The 20 days included time to become familiar with the data as well as to write, test, and execute the programs. State programmers may require less time because they are already familiar with the food stamp data. As in the IEVS study, we used some targeting criteria to enhance the effectiveness of the matching process. Before using the inmate data, we sent the information to the Social Security Administration for verification of the prisoners' social security numbers (the identifier common to all major federal databases on individuals) to ensure that our cases did not include incorrect numbers that would render the match invalid. None of the states we visited with computer matching programs submitted inmate social security numbers to the Social Security Administration for verification. (The agency performs this service for government customers at no charge.) We matched only those social security numbers that had been verified by the Social Security Administration. (See app. III.) The majority of the inmate participants we identified in our match were found as a result of this verification process. (See table 2.) 
By selecting only prisoners (1) whose dates of incarceration matched the dates that food stamp benefits were issued to their household and (2) who had been incarcerated at least a full month, we avoided some of the pitfalls that have been or could be encountered by states implementing matching programs. For example, case analysts in Florida told us that they could not take any action on many of their matches because the prisoners had been incarcerated for only a few days or benefits had not actually been issued to the household during the period the prisoner was incarcerated. In other cases, an unverified social security number in the prison records resulted in a match with an eligible food stamp recipient. Our analysis of results reported from Florida’s match of June 1996 for Dade County indicates that of 674 matches, 423 resulted in no action taken by the caseworker; for 41 matches, the record did not indicate any review. On the positive side, 210 matches resulted in a case closure (household dropped from Food Stamp Program), removal of the participant from a case (individual dropped from household membership and benefits recalculated), a referral for fraud, or some combination of those actions. Florida officials acknowledged weaknesses in their matching process and stated that they intend to review and improve the process to better identify cases for which caseworkers could take action. The states we visited that had or were developing matches were acting with little or no knowledge of the matching efforts of other states. As a result, each state started without any information, rather than building on the experiences of others. Thus, any cost or time savings that could have arisen from the sharing of information were not realized. The participation of ineligible individuals undermines the credibility of the Food Stamp Program and results in overpayments. 
Conventional methods state agencies have used to verify the membership of food stamp households have not prevented households from including ineligible individuals, such as inmates in local jails and state prisons. Prisoners’ participation in the Food Stamp Program resulted in overpayments of $3.5 million for the locations where we conducted matches. A computer match of data on states’ food stamp participants and verified inmates could be a cost-effective method for identifying a prisoner’s participation in a food stamp household and thus provide the evidence needed to remove the prisoner from the calculation of a household’s eligibility and benefits. Some states have recognized that matching is a cost-effective way to reduce overpayments. Sharing of information on effective matching practices, such as methods of targeting the most productive cases, would benefit states. To identify state and county prisoners who are included as members of households receiving food stamps, we recommend that the Secretary of Agriculture actively encourage states to implement periodic computer matches of data on state and local prison inmates with data on participants in the Food Stamp Program. To facilitate this effort, we recommend that the Secretary of Agriculture direct FCS to (1) collect from the states that conduct matches information on the policies and procedures used to implement their matches and (2) evaluate, summarize, and disseminate to the states the policies and procedures that represent best practices, such as the verification of prisoners’ social security numbers with the Social Security Administration. We provided copies of a draft of this report to FCS for review and comment. In commenting on the draft report, FCS agreed with the report’s findings, conclusions, and recommendations. These comments, which appear as appendix IV, contained suggestions regarding the phrasing used in the report that we incorporated as appropriate. 
We conducted our work from March 1996 through February 1997 in accordance with generally accepted government auditing standards. Our detailed methodology is presented in appendix II. We are providing copies of this report to appropriate congressional committees, interested Members of Congress, and other interested parties. We will also make copies available to others on request. If you have any questions about this report, please contact me at (202) 512-5138. Major contributors to this report are listed in appendix V. Nationwide, California, Florida, New York, and Texas represent almost 36 percent of the cost of Food Stamp Program benefits and approximately 39 percent of the states’ prison population. The prison data in this table are based on the prison population as of June 30, 1995. In response to the Congress’ strong interest in reducing the level of fraud, waste, and abuse in the Food Stamp Program, we reviewed food stamp beneficiaries to determine whether prisoners, who are not eligible for food stamps, were inappropriately included as members of households receiving food stamps. Specifically, we determined (1) how many prisoners were included as members of households that received food stamp benefits and the estimated value of improper benefits that were issued to the households, (2) how prisoner participation could take place without detection, and (3) whether computer matching can be an effective method for identifying prisoner participation. To determine if inmates of correctional facilities were included as members of households that received food stamp benefits, and the estimated value of benefits that were issued to the households, we matched the food stamp records and state prison records of the four states with the largest Food Stamp Program benefits and the largest state prison populations. We also matched food stamp records and jail records in four metropolitan areas. 
Specifically: The Florida, New York, and Texas state welfare agencies provided us with computer files containing information on all members of households and the amount of household food stamp benefits issued during 1995. In California, this information is maintained only at the county level, so we obtained information only for Los Angeles County beneficiaries, who account for approximately one-third of the benefits California issues. The data provided personal identifiers, including name, social security number (SSN), date of birth, gender, and the months in which food stamp benefits had been issued to the household of which each individual was a member. The state agencies had verified the SSNs for the data on food stamp beneficiaries through the Social Security Administration's Enumeration Verification System (EVS). The state prison system in each state provided us with computer data on all prisoners incarcerated in a state facility for all or any part of 1995. The data provided the same personal identifiers as we obtained for food stamp beneficiaries and listed the admission and release dates for each period of incarceration during the year. To expedite the delivery of data, New York State simply listed each full month that a prisoner was incarcerated rather than providing specific dates. We verified the prisoners' SSNs through the Social Security Administration's verification system. Four large metropolitan county or local jail systems gave us permission to use data they had previously provided for our earlier review of erroneous Supplemental Security Income payments to prisoners. The systems, including one from each state in our review, were Los Angeles County, California; Dade County, Florida; New York City, New York; and Harris County, Texas. The local jail system data included all prisoners who were incarcerated as of specific dates—these dates were selected by the jail systems and were based on their available resources. 
The jail systems provided available personal identifiers, as listed above, and the date of incarceration. The jail inmates’ SSNs had been verified by the Social Security Administration’s verification system during our previous review. We matched the verified SSNs of prisoners in each state or local prison with the verified SSNs in the states’ records of membership in food stamp households. For those prisoners identified as members of households, we determined the periods in which food stamp issuance and incarceration coincided. We estimated the dollar value of food stamps issued to households with participating prisoner members by applying the state’s average monthly issuance per individual recipient from 1995 to each period where incarceration and issuance coincided. Food stamp benefits are calculated for households, not for individuals. As such, it is difficult to determine the exact value of benefits issued to a prisoner participating in a household, unless he or she is the only member of a household. Even then, the amount will vary from individual to individual, depending on factors such as income, assets, and the cost of shelter. Therefore, we relied on the average monthly benefit issuance per person in the locations we reviewed, which ranged from a high of $78.84 in New York State to a low of $68.89 in Los Angeles County. In recognition of the notification and processing time frames that allow 10 days for clients to report household changes and 10 days for the state agency to take action, we did not consider any issuance in the month of incarceration to be an overpayment. Furthermore, if a prisoner was admitted on or after the tenth day of the month, we did not consider issuance in the following month to be an overpayment. We prorated the average monthly issuance to determine the overpayment for days incarcerated in the month of discharge. 
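The counting rules above (apply the average monthly issuance to coinciding months, skip the admission month, skip the following month as well when admission falls on or after the 10th, and prorate the discharge month by days incarcerated) can be sketched as follows. The $78.84 average is the New York figure cited above; the dates and issuance pattern are hypothetical.

```python
import calendar
from datetime import date

def estimate_overpayment(admitted, released, avg_monthly_issuance, issuance_months):
    """Estimated overpayment for one incarceration spell under the counting rules above.

    issuance_months is the set of (year, month) pairs in which the household
    was issued benefits.
    """
    def next_month(y, m):
        return (y + 1, 1) if m == 12 else (y, m + 1)

    # Rule 1: issuance in the month of incarceration is not counted as an overpayment.
    y, m = next_month(admitted.year, admitted.month)
    # Rule 2: admission on or after the 10th also exempts the following month.
    if admitted.day >= 10:
        y, m = next_month(y, m)

    total = 0.0
    while (y, m) < (released.year, released.month):
        if (y, m) in issuance_months:          # full months of coinciding issuance
            total += avg_monthly_issuance
        y, m = next_month(y, m)
    # Rule 3: prorate the discharge month by days incarcerated.
    if (y, m) == (released.year, released.month) and (y, m) in issuance_months:
        total += avg_monthly_issuance * released.day / calendar.monthrange(y, m)[1]
    return round(total, 2)

# Hypothetical spell: admitted Jan 5, released Jun 15, benefits issued all year
overpayment = estimate_overpayment(date(1995, 1, 5), date(1995, 6, 15),
                                   78.84, {(1995, m) for m in range(1, 13)})
```

For the sample spell, four full months (February through May) plus half of June yield an estimate of about $354.78 at the New York average rate.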
Because of the quality control program operated by USDA's Food and Consumer Service (FCS) and the states' ongoing quality assurance efforts, we accepted the computerized food stamp data as reliable. The prison data, such as dates of incarceration and release, would have been very difficult to verify within the time frames of this audit because these data are sensitive, dispersed within the states, or not available in hard copy. State prison officials attested to the reliability of the admission and release data. They said that because these data are critically important, they are under the constant scrutiny of the courts, law enforcement authorities, and inmates. In our previous study of prisoners receiving supplemental security income, we verified a random sample of jail data and found the data reliable. To determine why prisoner participation was not detected, we asked FCS to identify state or local agencies that collect prison data and compare that data with data on food stamp recipients to identify prisoner participation. To discuss and review policy and procedures for verifying applicant data and any subsequent changes, we visited state agency officials in Sacramento, California; Tallahassee, Florida; Albany, New York; and Austin, Texas. We discussed fraud detection programs, quality control and assurance efforts, and methods of food stamp issuance with state officials. In addition, we visited social service administrative and service centers in the four large metropolitan areas we selected for review. At each location we observed and discussed the food stamp application, data verification, certification, and recertification processes. We discussed local fraud detection efforts and observed the food stamp issuance process. 
To determine whether computer matching can be an effective method for identifying prisoner participation, we discussed with agency officials in each of the states we visited the cost, quality, savings, and barriers to matching inmate data with state food stamp data. At the social service centers we visited, we discussed the quality of the matches and observed the follow-up process. To identify the effort associated with data matching to identify prisoner participation, we identified the time used by our programmer to develop and implement the match programs and reviewed a cost study performed for FCS regarding similar matching routines. Verification of prisoners' SSNs by the Social Security Administration's EVS significantly increased the number of valid SSNs that we could use in our matches. The state prison systems provided us with available SSNs for all prisoners incarcerated in a state facility for all or any part of 1995. Therefore, this table contains more prisoner data than table I.1, which contains data from one point in time. As shown in table III.1, over 60 percent of 522,525 prisoner SSNs were validated as accurate and usable as submitted. EVS identified an additional 120,525 valid SSNs for prisoners by comparing submitted prison data (SSN if available, date of birth, name, and gender) against information contained in Social Security Administration records. This comparison yielded numbers not contained in the prison records, corrected transposition errors, and substituted correct numbers for invalid numbers. Because of the Social Security Administration's confidence in the EVS process and its historical reliability, we accepted these additional validated SSNs for use in our match process. Similarly, the SSNs for local jail prisoners had been validated through EVS. Major contributors to this report: Keith Oleson, Assistant Director; David Moreno, Project Leader; Brad Dobbins; Don Ficklin; Jon Silverman.
GAO provided information on Food Stamp Program overpayments, focusing on: (1) how many prisoners were included as members of households that received food stamp benefits, hereafter referred to as prisoner participation, and the estimated value of improper benefits that were issued to the households; (2) how prisoner participation could take place without detection; and (3) whether computer matching can be an effective method for identifying prisoner participation. GAO noted that: (1) despite federal regulations prohibiting inmates of correctional institutions from participating in the Food Stamp Program, GAO identified 12,138 inmates in the areas it examined who were included in households receiving food stamps; (2) these households improperly collected an estimated $3.5 million in food stamp benefits; (3) prisoner participation goes undetected because agencies generally do not verify the information on household membership provided by food stamp applicants; (4) furthermore, according to officials of the Department of Agriculture's Food Stamp Program, most state or local agencies responsible for administering the program do not routinely collect and review lists of individuals incarcerated in state and local facilities to determine whether any of these individuals are being counted as members of food stamp households; (5) given the program's reliance on client-provided information, computer matching of lists of prisoners and food stamp household members provides a straightforward and potentially effective mechanism to accurately and independently identify prisoners' participation; and (6) while states have implemented various computer matching routines, such as the Income and Eligibility Verification System, which compares data on welfare clients with data on state and federal wages and benefits, many states have not yet implemented a computer matching program to identify prisoners participating in the Food Stamp Program.
VHA's health care system is geographically divided into 21 VISNs, each of which is headed by a VISN director. Each VISN comprises a network of VAMCs, and the VISN office serves as the basic budgetary and decision-making unit for providing health care services to veterans within that geographical area. Each VAMC and its affiliated CBOCs and ambulatory care centers are headed by a VAMC director, who manages administrative functions, and a chief of staff, who manages clinical functions for these facilities. VHA's Central Office establishes system-wide scheduling policy. VHA's scheduling policy establishes processes and procedures for scheduling medical appointments, and for ensuring the competency of staff directly or indirectly involved in the scheduling process. This policy is designed to help VAMCs meet VHA's commitment to scheduling medical appointments with no undue waits or delays. Specifically, VHA's scheduling policy includes, but is not limited to, the following requirements:
- Requires VAMCs to use VHA's Veterans Health Information Systems and Technology Architecture (VistA) medical appointment scheduling system to schedule medical appointments.
- Requires VAMCs to keep appointment schedules open and available for patients to make medical appointments at least 3 to 4 months into the future.
- Requires schedulers to record in the VistA scheduling system, as the desired date, the date on which the patient or provider wants the patient to be seen. To determine the desired date, schedulers should be in communication with the patient when scheduling the medical appointment.
- Requires schedulers to record the desired date correctly and describes how to determine and record the desired date for new patients (patients who have not been seen by a health care provider in a clinic within the past 2 years, including those scheduled in response to a consult request), as well as how to determine the desired date for established patients' follow-up medical appointments (patients who have been seen within the past 2 years).
- Requires VAMCs to track new patients waiting for medical appointments using the electronic wait list within VistA and to remind established patients of follow-up medical appointments using the recall/reminder software within VistA, which enables clinics to create a list of established patients who need follow-up medical appointments more than 3 or 4 months in the future.

Additionally, VHA has a separate directive that establishes policy on the provision of telephone service related to clinical care, including facilitating telephone access for medical appointment management. Officials at the VHA central office, VISN, and VAMC levels all have oversight responsibilities for the implementation of VHA's scheduling policy. In the VHA central office, the Director of Systems Redesign, through the Office of the Deputy Undersecretary for Health for Operations and Management, is responsible for the oversight and implementation of medical appointment scheduling requirements. This oversight includes measurement and monitoring of ongoing performance. Each VISN director, or designee, is responsible for oversight of enrollment and medical appointment scheduling for eligible veterans. Each VAMC director, or designee, is responsible for ensuring that clinics' scheduling of medical appointments complies with VHA's scheduling policy, including clinics in affiliated CBOCs and ambulatory care centers. 
In addition, the VAMC director is responsible for ensuring that any staff who can schedule medical appointments in the VistA scheduling system have completed VHA scheduler training. (To obtain VHA health care services, veterans generally must enroll with VHA and register at a specific VAMC.) VAMC directors also certify annually their facilities' compliance with specific aspects of VHA's scheduling policy, as well as overall compliance, partial compliance, or noncompliance with VHA's scheduling policy as a whole. According to officials, VHA's central office does not penalize noncompliance with the certification and expects oversight to be managed locally. VHA's central office uses this certification of compliance as a tool for VAMCs to identify and improve performance on important aspects of the policy. VistA is the single integrated health information system used throughout VHA in all of its health care settings. There are many different VistA applications for clinical, administrative, and financial functions, including VHA's electronic medical record, known as the Computerized Patient Record System, and the scheduling system. As we reported in May 2010, the VistA scheduling system is more than 25 years old and inefficient in facilitating care coordination between different sites. In 2000, VHA began an initiative to modernize the scheduling system, but VA terminated the project in 2009. We also reported that VA's efforts to successfully replace the scheduling system were hindered by weaknesses in its project management processes and lack of effective oversight. In 1995, VHA established a goal of scheduling primary and specialty care medical appointments within 30 days to ensure veterans' timely access to care. In fiscal year 2011, VHA shortened the wait time goal to 14 days for both primary and specialty care medical appointments based on improved performance reported in previous years. Specifically, VA's reported wait times for fiscal year 2010 showed that nearly all primary care and specialty care medical appointments were scheduled within 30 days of the desired date. In fiscal year 2012, VHA added a goal of completing primary care medical appointments within 7 days of the desired date. To facilitate accountability for achieving its wait time goals, VHA includes wait time measures, referred to as performance measures, in its VISN and VAMC directors' performance contracts, known as Network Director Performance Plans (NDPP) and Facility Director Performance Plans (FDPP), respectively. Wait time performance measures also are included in VA's budget submissions and performance reports to Congress and stakeholders; the performance reports are published annually in VA's Performance and Accountability Report (PAR). However, the medical appointment wait time performance measures included in the NDPPs and FDPPs differ from the measures that are reported in the PAR. (See table 1.) For example, in fiscal year 2012, VHA's wait time goal of 7 days for primary care medical appointments was reflected in the NDPP and FDPP performance measures, but the fiscal year 2012 PAR reported primary care wait time performance using a 14-day standard. The performance measures have also changed over time. At the time of our review, all of VHA's medical appointment wait time performance measures reflected the number of days elapsed from the patient's or provider's desired date, which is recorded in the VistA scheduling system by VAMCs' schedulers. According to VHA central office officials, VHA measures wait times based on the desired date in order to capture the patient's experience waiting and to reflect the patient's or provider's wishes, which is not reflected by other available wait time measures. Medical appointment wait times used for measuring and assessing performance toward VHA's wait time goals are unreliable due to problems with recording the appointment desired date in the VistA scheduling system. 
Acknowledging limitations of the wait time measures, VHA uses additional information to monitor patients' access to medical appointments. VHA measures its medical appointment wait times as the number of days that have elapsed from the patient's or provider's desired date. Consequently, the reliability of reported wait time performance is dependent on the consistency with which schedulers record the desired date in the VistA scheduling system. However, aspects of VHA's scheduling policy and training documents regarding how to determine and record the desired date are unclear and do not ensure replicable and reliable use of the desired date. In addition, we found that some schedulers at select VAMCs did not correctly implement other aspects of VHA's scheduling policy for recording the desired date. Aspects of VHA's scheduling policy and related training documents on how to determine and record the desired date are unclear and do not ensure replicable and reliable recording of the desired date by the large number of staff across VHA who can schedule medical appointments in the VistA scheduling system. Specifically, VHA's scheduling policy and related scheduler training documents do not provide consistent guidance about when or whether the desired date should be based on the patient's or provider's preference. While the policy defines the desired date as "the date on which the patient or provider wants the patient to be seen," it also instructs that "the desired date needs to be defined by the patient" for new patient medical appointments, medical appointments scheduled in response to consult requests, and established patient follow-up medical appointments. When there is a conflict between the provider's and patient's desired dates, the scheduler is instructed to contact the provider for a decision on the return time frame, but the policy and training documents do not clearly describe under what circumstances the provider's date should be used as the desired date.
Further, providers may designate a desired appointment time frame for a follow-up medical appointment rather than a specific date; in such cases, the policy is unclear as to which date within the provider’s designated time frame the scheduler should enter as the desired date. The scheduling policy and training do not provide sufficient guidance to ensure consistent use of desired date in these various scheduling scenarios. VHA central office officials responsible for developing VHA’s scheduling policy and related training documents told us that the desired date is intentionally broad to account for all of the scheduling scenarios that may exist. However, leadership officials from the four VAMCs we visited and their corresponding VISNs reported problems with the unclear guidance on the desired date definition, and difficulties achieving consistent and correct use of the desired date by their schedulers. In addition, given the ambiguity in the scheduling policy and related training documents, there are different interpretations of the desired date between officials at different levels. For example, a VISN director stated that if a provider gives a desired time frame, the scheduler is to use the earliest date in that range as the desired date; whereas a provider in a specialty care clinic at the VAMC we visited within that VISN stated that the clinic uses the latest date in the range to meet the 14-day specialty care medical appointment scheduling goal. Additionally, when presented with various scheduling scenarios, schedulers at the VAMCs we visited determined and recorded the desired date differently. For example, when posed with the question “What date do you enter into the scheduling system as the desired date for an established patient follow-up medical appointment?”, 12 schedulers said they would enter the patient’s desired date, 4 said the provider’s date, and the remaining 3 said they used the next available medical appointment date. 
When posed with the question "If the patient's stated desired date conflicts with the provider's designated desired date or time frame, what date do you enter as the desired date?", 1 scheduler said that the patient's desired date would be entered, while another said the desired date has to come from the provider. The variation in schedulers' interpretation of the desired date suggests confusion about its correct use in different scheduling scenarios. Although VHA's scheduling policy is unclear about when to use the patient's or provider's desired date, it clearly instructs that, in all circumstances, the desired date should be defined without regard to schedule capacity, and should not be altered once established to reflect a medical appointment date the patient accepts because of a lack of medical appointment availability on the desired date. However, we found that at least one scheduler from each of the VAMCs we visited did not correctly implement these aspects of the policy when recording the desired date in the VistA scheduling system for specific hypothetical scheduling situations. As summarized in table 2, we identified the following three types of errors, each of which would have resulted in desired dates that did not accurately reflect the patients' or providers' desired dates and could also have resulted in the reporting of more favorable wait times for those medical appointments. Determined appointment availability prior to establishing desired date: Although VHA's scheduling policy requires schedulers to establish the desired date for a medical appointment without regard to schedule capacity, four schedulers from three VAMCs determined the clinic's next available medical appointment dates before establishing a desired date. Therefore, reported wait times for these appointments may not have accurately reflected how long patients actually waited.
Altered original desired date based on appointment availability: Three schedulers from two VAMCs established a desired date that was recorded in the VistA scheduling system independent of schedule capacity, but later altered the desired date because of appointment availability. Specifically, two of the three schedulers altered the originally established desired date to match the agreed-upon appointment date, which would have incorrectly resulted in no wait time reported for the appointment. The third scheduler altered the established desired date when there was no appointment availability within 2 weeks of that date, which would have resulted in an incorrectly reported wait time that was shorter than the patient actually waited from his or her original desired date. Recorded a new desired date when rescheduling appointment: Additionally, eight schedulers from three VAMCs incorrectly recorded a new desired date when rescheduling an appointment cancelled by the clinic, rather than keeping the original desired date as required by VHA's scheduling policy. Changing the desired date in this way would incorrectly decrease the reported wait times for the rescheduled appointments; veterans actually would wait longer than the reported wait times indicated. During our site visits, staff at some clinics told us they change medical appointment desired dates to show clinic wait times within VHA's performance goals. A scheduler at one primary care clinic specifically stated that she changes the recorded desired date to the patient's agreed-upon appointment date in order to show shorter wait times for the clinic. A provider at a specialty care clinic at another VAMC said providers in that clinic change the desired dates of their follow-up appointments if a patient cannot be scheduled within the 14-day performance goal.
In addition, the reported wait times, derived from the desired date, for one of the specialty care clinics we visited were inconsistent with the VAMC's account of appointment scheduling backlogs and scheduling challenges, indicating reported wait time inaccuracies. At the time of our site visit, officials from this clinic indicated that long waits for new patient appointments had existed prior to our visit and told us that the next available appointment for a new patient was in 6 to 8 weeks. However, reported wait time data for the month we visited showed that the clinic completed all new patient appointments on the desired date. This implausibly high percentage of appointments with zero-day wait times was inconsistent with information gathered during our site visit, raising questions about whether the desired date was recorded in accordance with VHA's scheduling policy. Furthermore, according to reported wait times for the VAMC, this clinic completed nearly all new patient appointments within 14 days of the desired date for the 2 months prior to our visit; similarly, in the 2 months after our visit, reported wait times for this clinic show completion of all new patient appointments within the 14-day time frame. VHA central office officials told us that they recognized the potential reliability issues of using the desired date for measuring wait times, but stated that use of the desired date is the best approach for capturing patient experience and preference. Officials told us that there is no single industry standard for measuring how long patients wait for appointments and that commonly used measures—such as capacity measures—do not account for patient preference or reflect how long the patient actually waited for an appointment. In addition, officials told us that the VistA scheduling system was not designed to capture data for management purposes, which has limited VHA's options for developing wait time measures.
Over the years, VHA has tried many different approaches to measuring wait times, such as capacity measures and using the date the appointment was created, rather than the desired date, to determine wait times. Although these measures were not officially used for performance accountability or reported in the PAR or the NDPP and FDPP in fiscal year 2012, data on these measures are available to VISNs and VAMCs for performance monitoring. Officials told us that improving how wait times are measured is an ongoing effort, and they have conducted research to identify wait time measures that most closely correlate with patient satisfaction and positive outcomes. At the time of our review, however, VHA had not implemented changes to wait time performance measures based on the results of this research. In addition to measuring medical appointment wait times, VHA central office officials reported that VHA also uses other information to monitor patients' access to medical appointments and to assist VISNs and VAMCs in managing clinics. Patient Satisfaction Measures: VHA central office and VISN officials with whom we spoke identified patient satisfaction as another important indicator of patient access to medical appointments, and VA has incorporated measures of self-reported patient satisfaction in its performance assessments. Specifically, the annual PAR includes a measure of overall patient satisfaction with VHA inpatient and outpatient healthcare in addition to the wait time measures derived from the desired date. Separate measures related to patient satisfaction with obtaining outpatient care were also among the measures available for VISN and VAMC directors to include in their fiscal years 2011 and 2012 performance plans (NDPP and FDPP). VHA also makes the satisfaction measures available to VISNs and VAMCs for continuous performance monitoring, as well as available to the public.
One of the four VAMCs we visited included the satisfaction measures in its performance plan for fiscal year 2012, and officials reported monitoring these measures on a regular basis. Officials from one VISN also specifically reported comparing its VAMCs' patient satisfaction scores to reported wait times to identify inconsistencies. However, the director of another VAMC said he does not rely on the satisfaction measures to monitor access because the data are dated by the time the VAMC sees the results; instead, he relies on the scheduling data derived from wait time measures. Clinic Management Information: In addition to wait time measures, VHA has other information available for VISNs and VAMCs to manage clinics and monitor and improve clinic access, such as no-show rates and consult lists. Several clinic officials reported monitoring no-show rates—the rate at which patients do not appear for their scheduled appointments—in order to reduce unused appointments, for example, by identifying and providing additional appointment reminders to patients with frequent no-shows. Officials from multiple specialty clinics said they monitor lists of consults—requests for specialty care appointments—to ensure they are acted upon in a timely manner. Although the time between when the provider requests a consult and when the specialty clinic reviews the consult can affect the total time a patient waits for a specialty appointment, this time is not reflected in current wait time performance measures. The four VAMCs we reviewed did not consistently implement certain elements of VHA's scheduling policy, including oversight requirements, which may result in increased wait times or delays in scheduling medical appointments. VAMCs also described other problems with scheduling timely medical appointments, including outdated technology, gaps in staffing of schedulers and providers, and telephone access problems.
The four VAMCs we visited did not consistently implement VHA’s scheduling policy, which is intended to facilitate the creation of medical appointments that meet patients’ needs with no undue waits or delays. This policy includes the use of the VistA scheduling system to schedule medical appointments, and the use of the electronic wait list to track new patients waiting for medical appointments. (See table 3 for information on the number of clinics we visited that did not implement selected elements of the VHA’s scheduling policy.) Inconsistent implementation of VHA’s scheduling policy can result in increased wait time or delays in obtaining medical appointments. One of the clinics we visited did not use the VistA scheduling system to determine available medical appointment dates and times, and to schedule medical appointments, as required by VHA’s scheduling policy. Officials noted that this clinic lacked a full-time staff person dedicated to scheduling, and therefore, the providers called their patients to schedule their own medical appointments. Clinic staff reported that providers recorded medical appointments on sheets of paper and gave those sheets to a scheduler, who maintained a paper calendar of all medical appointments; this scheduler later recorded the appointment into the VistA scheduling system. Failing to use VistA to schedule medical appointments could create additional backlogs or scheduling errors because the schedule in VistA may not accurately reflect providers’ availability. According to one provider in this clinic, for example, “staff from other departments look in VistA and it looks like the clinic is not booked, so they’ll send their patients as walk-in appointments. 
However, the clinic is really fully booked and patients are waiting." Officials from six clinics across two different VAMCs reported that staff scheduled new patient or established patient follow-up medical appointments without speaking to patients, and then notified patients of the scheduled medical appointment by letter if the appointment was at least a few weeks away. This method of scheduling—referred to as "blind" scheduling by one official—is not in accordance with VHA's scheduling policy and could result in missed medical appointments for patients who do not receive the letters, or who are not available at the scheduled time, because patients are not involved in the scheduling process. One scheduler noted that he sent medical appointment letters because he did not have time to call all patients to schedule appointments, as he performs scheduling duties for 27 different clinics. Furthermore, outdated or incorrect patient contact information is an impediment to scheduling appointments via letters; an official in one of the six clinics told us that the databases containing patient contact information used to send such letters often do not have veterans' correct or up-to-date contact information. Officials in four clinics across three VAMCs that had backlogs of patients waiting for medical appointments stated that they do not use the electronic wait list, the official VHA wait list used to track patients with whom a clinic does not have an established relationship. Clinics that do not use the electronic wait list may be at risk of losing track of new patients waiting for medical appointments. For example, at one specialty clinic with a backlog of consult requests, medical appointments for new patients were backed up almost 3 months; VAMC officials reported tracking patients waiting for medical appointments by printing paper copies of the consult requests from the electronic medical record.
A provider at this clinic expressed concern that the clinic manager "has a tall stack of unscreened consult referrals just sitting on her desk, and no one is addressing them." Officials from one VAMC stated that it did not have the required recall/reminder software to facilitate reminders for patients who need to return to the clinic for follow-up medical appointments more than 3 to 4 months into the future; therefore, none of its clinics, including the five clinics that we visited, were able to use it as intended. Instead, clinics at this VAMC use a work-around in the scheduling system to remind clerks to print and send letters reminding patients to call and schedule their follow-up medical appointments. However, this work-around is not automated and relies on schedulers to remember to generate a list of patients who need follow-up medical appointments, and to print and send those letters. The VAMC is in the process of implementing recall/reminder software, according to officials. One clinic in each of the four VAMCs we visited did not keep its medical appointment schedule open 3 to 4 months into the future as required by VHA's scheduling policy; these clinics allowed appointments to be booked only 1 to 2 months into the future. Limiting the future medical appointment schedule may limit patients' ability to schedule a follow-up medical appointment before leaving the clinic, as recommended by the policy, and also may result in additional work for clinic staff to send recall/reminder letters to patients for medical appointments less than 3 to 4 months away. VHA's scheduling policy states that for clinics to operate most efficiently, "schedules must be open and available for the patient to make appointments at least three to four months into the future. Permissions may be given to schedulers to make appointments beyond these limits when doing so is appropriate and consistent with patient or provider requests.
Blocking the scheduling of future appointments by limiting the maximum days into the future an appointment can be scheduled is inappropriate and is disallowed." In addition, the VAMCs we visited could not always document completion of the training by all staff who were required to complete it. Although all VAMCs we visited provided a list of staff who can schedule appointments, three VAMCs did not provide documentation that all staff on the list had successfully completed the required training. For example, officials from one VAMC stated that it maintained a list of staff who can schedule appointments, and a separate list of staff who had completed the training, but only in response to GAO's request for documentation did the VAMC identify staff with scheduling access who needed to complete the training. Further, three of the 19 schedulers we interviewed said they completed training other than the required VHA scheduler training. Completing the required VHA scheduler training and maintaining up-to-date documentation of schedulers' completion of the training are particularly important for ensuring consistent implementation of VHA's scheduling policy, given the high rates of scheduler turnover described by officials. All four of the VAMCs we visited completed the required self-certification of compliance with VHA's scheduling policy for fiscal year 2011; three certified overall compliance, and one certified overall noncompliance. However, officials at the VAMCs we visited, including the only one of the four that certified overall noncompliance, were initially uncertain who completed the certification or the steps taken to complete it, indicating that VAMCs are not always using the self-certification process to identify and address problems with compliance with VHA's scheduling policy. VAMCs identified several problems that can impede the timely scheduling of medical appointments, which also may affect their compliance with VHA's scheduling policy.
VHA central office officials and officials from all of the VAMCs we visited said the VistA scheduling system is outdated and inefficient, which hinders the timely scheduling of medical appointments. In particular, officials said the scheduling system requires schedulers to use commands requiring many keystrokes and does not allow them to view multiple screens at once. Schedulers must open and close multiple screens to check a provider’s or clinic’s full availability when scheduling a medical appointment, which is time-consuming and can lead to errors. For example, providers have separate schedules within VistA to accommodate the various types of services they provide. Because the scheduling system cannot display multiple schedules on the same screen, schedulers have to enter and exit multiple screens to check a provider’s full daily schedule when scheduling a medical appointment. If schedulers do not open all of the necessary screens, they may unknowingly create scheduling errors such as booking two medical appointments at the same time in different sections of a provider’s schedule. Further, staff at one VAMC told us the problem of not being able to easily view a provider’s full schedule can result in the failure to ensure that appointments are cancelled when a provider requests it. This error could cause patients to come to the VAMC unnecessarily or a failure to reschedule cancelled appointments in a timely way, both of which might lead to increased wait times for those patients. Officials from all the VAMCs we visited also noted that the VistA scheduling system is not easily adapted to meet clinic needs. For example, staff cannot create a provider schedule in the scheduling system that is longer than 8 hours. 
If a provider wants to extend his or her schedule on certain days, staff must create additional clinic schedules in the scheduling system for that provider, which can result in more delays and possible errors because schedulers have to check additional screens for medical appointment availability. Furthermore, officials told us that the scheduling system does not automatically interface with VHA’s electronic medical record, which makes the scheduling process more time- consuming as schedulers alternate between the two software applications to ensure medical appointments are made in accordance with providers’ guidance. VAMC officials described steps they take to ensure schedulers use VistA in accordance with the scheduling directive, including ongoing scheduler training and supervisory reviews of scheduler performance. However, as noted above, a lack of clarity in the desired date training documents and a lack of documentation of scheduler training at certain facilities may limit the effectiveness of these interventions. One VAMC provides schedulers with dual monitors to enable them to open multiple screens at once. Another VAMC told us they considered this solution in their primary care clinic, but found that limited physical space in the clinic did not accommodate additional monitors. In response to ongoing problems with the VistA scheduling system, VHA undertook an initiative to replace it in 2000, but VA abandoned the replacement due to weaknesses in project management and a lack of effective oversight. VA released a new request for information in December 2011 to gather information about vendors and possible software packages that could replace the current scheduling system. In September 2012, VHA told us that vendors’ responses to the request for information indicated that VHA will be able to choose among several viable software packages. 
According to officials, VA’s next step is to compare different vendors’ software packages through the summer of 2013, and subsequently issue a request for vendor proposals. VHA central office officials and officials from all of the VAMCs we visited stated that shortages or turnover of schedulers also creates problems for the timely scheduling of medical appointments. Officials said that schedulers perform many important roles, including greeting patients, checking patients in and out of clinics, answering telephone calls, scheduling medical appointments for primary care, as well as specialty care consults, and performing other administrative support functions on behalf of the clinical staff. Officials explained, however, that high stress and a demanding workload as well as the entry-level pay grade of the scheduler position leads to high turnover. Further, officials told us that high-performing schedulers often are quickly promoted to other positions within VA. According to VHA officials, most scheduler positions are classified as a low grade within the government general schedule pay scale with little room for upward movement within the grade. Officials at two of the VAMCs we visited told us they are working to raise the pay level for schedulers; for example, one VAMC has begun to assess scheduler position descriptions to determine whether they can be reclassified to allow for more flexibility in determining scheduler salaries based on the variation in their assigned duties. Given the important role of schedulers in the scheduling process, officials said that even temporary staffing gaps or shortages can cause medical appointment delays or wait times. Staff with whom we spoke in several clinics said that when scheduler staffing is lacking, including when a scheduler is on short-term leave, it is difficult to cover all the scheduler’s duties, and that such gaps can cause delays for patients. 
Further, we were told that scheduler staffing gaps resulted in inefficient use of clinical staff time. For example, at one specialty clinic that lacked its own scheduler, providers routinely scheduled their own medical appointments, which took time away from seeing patients and also resulted in incorrect scheduling practices. Given the training needs associated with using the VistA scheduling system, following VHA's scheduling policy, and ensuring the correct use of the desired date, high rates of scheduler turnover could contribute to inconsistent use of the desired date in the scheduling process or other appointment scheduling problems. Officials at two VAMCs noted that scheduler staffing gaps are compounded by recent changes in schedulers' roles and responsibilities as VHA implements a new team-based model of primary care, which calls for one scheduler to be assigned to each primary care team. Officials told us that these changes generally increase the administrative demands placed on schedulers, as they are asked to respond to team duties while continuing to answer phones, greet patients, and register new patients, among other responsibilities. Officials from two VAMCs told us they had requested approval to hire additional staff to meet these added administrative needs. Scheduler staffing gaps may also create problems managing patient flow through clinics, which can impede scheduling of follow-up appointments, according to officials at two of the VAMCs we visited. Staff at these VAMCs told us that they sometimes do not have sufficient schedulers available to staff check-out desks, and staff at one VAMC added that as a result patients might "fall through the cracks," leaving follow-up medical appointments unscheduled unless the patient remembers to call in to schedule the appointment. In addition, when patients do not check out, schedulers are responsible for tracking patients needing follow-up medical appointments.
This situation may be exacerbated in clinics that do not use the required recall/reminder software to facilitate the scheduling of follow-up medical appointments more than 3 to 4 months in the future, adding further to the backlog of patients in need of follow-up medical appointments. Officials from all of the VAMCs we visited told us that provider shortages also contribute to scheduling backlogs in certain locations and specialties. Recruitment and retention of providers was a particular challenge for VAMCs in rural areas, areas with high costs of living, and for certain provider specialties. All of the VAMCs we visited described gaps in provider staffing in certain specialty care clinics. Officials at all VAMCs also stated that a lack of salary competitiveness or the length of time to hire new providers into the VA system also contributed to gaps in provider staffing and scheduling backlogs. Gaps in provider staffing also can result from providers being on extended or unexpected leave, including vacation time, sick leave, or military deployments. These absences may result in longer wait times for patients. For example, officials at one VAMC told us that even a brief absence of one provider on leave can cause significant wait times, and that it is difficult to catch up and eliminate the backlog. Staff from some clinics described steps they take to reduce backlogs caused by gaps in provider staffing, including overbooking provider schedules and scheduling temporary Saturday hours. Officials at one VAMC told us that they employ a “floater” primary care physician to provide coverage for providers on leave, but an official at another clinic told us that they were unable to hire additional providers to meet the demand for medical appointments. Officials at all of the VAMCs we visited told us that high call volumes and a lack of staff dedicated to answering the telephones impede the timely scheduling of medical appointments. 
Despite VHA’s telephone policy requiring the provision of continuous telephone service for clinical care and medical appointment management, VAMC officials noted that schedulers are frequently overwhelmed by high call volumes and are unable to respond to calls in a timely way. In addition, officials at one VAMC told us that outdated telephone technology, and the lack of a dedicated VAMC-wide call center, limited their ability to improve their telephone responsiveness. VHA has reported that telephone access to VHA health services has historically been a frustrating experience for veterans, including dropped calls, multiple transfers, and long waits to reach a staff person able to resolve their inquiries. Further, patients at all of the VAMCs we visited registered complaints about the difficulty of reaching outpatient clinic staff by telephone and unreturned telephone calls. According to information on patient complaints provided by the four VAMCs we visited, patient complaints about unreturned telephone calls ranked among the top two categories of complaints in fiscal year 2012 at all four VAMCs. Further, staff at two of the VAMCs reported that their telephone calls to outpatient clinics within their own VAMC went unanswered, and one added that their inability to reach staff in their own clinics also was an obstacle to timely medical appointment scheduling. In January 2012, VHA distributed suggested best practices for improving telephone design, service, and access in its Telephone Systems Improvement Guide. This guide outlines steps VHA found to be effective means of improving telephone service and maintaining health care access, including regularly monitoring the purpose and volume of telephone calls; establishing dedicated staff to answering calls, especially at times of peak call volume; and training staff responsible for answering telephones in call centers. 
To address telephone issues, officials at one VAMC we visited told us they were developing a proposal to establish a call center with a new telephone system, to be staffed by schedulers dedicated to answering the telephones. Officials at a different VAMC stated that a scheduling supervisor periodically checks schedulers’ telephones to ensure that voice mail messages are listened to and that calls are returned. VHA is implementing several initiatives to improve veterans’ access to medical appointments. Specifically, these initiatives focus on more patient-centered care; using technology to provide care, through means such as telehealth; and using care outside of VHA to reduce travel and wait times for veterans who are unable to receive certain types of outpatient care in a timely way through local VHA facilities. VHA officials told us they are monitoring the implementation of these initiatives; however, in some cases, more information is needed to determine their impact on timely access to care over time. VHA’s patient-centered medical home model for primary care, Patient Aligned Care Teams (PACT), is intended, in part, to improve access to medical appointments and care coordination through the use of interdisciplinary care teams and technology to communicate with patients. Implementation of PACT began in 2010, and is an ongoing effort, according to VHA officials. PACT differs from how primary care was previously delivered by assigning each patient to an interdisciplinary team. The PACT team is intended to be comprised of a primary care provider, a registered nurse care manager, a clinical support staff member such as a licensed practical nurse, and a scheduler. These teams offer patients a centralized way to get questions answered by nurses or other clinical support staff and aim to reduce the need for face-to-face medical appointments, thereby enabling more efficient use of providers’ time.
For example, at one of the VAMCs we visited, patients are given a direct telephone number to contact their PACT team and leave a voice mail message to be returned by the team’s registered nurse. Encouraging PACT teams’ use of telephone communication and telephone appointments is intended to enable patients to more quickly obtain answers to some of their administrative and medical questions, such as requests for prescription refills, without having to schedule a face-to-face medical appointment. VHA officials told us that they expect PACT teams’ use of telephone communication and telephone appointments will open up face-to-face medical appointment slots for patients who need them and might enable clinics to reduce backlogs and improve access to same-day primary care medical appointments. Officials at two VAMCs we visited told us that the transition to the PACT model has created some initial scheduling and staffing difficulties. For example, officials at these VAMCs noted that it is difficult for scheduling staff to respond to their PACT team duties in addition to meeting other responsibilities such as answering phones, checking in patients, registering new patients, and scheduling for more than one clinic. This is compounded by the fact that not all PACT teams have been assigned their own scheduler, as prescribed by the PACT model, so an individual scheduler is sometimes serving multiple PACT teams. Officials at these two VAMCs explained that they would need to hire more schedulers to meet the goal of assigning one to each PACT team. To measure the progress of PACT implementation and its impact on access to quality care, VHA is collecting data and tracking a series of measures in a monthly internal data report.
Five of the PACT measures are (1) primary care medical appointments completed within 7 days of the desired date; (2) same-day access with a primary care provider, or the percentage of appointments completed within 1 day; (3) telephone utilization, or the percentage of total encounters that occur by telephone; (4) continuity of care, or the percentage of primary care appointments with the patient’s assigned primary care provider; and (5) post-hospital discharge contact, or the percentage of patients discharged from the hospital who were contacted by their primary care provider within 2 days. As described earlier, accurate measurement of medical appointment wait times—including the first two PACT measures—is dependent upon the correct recording of the desired date in the VistA scheduling system. In fiscal year 2012, PACT measures were also included in the NDPP and FDPP. Part of VHA’s goal of achieving improved access to medical appointments is the increased use of technology such as telehealth and secure messaging. Use of these tools is intended to improve communication between patients and providers and open up providers’ schedules for needed face-to-face medical appointments, thereby improving access to face-to-face appointments. VHA’s telehealth services include home telehealth for chronic disease management, such as diabetes; real-time clinic-based video telehealth, in which patients at a local CBOC may connect with a VHA provider at a different location to receive services that are unavailable at the CBOC, such as mental health or speech pathology; and store-and-forward telehealth, in which digital images, such as x-rays or images of skin problems, are taken, stored, and sent to an expert for review and consultation. VHA officials told us that the use of telehealth can reduce both travel and wait times for medical appointments and help meet the needs of patients with chronic conditions. All VAMCs we visited told us they were using telehealth to improve access to care.
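The first two PACT access measures listed above reduce to simple percentage computations over completed appointments. A minimal sketch, using hypothetical appointment records (the dates are illustrative, not drawn from VHA systems), might look like this:

```python
from datetime import date

# Hypothetical completed-appointment records: (desired_date, completed_date).
# Illustrative data only; in practice these percentages are only as reliable
# as the desired dates schedulers record in VistA.
appointments = [
    (date(2012, 6, 1), date(2012, 6, 1)),   # seen on the desired date
    (date(2012, 6, 1), date(2012, 6, 5)),   # 4 days after desired date
    (date(2012, 6, 2), date(2012, 6, 20)),  # 18 days after desired date
    (date(2012, 6, 3), date(2012, 6, 4)),   # 1 day after desired date
]

# Measure (1): appointments completed within 7 days of the desired date.
within_7 = sum((done - desired).days <= 7 for desired, done in appointments)
# Measure (2): appointments completed within 1 day ("same-day access").
within_1 = sum((done - desired).days <= 1 for desired, done in appointments)

pct_within_7 = within_7 / len(appointments)
pct_within_1 = within_1 / len(appointments)

print(f"{pct_within_7:.0%} within 7 days, {pct_within_1:.0%} within 1 day")
# -> 75% within 7 days, 50% within 1 day
```

Note that both measures inherit the reliability problem described earlier: if a scheduler records an incorrect desired date, the computed percentages overstate timeliness.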
Another initiative that uses technology to reduce unnecessary face-to-face medical appointments is VHA’s My HealtheVet, a web-based program that enables veterans to create and maintain a web-based personal health record with secure access to health information; services such as prescription refill requests; and secure messaging. Secure messaging allows veterans to communicate electronically with their health care team. According to VHA, of the more than 8 million veterans enrolled in VHA, 1.4 million are registered in My HealtheVet as of August 2012, and more than 437,000 have created secure messaging accounts. A recent VA study reports that secure messaging may improve access, patient perceptions about access, and communication. VHA uses non-VA care to reduce wait times and backlogs and to provide veterans access to specialists not available through VHA. In response to a statutory requirement to help veterans receive care closer to home, VHA is piloting a new model of non-VA care known as Project ARCH (Access Received Closer to Home). Project ARCH is a five-site, 3-year pilot program administered by the VHA Office of Rural Health to provide health care services through contracts with local community providers. According to VHA officials, Project ARCH might help alleviate wait times for specialty care services with high demand, or for which there is a shortage of local providers. At the Montana Project ARCH pilot site, which we visited as part of our site visit to the Montana VAMC, staff from the VAMC and the Billings Clinic, a non-VA provider delivering services to veterans through Project ARCH, identified both benefits and obstacles for patients enrolled in Project ARCH.
For example, though VAMC and Billings Clinic staff noted that Project ARCH reduced both travel and wait times for Montana veterans in need of orthopedic care, Billings Clinic staff also noted that difficulties in coordinating care for veterans moving between VHA and non-VA providers at times resulted in delays in providing care to those and other veterans. Additionally, problems with processing authorizations for certain services were among the concerns raised in an April 2012 evaluation of the Montana Project ARCH program. Project ARCH contractors must submit monthly reports, including information on medical appointment scheduling timeliness, wait times, and other topics. For example, the contractor for the Project ARCH program in Montana is required to report on the extent to which it is meeting VHA’s 14-day wait time goal for medical appointments—according to VHA officials, the contractor must meet a 90 percent target. These wait times may not accurately reflect how long patients are waiting for a medical appointment, however, because the wait time is counted from the time the contractor receives the authorization from VA, rather than from the time the patient or provider requests a medical appointment. VHA officials have expressed an ongoing commitment to providing veterans with timely access to medical appointments and have reported continued improvements in achieving this goal. However, unreliable wait time measurement has resulted in a discrepancy between the positive wait time performance VA has reported and veterans’ actual experiences. Ambiguity in what constitutes the medical appointment desired date—the date VHA uses as the basis for measuring wait time—and manipulation of the desired date to meet goals have contributed to these inaccuracies.
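The gap between reported and experienced wait times that this measurement choice creates can be illustrated with a short sketch (the dates below are hypothetical, chosen only to show the effect):

```python
from datetime import date

def wait_days(start: date, appointment: date) -> int:
    """Wait time in days, measured as days elapsed from a chosen start date."""
    return (appointment - start).days

# Hypothetical timeline for one referral (illustrative dates only):
requested = date(2012, 3, 1)    # patient or provider requests the appointment
authorized = date(2012, 3, 20)  # contractor receives VA's authorization
seen = date(2012, 4, 2)         # appointment actually occurs

reported = wait_days(authorized, seen)  # basis the contractor reports against
actual = wait_days(requested, seen)     # what the patient experienced

print(reported, actual)  # -> 13 32: the 14-day goal is met on paper only
```

The choice of start date is the whole story here: measured from authorization, the wait appears to satisfy the 14-day goal, while the patient has in fact waited more than a month.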
With more than 50,000 schedulers making approximately 80 million medical appointments in fiscal year 2011, establishing a clear definition of the desired date or finding and reporting another acceptable measure of wait time is key to understanding how long veterans are actually waiting for medical appointments. Without reliable measurement of how long patients are waiting for medical appointments, VHA is less equipped to identify and address factors that contribute to wait times, or gauge the success of its initiatives to improve access to timely medical appointments, including efforts to improve primary care medical appointments. More consistent adherence to VHA’s scheduling policy and oversight of the scheduling process, as well as the allocation of staffing resources in accordance with clinics’ demands for scheduling of medical appointments, would potentially reduce medical appointment wait times. Furthermore, persistent problems with telephone access must be resolved to assure veterans’ ability to schedule timely medical appointments. Ultimately, VHA’s ability to ensure and accurately monitor access to timely medical appointments is critical to ensuring quality health care to veterans, who may have medical conditions that worsen if access is delayed. To ensure reliable measurement of veterans’ wait times for medical appointments, we recommend that the Secretary of VA direct the Under Secretary for Health to take actions to improve the reliability of wait time measures either by clarifying the scheduling policy to better define the desired date, or by identifying clearer wait time measures that are not subject to interpretation and prone to scheduler error. 
To better facilitate timely medical appointment scheduling and improve the efficiency and oversight of the scheduling process, we recommend that the Secretary of VA direct the Under Secretary for Health to take actions to ensure that VAMCs consistently and accurately implement VHA’s scheduling policy, including use of the electronic wait list, as well as ensuring that all staff with access to the VistA scheduling system complete the required training. To improve timely medical appointment scheduling, we recommend that the Secretary of VA direct the Under Secretary for Health to develop a policy that requires VAMCs to routinely assess clinics’ scheduling needs and resources to ensure that the allocation of staffing resources is responsive to the demand for scheduling medical appointments. To improve timely medical appointments and to address patient and staff complaints about telephone access, we recommend that the Secretary of VA direct the Under Secretary for Health to ensure that all VAMCs provide oversight of telephone access and implement best practices outlined in its telephone systems improvement guide. In reviewing a draft of this report, VA generally agreed with our conclusions and concurred with our recommendations. (VA’s comments are reprinted in app. I.) In summary, VA stated that VHA officials have closely followed our review and proactively taken steps in response to our findings. Specifically, VHA is revising and improving directives, policies, training, clinic management tools, and oversight related to scheduling practices. VA further stated that VHA is committed to routinely assessing clinics’ scheduling needs and resources and developing practices and guidelines to ensure adequate staffing resources for scheduling medical appointments. 
VA described its plans to address each recommendation as follows: In response to our recommendation that VA take actions to improve the reliability of wait time measures, VA concurred and stated that VHA will revise its scheduling policy to implement more reliable wait time measures and new processes to better define desired date with a targeted completion date of November 1, 2013. In response to our recommendation that VA take actions to ensure that VAMCs consistently and accurately implement VHA’s scheduling policy and ensure that all staff complete required training, VA concurred and stated that the revised scheduling policy will include improvements and standardization of the use of the electronic wait list. Additionally, VHA will require VISNs to update each VAMC’s scheduler master list and verify that all schedulers on the list have completed required training, and will require schedulers to complete a standardized training update on the revised scheduling policy. The targeted completion date for these activities is November 1, 2013. In response to our recommendation that VA develop a policy that requires VAMCs to routinely assess clinics’ scheduling needs and resources, VA concurred and stated that VHA will ask VAMCs to routinely assess clinics’ availability and ensure staff is distributed to meet access standards in clinics. However, VA has not specified requirements for VAMCs to complete these assessments nor has the agency provided a timeline for this process. Because schedulers are key to ensuring timely appointment scheduling, we believe that VA should establish a targeted completion date for requiring these assessments in policy or guidance. 
In response to our recommendation that VA ensure that all VAMCs provide oversight of telephone access and implement best practices outlined in its telephone improvement guide, VA concurred and stated that VHA will require each VISN director to assess current phone service and develop strategic telephone service improvement plans. Additionally, VHA will identify a process to monitor performance on a quarterly basis for at least 1 year after the assessment. The targeted completion date for the telephone service assessments and plans is March 30, 2013. As arranged with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 28 days after its issuance date. At that time, we will send copies of this report to appropriate congressional committees, the Secretary of Veterans Affairs, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II. In addition to the contact named above, Bonnie Anderson, Assistant Director; Rebecca Abela; Jennie Apter; Rich Lipinski; Sara Rudow; and Ann Tynan made key contributions to this report. VA Mental Health: Number of Veterans Receiving Care, Barriers Faced, and Efforts to Increase Access. GAO-12-12. Washington, D.C.: October 14, 2011. Information Technology: Department of Veterans Affairs Faces Ongoing Management Challenges. GAO-11-663T. Washington, D.C.: May 11, 2011. Information Technology: Management Improvements Are Essential to VA’s Second Effort to Replace Its Outpatient Scheduling System. GAO-10-579. Washington, D.C.: May 27, 2010.
VA Health Care: Access for Chattanooga-Area Veterans Needs Improvement. GAO-04-162. Washington, D.C.: January 30, 2004. VA Health Care: More National Action Needed to Reduce Waiting Times, but Some Clinics Have Made Progress. GAO-01-953. Washington, D.C.: August 31, 2001. Veterans’ Health Care: VA Needs Better Data on Extent and Causes of Waiting Times. GAO/HEHS-00-90. Washington, D.C.: May 31, 2000.
VHA provided nearly 80 million outpatient medical appointments to veterans in fiscal year 2011. While VHA has reported continued improvements in achieving access to timely medical appointments, patient complaints and media reports about long wait times persist. GAO was asked to evaluate VHA’s scheduling of timely medical appointments. GAO examined (1) the extent to which VHA’s approach for measuring and monitoring medical appointment wait times reflects how long veterans are waiting for appointments; (2) the extent to which VAMCs are implementing VHA’s policies and processes for appointment scheduling, and any problems encountered in ensuring veterans’ access to timely medical appointments; and (3) VHA’s initiatives to improve veterans’ access to medical appointments. To conduct this work, GAO made site visits to 23 clinics at four VAMCs, the latter selected for variation in size, complexity, and location. GAO also reviewed VHA’s policies and data, and interviewed VHA officials. Outpatient medical appointment wait times reported by the Veterans Health Administration (VHA), within the Department of Veterans Affairs (VA), are unreliable. Wait times for outpatient medical appointments—referred to as medical appointments—are calculated as the number of days elapsed from the desired date, which is defined as the date on which the patient or health care provider wants the patient to be seen. The reliability of reported wait time performance measures is dependent on the consistency with which schedulers record the desired date in the scheduling system in accordance with VHA’s scheduling policy. However, VHA’s scheduling policy and training documents for recording desired date are unclear and do not ensure consistent use of the desired date. Some schedulers at Veterans Affairs medical centers (VAMC) that GAO visited did not record the desired date correctly.
For example, three schedulers changed the desired date based on appointment availability; this would have resulted in a reported wait time that was shorter than the patient actually experienced. VHA officials acknowledged limitations of measuring wait times based on desired date, and described additional information used to monitor veterans' access to medical appointments, including patient satisfaction survey results. Without reliable measurement of how long patients are waiting for medical appointments, however, VHA is less equipped to identify areas that need improvement and mitigate problems that contribute to wait times. While visiting VAMCs, GAO also found inconsistent implementation of VHA's scheduling policy that impedes VAMCs from scheduling timely medical appointments. For example, four clinics across three VAMCs did not use the electronic wait list to track new patients that needed medical appointments as required by VHA scheduling policy, putting these clinics at risk for losing track of these patients. Furthermore, VAMCs' oversight of compliance with VHA's scheduling policy, such as ensuring the completion of required scheduler training, was inconsistent across facilities. VAMCs also described other problems with scheduling timely medical appointments, including VHA's outdated and inefficient scheduling system, gaps in scheduler and provider staffing, and issues with telephone access. For example, officials at all VAMCs GAO visited reported that high call volumes and a lack of staff dedicated to answering the telephones impede scheduling of timely medical appointments. In January 2012, VHA distributed telephone access best practices that, if implemented, could help improve telephone access to clinical care. 
VHA is implementing a number of initiatives to improve veterans' access to medical appointments such as expanded use of technology to interact with patients and provide care, which includes the use of secure messaging between patients and their health care providers. VHA also is piloting a new initiative to provide health care services through contracts with community providers that aims to reduce travel and wait times for veterans who are unable to receive certain types of care within VHA in a timely way. GAO recommends that VHA take actions to (1) improve the reliability of its medical appointment wait time measures, (2) ensure VAMCs consistently implement VHA's scheduling policy, (3) require VAMCs to allocate staffing resources based on scheduling needs, and (4) ensure that VAMCs provide oversight of telephone access and implement best practices to improve telephone access for clinical care. VA concurred with GAO's recommendations.
Clearly, the acquisition process has produced superior weapons, but it does so at a high price. Weapon systems routinely take much longer to field, cost more to buy, and require more support than investment plans provide for. These consequences reduce the buying power of the defense dollar, delay capabilities for the war fighter, and force unplanned—and possibly unnecessary—trade-offs in desired acquisition quantities, with an adverse ripple effect among other weapons programs or defense needs. Because of the lengthy time to develop new weapons, many enter the field with outdated technologies and a diminished supply base needed for system support. Frequently, this requires upgrades to the capability as soon as the new system is fielded. As previously noted, these inefficiencies have often led to reduced quantities of new systems. In turn, legacy systems remain in the inventory for longer periods, requiring greater operations and support costs that pull funds from other accounts, including modernization. DOD is facing these problems with its tactical air force assets now. We believe DOD can learn lessons from the experiences with the F/A-22 program as it frames the acquisition environment for its many transformational investments. DOD recognizes the need to get better weapon system outcomes, and its newest acquisition policy emphasizes the use of evolutionary, knowledge-based acquisition concepts proven to produce more effective and efficient outcomes in developing new products. It incorporates the elements of a knowledge-based acquisition model for developing new products, which we have recommended in our reviews of commercial best practices. Our body of work focuses on how DOD can better leverage its investments by shortening the time it takes to field new capabilities at a more predictable cost and schedule. However, policy changes alone will not guarantee success.
Unless written policies are consistently implemented in practice through timely and informed decisions on individual programs, outcomes will not change. This requires sustained leadership and commitment and attention to the capture and use of key product knowledge at critical decision points to avoid the problems of the past. A key enabler to the success of commercial firms is using an approach that evolves a product to its ultimate capabilities on the basis of mature technologies and available resources. This approach allows commercial companies to develop and produce more sophisticated products faster and less expensively than their predecessors. Commercial companies have found that trying to capture the knowledge required to stabilize the design of a product that requires significant amounts of new technical content is an unmanageable task, especially if the goal is to reduce development cycle times and get the product to the marketplace as quickly as possible. Therefore, product features and capabilities not achievable in the initial development are planned for subsequent development efforts in future generations of the product, but only when technologies are proven to be mature and other resources are available. DOD’s new policy embraces the idea of evolutionary acquisition. Figure 1 compares evolutionary and single step (“big bang”) acquisitions. An evolutionary environment for developing and delivering new products reduces risks and makes cost more predictable. While the customer may not receive an ultimate capability initially, the product is available sooner, with higher quality and reliability, and at lower, more predictable cost. Improvements are planned for future generations of the product. Leading commercial firms expect that their program managers will deliver high-quality products on time and within budgets. Doing otherwise could result in losing a customer in the short term and losing the company in the longer term. 
Thus, in addition to creating an evolutionary product development environment that brings risk under control, these firms have adopted practices that put their individual program managers in a good position to succeed in meeting these expectations on individual products. Collectively, these practices ensure that a high level of knowledge exists about critical facets of the product at key junctures during its development. Such a knowledge-based process enables decision makers to be reasonably certain about critical facets of the product under development when they need to be. The knowledge-based process followed by leading firms is shown in detail in table 1, but in general can be broken down into three knowledge points. First, a match must be made between the customer’s needs and the available resources—technology, engineering knowledge, time, and funding—before a program is launched. Second, a product’s design must demonstrate its ability to meet performance requirements and be stable about midway through development. Third, the developer must show that the product can be manufactured within cost, schedule, and quality targets and is demonstrated to be reliable before production begins. The following table illustrates more specifically what we have learned about how successful programs gather knowledge as they move through product development. DOD programs often do not employ these practices. We found that if the evolutionary, knowledge-based acquisition concepts were not applied, a cascade of negative effects became magnified in the product development and production phases of an acquisition program. These led to acquisition outcomes that included significant cost increases and schedule delays, poor product quality and reliability, and delays in getting new capability to the war fighter.
This is often the case in DOD programs, as shown in our past work on systems like the F/A-22 fighter, C-17 airlifter, V-22 tiltrotor aircraft, PAC-3 missile, BAT antitank munition, and others. We did find some DOD programs that employed best practice concepts and have had more successful program outcomes to date. These included the Global Hawk unmanned vehicle, AIM-9X missile, and Joint Direct Attack Munition guided bomb. Figure 3 shows a notional illustration of the different paths and effects of a product development. It is clear that knowledge about the product’s technology, design, and processes captured at the right time can reduce development cycle times and deliver a more cost-effective, reliable product to the customer sooner than programs that do not capture this knowledge. In applying the knowledge-based approach, the most leveraged of the three decision points is matching the customer’s needs with the developer’s resources—technology, design, timing, and funding. This initial decision sets the stage for the eventual outcome—desirable or problematic. The match is ultimately achieved in every development program, but in successful development programs, it occurs prior to program launch. In successful programs, negotiations and trade-offs occur before a product development is launched to ensure that a match exists between customer expectations and developer resources. The results achieved from this match are balanced and achievable requirements, sufficient investment to complete the development, and a firm commitment to deliver the product. Commercial companies we have visited usually limit product development cycle time to less than 5 years. In DOD, this match is seldom achieved. It is not unusual for DOD to bypass early trade-offs and negotiations, instead planning to develop a product based on a rigid set of requirements that are unachievable within a reasonable development time frame. This results in cost and schedule commitments that are unrealistic.
Although a program can take as long as 15 years in DOD, the program manager is expected to develop and be accountable for precise cost and schedule estimates made at the start of the program. Because of their short tenures, it normally takes several program managers to complete product development. Consequently, the program manager who commits to the cost and schedule estimate at the beginning of the program is not the same person responsible for achieving it. Therefore, program accountability is problematic. Ironically, this outcome is rational in the traditional acquisition environment. The pressures put on program managers to get programs approved encourage promising more than can be delivered for the time and money allotted. They are not put in a position to succeed. The differences in the practices employed by successful commercial firms and DOD reflect the different demands imposed on programs by the environments in which they are managed. Specific practices take root and are sustained because they help a program succeed in its environment. The way success and failure are defined for commercial and defense product developments differs considerably, which creates a different set of incentives and evokes different behaviors from managers. Attempts at reforming weapon system acquisitions have not succeeded because they did not change these incentives. All of the participants in the acquisition process play a part in creating incentives. The F/A-22 program, advertised as a flagship of acquisition reform in its early days, failed to establish this match before program launch, and today we are discussing the resulting outcomes to date. The F/A-22 provides an excellent example of what can happen when a major acquisition program is not guided by the principles of evolutionary, knowledge-based acquisition. The program failed to match requirements with resources and make early trade-offs and took on a number of new and unproven technologies.
Instead of fielding early capability and then evolving the product to get new capabilities to the war fighter sooner, the Air Force chose a “big bang” product development approach that is now planned to take about 19 years. This created a challenging and risky acquisition environment that has delayed delivery to the war fighter of the capabilities expected from this new aircraft. Program leaders did not capture the specific knowledge identified as key for each of the three critical knowledge points in product development. Instead, program managers proceeded through the F/A-22’s development without the requisite knowledge necessary for reducing program risk and achieving more successful program outcomes. Now the optimism underlying these decisions has resulted in significant cost increases, schedule delays, trade-offs—making do with less than half the number of originally desired aircraft—and concerns about the capability to be delivered. Since the F/A-22 acquisition program was started in October 1986, the F/A-22 cost and schedule estimates have grown significantly to the point where, today, the Air Force estimates the total acquisition unit cost of a single aircraft at $257.5 million. This represents a 74 percent increase from the estimate at the start of development and a commensurate loss in the buying power of the defense dollar. Intended to replace the aging F-15 fighter, the F/A-22 program is now scheduled to reach its initial operational capability in December 2005—making its development cycle about 19 years. During this cycle, the planned buy quantity has been reduced 63 percent, from 750 to 276 aircraft. In addition, since fiscal year 2001, funding for F/A-22 upgrades has dramatically increased from $166 million to $3.0 billion, most of which is to provide increased ground attack capability, a requirement that was added late in the development program. The F/A-22 acquisition strategy from the outset was to achieve full capability in a “big bang” approach.
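The cost and quantity figures above can be cross-checked with back-of-the-envelope arithmetic; note that the implied baseline unit cost is our inference from the 74 percent growth figure, not a number stated in the testimony:

```python
# F/A-22 figures as cited in this testimony.
current_unit_cost = 257.5   # total acquisition unit cost, $ millions
growth = 0.74               # 74 percent increase since development start

# Implied baseline unit-cost estimate (our inference, not a stated figure).
baseline_unit_cost = current_unit_cost / (1 + growth)

# Quantity reduction from 750 planned aircraft to 276.
quantity_cut_pct = (1 - 276 / 750) * 100

print(round(baseline_unit_cost, 1))  # -> 148.0 ($ millions, implied)
print(round(quantity_cut_pct))       # -> 63, matching the cited 63 percent
```

The two reductions compound: fewer aircraft spread fixed development costs over a smaller buy, which is part of why unit cost grows as quantities are cut.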
By not using an evolutionary approach, the F/A-22 took on significant risk and onerous technological challenges. While the big bang approach may have allowed the Air Force to more successfully compete for early funding, it hamstrung the program with many new undemonstrated technologies, preventing the program from knowing cost and schedule ramifications throughout development. Cost, schedule, and performance problems resulted. The following table summarizes the F/A-22 program’s attainment of critical knowledge and key decision junctures during the development program and the changes in development cost and cycle time at each point. Technology—The F/A-22 did not have mature technology at the start of the acquisition program. The program included new low-observable (stealth) materials, integrated avionics, and propulsion technology that were not mature at that time. The Air Force did not complete an evaluation of stealth technology on a full-scale model of the aircraft until several years into development. It was not until September 2000, or 9 years into development, that the integrated avionics reached a maturity level acceptable to begin product development. During development, the integrated avionics was a source of schedule delays and cost growth. Since 1997, avionics software development and flight-testing have been delayed, and the cost of avionics development has increased by over $980 million. Today, the avionics still has problems affecting the ability to complete developmental testing and begin operational testing, and the Air Force cannot predict when a solution will be found. Design—The effects of immature technologies cascaded into the F/A-22 development program, making it more difficult to achieve a stable design at the right time. The standard measure of design stability is 90 percent of design drawings releasable by the critical design review. 
The F/A-22 achieved only 26 percent by this review, taking an additional 43 months to achieve the standard. Moving ahead in development, the program experienced several design and manufacturing problems described by the F/A-22 program office as a “rolling wave” effect throughout system integration and final assembly. These effects included numerous design changes, labor inefficiencies, parts shortages, out-of-sequence work, cost increases, and schedule delays. Production—At the start of production, the F/A-22 did not have manufacturing processes under control and was only beginning testing and demonstration efforts for system reliability. Initially, the F/A-22 had taken steps to use statistical process control data to gain control of critical manufacturing processes by full rate production. However, the program abandoned this best practice approach in 2000 with less than 50 percent of its critical manufacturing processes in control. In March 2002, we recommended that the F/A-22 program office monitor the status of critical manufacturing processes as the program proceeds toward high rate production. The reliability goal for the F/A-22 is 3 hours of flying time between maintenance actions. The Air Force estimated that in late 2001, when it entered production, it should have been able to demonstrate almost 2 flying hours between maintenance actions. Instead, it could fly an average of only 0.44 hours between maintenance actions. Since then there has been a decrease in reliability. As of November 2002, development test aircraft have been averaging only 0.29 hours between maintenance actions. Additionally, the program was slow to fix and correct problems that had affected reliability. At the time of our review in July 2002, program officials had identified about 260 different types of failures and had developed fixes for less than 50 percent of them. Achieving the reliability goals will require additional design changes, testing, and modifications. 
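The scale of the reliability shortfall is easier to see as a fraction of the 3-hour goal. The short Python calculation below uses only the figures cited above; the percentages it prints are simple derived ratios, not figures from the testimony itself.

```python
# Demonstrated reliability as a fraction of the 3-hour goal,
# using the figures cited in this statement.
goal = 3.0            # hours of flying time between maintenance actions (goal)
expected_2001 = 2.0   # roughly what the Air Force expected at production start
actual_2001 = 0.44    # demonstrated at production start (late 2001)
actual_2002 = 0.29    # development test aircraft, as of November 2002

for label, hours in [("late 2001", actual_2001), ("Nov 2002", actual_2002)]:
    print(f"{label}: {hours:.2f} hours = {hours / goal:.0%} of the 3-hour goal")
```

The calculation shows the demonstrated reliability falling from roughly one-seventh to under one-tenth of the goal between late 2001 and November 2002.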
Therefore, additional problems and costs can be expected if the system is fielded with the level of reliability achieved to date. The F/A-22 did not take advantage of evolutionary, knowledge-based concepts up front, and now the best it can hope for is to limit cost increases and performance problems by not significantly increasing its production until development is complete—signified by completed developmental and operational testing and demonstrated reliability. To that end, we have recommended that the Air Force reconsider its decision to increase the aircraft production rate beyond 16 aircraft per year. The program is nearing the end of developmental testing and plans to start initial operational testing in October 2003. If developmental testing goes as planned, which is not guaranteed, operational testing is expected to be completed around September 2004. Because low rate production began in 2001, 51 F/A-22s will be on contract by the end of this fiscal year. Our March 2003 report identifies various outstanding problems, in addition to undemonstrated reliability goals, that could have further impacts on cost, schedule, and delivered performance. These problems are of particular concern, given Air Force plans to increase production rates and make a full rate production decision in 2004. The problems include unexpected shutdowns (instability) of the avionics, excessive movement of the vertical tails, overheating in rear portions of the aircraft, separations of the horizontal tail material, inability to meet airlift support requirements, and excessive ground maintenance actions. These problems are still being addressed, and not all of them have yet been solved. For example, Air Force officials stated they do not yet understand the avionics instability well enough to predict when they will be able to resolve it, and certain tests to better understand the vertical tail problem have not yet begun. 
Despite remaining testing and outstanding problems, the Air Force plans to continue acquiring production aircraft at increasing annual rates and to make the full rate production decision in 2004. This is a very risky strategy, given the outstanding issues in the test program and the system’s lower than expected reliability. The Air Force may encounter higher production costs as a result of acquiring significant quantities of aircraft before adequate testing and demonstrations are complete. In addition, remaining testing could identify problems that require costly modifications in order to achieve satisfactory performance. In a February 28, 2003, report to Representative John Tierney, we found that F/A-22 production costs are likely to increase more than the $5.4 billion in cost growth recently estimated by the Air Force and the Office of the Secretary of Defense (OSD). First, the current OSD production estimate does not include $1.3 billion included in the latest Air Force acquisition plan. Second, schedule delays in developmental testing could further postpone the start of the first F/A-22 multiyear contract, which has already been delayed until fiscal year 2006. This could result in lower cost savings from multiyear procurement. Last, we found several risk factors that may increase future production costs, including the dependence of certain cost reduction plans on Air Force investments in production process improvements that are not being made, the availability of funding, and a reduction in funding for support costs. In addition, DOD has not informed Congress about the quantity of aircraft that can be procured within existing production cost limits, which we believe could be fewer than the 276 currently planned. Further details on F/A-22 cost growth and the Air Force’s attempt to offset it are provided in appendix I. 
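The unit cost and quantity figures cited earlier in this statement can be sanity-checked with simple arithmetic. In the Python sketch below, the implied start-of-development unit cost is derived from the reported 74 percent growth; that derived baseline is an inference for illustration, not a figure taken from the testimony.

```python
# Back-of-the-envelope check of the F/A-22 figures cited in this statement.
# The baseline unit cost is derived from the reported 74% growth, not
# stated directly in the testimony.
current_unit_cost = 257.5   # total acquisition unit cost, $ millions
reported_growth = 0.74      # 74% increase since the start of development

implied_baseline = current_unit_cost / (1 + reported_growth)
print(f"Implied start-of-development unit cost: ${implied_baseline:.0f} million")

original_quantity = 750
current_quantity = 276
reduction = (original_quantity - current_quantity) / original_quantity
print(f"Quantity reduction: {reduction:.0%}")  # consistent with the cited 63%
```

The quantity arithmetic reproduces the 63 percent reduction cited above, and the implied baseline unit cost comes out to roughly $148 million per aircraft.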
While DOD’s new acquisition policy is too late to influence the F/A-22 program, it is not too late for other major acquisition programs like the Missile Defense Agency’s suite of land, sea, air, and space defense systems; the Army’s Future Combat Systems; and the Air Force and Navy’s Joint Strike Fighter. DOD’s revised acquisition policy represents tangible leadership action toward getting better weapon system acquisition outcomes, but unless the policies are implemented through decisions on individual programs, outcomes are not likely to change. Further, unless DOD alleviates the pressures to get new acquisition programs approved and funded on the basis of requirements that must stand out, programs will continue to be compromised from the outset, with little to no chance of successful outcomes. If the new policies were implemented properly, through decisions on individual programs, managers would face less pressure to promise delivery of all the ultimate capabilities of a weapon system in one “big bang.” Both form and substance are essential to getting desired outcomes. At a tactical level, we believe that the policies could be made more explicit in several areas to facilitate such decisions. First, the regulations provide few controls at key decision points of an acquisition program that would force a program manager to report progress against knowledge-based metrics. Second, the new regulations, once approved, may be too general and may no longer provide mandatory procedures. Third, the new regulations may not provide adequate accountability because they may not require knowledge-based deliverables containing evidence of knowledge at key decision points. At a strategic level, some cultural changes will be necessary to translate policy into action. At the very top level, this means DOD leadership will have to take control of the investment dollars and say “no” in some circumstances if programs are inappropriately deviating from sound acquisition policy. 
In my opinion, programs should follow a knowledge-based acquisition policy—one that embraces best practices—unless there is a clear and compelling national security reason not to. Other cultural changes instrumental to implementing change include:

- Keeping key people in place long enough so that they can affect decisions and be held accountable.
- Providing program offices with the skilled people needed to craft acquisition approaches that implement policy and to effectively oversee the execution of programs by contractors.
- Realigning responsibilities and funding between science and technology organizations and acquisition organizations to enable the separation of technology development from product development.
- Bringing discipline to the requirements-setting process by demanding a match between requirements and resources.
- Requiring readiness and operating cost as key performance parameters prior to beginning an acquisition.
- Designing and implementing test programs that deliver knowledge when needed, including reliability testing early in design.

Ultimately, the success of the new acquisition policy will be seen in individual program and resource decisions. Programs that are implementing knowledge-based policies in their acquisition approaches should be supported and resourced, presuming they remain critical to national needs and affordable within current and projected resource levels. Conversely, if programs that repeat the approaches of the past are approved and funded, past policies—and their outcomes—will be reinforced, with a number of adverse implications. DOD will continue to face challenges in modernizing its forces with new demands on the federal dollar created by changing world conditions. Consequently, it is incumbent upon DOD to find and adopt best product development practices that can allow it to manage its weapon system programs in the most efficient and effective way. 
Success over the long term will depend not only on policies that embrace evolutionary, knowledge-based acquisition practices but also on DOD leadership’s sustaining its commitment to improving business practices and ensuring that those adopted are followed and enforced. DOD’s new acquisition policy embraces the best practice concepts of knowledge-based, evolutionary acquisition and represents a good first step toward achieving better outcomes from major acquisition programs. The F/A-22 program followed a different path from its beginning: a big bang, high-risk approach whose outcomes so far have been increased costs, quality and reliability problems, growing procurement reductions, and delays in getting the aircraft to the war fighter. Since this program is nearing the end of development and is already in production, it is too late to adopt a knowledge-based approach, but the program can limit further cost increases and adverse outcomes by not ramping up production beyond current levels until developmental and operational testing are completed and reliability goals have been demonstrated. Regardless of the F/A-22’s current predicament, the new policy can and should be used to manage all new acquisition programs and should be adapted to those existing programs that have not progressed too far in development to benefit. At a minimum, the F/A-22 should serve as a lesson learned from which to effect a change in the future DOD acquisition environment. The costs of doing otherwise are simply too high to tolerate. Mr. Chairman, this concludes my prepared statement. I would be happy to respond to any questions that you or other members of the Subcommittee may have. Over the last 6 years, DOD has identified about $18 billion in estimated production cost growth during the course of two DOD program reviews. As a result, the estimated cost of the production program currently exceeds the congressional cost limit. 
The Air Force has implemented cost reduction plans designed to offset a significant amount of this estimated cost growth, but the effectiveness of these plans has varied. During a 1997 review, the Air Force estimated cost growth of $13.1 billion. The major contributing factors to this cost growth were inflation, increased estimates of labor and material costs associated with the airframe and engine, and engineering changes to the airframe and engine. These factors made up about 75 percent of the cost growth identified in 1997. In August 2001, DOD estimated an additional $5.4 billion in cost growth for the production of the F/A-22, bringing total estimated production cost to $43 billion. The major contributing factors to this cost growth were again increased labor costs and airframe and engine costs, which together accounted for almost 70 percent of the cost growth. According to program officials, major contractors’ and suppliers’ inability to achieve the expected reductions in labor costs throughout the building of the development and early production aircraft has been the primary reason for estimating this additional cost growth. The Air Force was able to implement cost reduction plans and offset cost growth by nearly $2 billion in the first four production contracts awarded. As shown in table 3, the total offsets for these contracts slightly exceeded earlier projections, by about $0.5 million. Cost reduction plans exist but have not yet been implemented for subsequent production lots planned for fiscal years 2003 through 2010 because contracts for these production lots have not yet been awarded. If implemented successfully, the Air Force expects these cost reduction plans to achieve billions of dollars in offsets to estimated cost growth and to allow the production program to be completed within the current production cost estimate of $43 billion. However, this amount exceeds the production cost limit of $36.8 billion. 
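The production cost arithmetic cited above can be checked directly. The short Python sketch below combines the two reviews’ growth estimates and compares the current estimate with the congressional limit; all inputs come from the figures in this statement.

```python
# Arithmetic behind the F/A-22 production cost figures cited above
# (all amounts in $ billions).
growth_1997 = 13.1   # cost growth estimated in the 1997 review
growth_2001 = 5.4    # additional cost growth estimated in August 2001

total_growth = growth_1997 + growth_2001
print(f"Identified production cost growth: ${total_growth:.1f} billion")

current_estimate = 43.0      # current production cost estimate
congressional_limit = 36.8   # congressional production cost limit
over_limit = current_estimate - congressional_limit
print(f"Amount over the cost limit: ${over_limit:.1f} billion")
```

The two reviews sum to $18.5 billion, consistent with the “about $18 billion” cited earlier, and the current estimate exceeds the limit by $6.2 billion.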
In addition, while the Air Force has been attempting to offset costs through production improvement programs (PIPs), recent funding cutbacks for PIPs may reduce their effectiveness. PIPs focus specifically on improving production processes to realize savings by using an initial government investment. The earlier the Air Force implements PIPs, the greater the impact on the cost of production. Examples of PIPs previously implemented by the Air Force include manufacturing process improvements for avionics, improvements in fabrication and assembly processes for the airframe, and redesign of several components to enable lower production costs. As shown in figure 3, the Air Force reduced the funding available for investment in PIPs by $61 million for lot 1 and $26 million for lot 2 to cover cost growth in production lots 1 and 2. As a result, it is unlikely that PIPs covering these two lots will be able to offset cost growth as planned. Figure 4 shows the remaining planned investment in PIPs through fiscal year 2006 and the $3.7 billion in estimated cost growth that can potentially be offset through fiscal year 2010 if the Air Force invests as planned in these PIPs. In the past, Congress has been concerned about the Air Force’s practice of requesting fiscal year funding for these PIPs but then using part of that funding for F/A-22 airframe cost increases. Recently, Congress directed the Air Force to submit a request if it plans to use PIP funds for an alternate purpose.

Best Practices: Setting Requirements Differently Could Reduce Weapon Systems’ Total Ownership Costs. GAO-03-57. Washington, D.C.: February 11, 2003.
Best Practices: Capturing Design and Manufacturing Knowledge Early Improves Acquisition Outcomes. GAO-02-701. Washington, D.C.: July 15, 2002.
Defense Acquisitions: DOD Faces Challenges in Implementing Best Practices. GAO-02-469T. Washington, D.C.: February 27, 2002. 
Best Practices: Better Matching of Needs and Resources Will Lead to Better Weapon System Outcomes. GAO-01-288. Washington, D.C.: March 8, 2001.
Best Practices: A More Constructive Test Approach Is Key to Better Weapon System Outcomes. GAO/NSIAD-00-199. Washington, D.C.: July 31, 2000.
Defense Acquisitions: Employing Best Practices Can Shape Better Weapon System Decisions. GAO/T-NSIAD-00-137. Washington, D.C.: April 26, 2000.
Best Practices: DOD Training Can Do More to Help Weapon System Programs Implement Best Practices. GAO/NSIAD-99-206. Washington, D.C.: August 16, 1999.
Best Practices: Better Management of Technology Development Can Improve Weapon System Outcomes. GAO/NSIAD-99-162. Washington, D.C.: July 30, 1999.
Best Practices: DOD Can Help Suppliers Contribute More to Weapon System Programs. GAO/NSIAD-98-87. Washington, D.C.: March 17, 1998.
Best Practices: Successful Application to Weapon Acquisition Requires Changes in DOD’s Environment. GAO/NSIAD-98-56. Washington, D.C.: February 24, 1998.
Best Practices: Commercial Quality Assurance Practices Offer Improvements for DOD. GAO/NSIAD-96-162. Washington, D.C.: August 26, 1996.
Tactical Aircraft: Status of the F/A-22 Program. GAO-03-603T. Washington, D.C.: April 2, 2003.
Tactical Aircraft: DOD Should Reconsider Decision to Increase F/A-22 Production Rates While Development Risks Continue. GAO-03-431. Washington, D.C.: March 14, 2003.
Tactical Aircraft: DOD Needs to Better Inform Congress about Implications of Continuing F/A-22 Cost Growth. GAO-03-280. Washington, D.C.: February 28, 2003.
Tactical Aircraft: F-22 Delays Indicate Initial Production Rates Should Be Lower to Reduce Risks. GAO-02-298. Washington, D.C.: March 5, 2002.
Tactical Aircraft: Continuing Difficulty Keeping F-22 Production Costs within the Congressional Limitation. GAO-01-782. Washington, D.C.: July 16, 2001.
Tactical Aircraft: F-22 Development and Testing Delays Indicate Need for Low-Rate Production. GAO-01-310. Washington, D.C.: March 15, 2001.
Defense Acquisitions: Recent F-22 Production Cost Estimates Exceeded Congressional Limitation. GAO/NSIAD-00-178. Washington, D.C.: August 15, 2000.
Defense Acquisitions: Use of Cost Reduction Plans in Estimating F-22 Total Production Costs. GAO/T-NSIAD-00-200. Washington, D.C.: June 15, 2000.
Budget Issues: Budgetary Implications of Selected GAO Work for Fiscal Year 2001. GAO/OCG-00-8. Washington, D.C.: March 31, 2000.
F-22 Aircraft: Development Cost Goal Achievable If Major Problems Are Avoided. GAO/NSIAD-00-68. Washington, D.C.: March 14, 2000.
Defense Acquisitions: Progress in Meeting F-22 Cost and Schedule Goals. GAO/T-NSIAD-00-58. Washington, D.C.: December 7, 1999.
Fiscal Year 2000 Budget: DOD’s Procurement and RDT&E Programs. GAO/NSIAD-99-233R. Washington, D.C.: September 23, 1999.
Budget Issues: Budgetary Implications of Selected GAO Work for Fiscal Year 2000. GAO/OCG-99-26. Washington, D.C.: April 16, 1999.
Defense Acquisitions: Progress of the F-22 and F/A-18E/F Engineering and Manufacturing Development Programs. GAO/T-NSIAD-99-113. Washington, D.C.: March 17, 1999.
F-22 Aircraft: Issues in Achieving Engineering and Manufacturing Development Goals. GAO/NSIAD-99-55. Washington, D.C.: March 15, 1999.
F-22 Aircraft: Progress of the Engineering and Manufacturing Development Program. GAO/T-NSIAD-98-137. Washington, D.C.: March 25, 1998.
F-22 Aircraft: Progress in Achieving Engineering and Manufacturing Development Goals. GAO/NSIAD-98-67. Washington, D.C.: March 10, 1998.
Tactical Aircraft: Restructuring of the Air Force F-22 Fighter Program. GAO/NSIAD-97-156. Washington, D.C.: June 4, 1997.
Defense Aircraft Investments: Major Program Commitments Based on Optimistic Budget Projections. GAO/T-NSIAD-97-103. Washington, D.C.: March 5, 1997.
F-22 Restructuring. GAO/NSIAD-97-100BR. Washington, D.C.: February 28, 1997.
Tactical Aircraft: Concurrency in Development and Production of F-22 Aircraft Should Be Reduced. GAO/NSIAD-95-59. Washington, D.C.: April 19, 1995.
Tactical Aircraft: F-15 Replacement Issues. GAO/T-NSIAD-94-176. Washington, D.C.: May 5, 1994.
Tactical Aircraft: F-15 Replacement Is Premature as Currently Planned. GAO/NSIAD-94-118. Washington, D.C.: March 25, 1994.
Over the next 5 years, DOD's overall investments are expected to average $150 billion a year to modernize and transition our forces. In addition, DOD must modernize its forces amid competing demands for federal funds, such as health care and homeland security. Therefore, it is critical that DOD manage its acquisitions in the most cost-efficient and effective manner possible. DOD's newest acquisition policy emphasizes the use of evolutionary, knowledge-based concepts that have been proven to produce more effective and efficient weapon system outcomes. However, most DOD programs currently do not employ these practices and, as a result, experience cost increases, schedule delays, and poor product quality and reliability. This testimony compares the best practices for developing new products with the experiences of the F/A-22 program. GAO's reviews of commercial best practices have identified key enablers of successful product development programs and focused on how DOD can better leverage its investments by shortening the time it takes to field new capabilities at a more predictable cost and schedule. First, commercial firms use an approach that evolves a product toward its ultimate capabilities on the basis of mature technologies and available resources. This approach includes in the initial development only the product features and capabilities achievable with available resources. Further product enhancements are planned for subsequent development efforts, when technologies are proven mature and other resources are available. Second, commercial firms ensure that a high level of knowledge exists at key junctures during a product's development. 
The knowledge-based process includes three points: (1) before a program is launched, successful programs match customer needs and available resources--technology, engineering knowledge, time, and funding; (2) about midway through development, the product's design is demonstrated to be stable and to meet performance requirements; and (3) before production begins, programs must show that the product can be manufactured within cost, schedule, and quality targets. In contrast, the F/A-22 program illustrates what can happen when a major acquisition program is not guided by the principles of evolutionary, knowledge-based acquisition. When the program was started, several key technologies were not mature. Program managers proceeded through development without the requisite knowledge to effectively manage program risk, and, at the start of production, key manufacturing processes were not under control. The F/A-22 program has undergone significant cost increases. Instead of fielding early capabilities to the war fighter, the development cycle has extended to 19 years so far, and original quantities have been significantly reduced, raising concerns about the capability the program will eventually deliver. DOD recognizes the need to get better weapon system outcomes, and its newest acquisition policy emphasizes the use of evolutionary, knowledge-based acquisition concepts proven to produce better outcomes in developing new products. However, policy changes alone are not enough. Leadership commitment and attention to putting the policy into practice for individual programs is needed to avoid the problems of the past. DOD will have many opportunities to do so over the next several years with its force modernization investments.
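The three knowledge points function as sequential go/no-go gates. The minimal Python sketch below illustrates that gating logic; only the 90-percent design-drawing standard comes from the testimony, while the process-control threshold and the function itself are illustrative assumptions, not GAO criteria.

```python
# Illustrative sketch of knowledge-point gating. Only the 90% drawing
# standard is drawn from the testimony; other thresholds are assumptions.

def knowledge_gates(tech_mature, drawings_released_pct, processes_in_control_pct):
    """Return which of the three knowledge points a program has passed."""
    return {
        "KP1: technologies mature at program launch": tech_mature,
        "KP2: >=90% of drawings releasable at critical design review":
            drawings_released_pct >= 90,
        "KP3: critical manufacturing processes in statistical control":
            processes_in_control_pct >= 90,  # illustrative threshold
    }

# Roughly the F/A-22's position as described: immature technology at
# launch, 26% of drawings at the design review, and under 50% of critical
# processes in control at the start of production.
for gate, passed in knowledge_gates(False, 26, 50).items():
    print(f"{gate}: {'pass' if passed else 'FAIL'}")
```

On the figures reported for the F/A-22, all three gates fail, which is the pattern the testimony associates with cost growth and schedule delay.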
U.S. agencies have different responsibilities related to international regulatory cooperation. For example, Commerce, State, USTR, OMB, and USDA have government-wide responsibilities. Their roles and responsibilities are determined primarily through statutes and executive orders. U.S. treaty obligations also influence their activities, as shown in table 1. To some extent these agencies bring structure and direction to activities that are in practice pursued in a decentralized manner by multiple agency participants. U.S. regulatory agencies have varying missions, such as protecting public health or safety, and engage in multiple activities to fulfill their missions. Statutes establish agencies’ missions and the scope and limits of each agency’s authority. Agencies often implement their statutory missions by developing, issuing, and enforcing regulations. Agencies may also need to comply with multiple procedural and analytical requirements during the rulemaking process that precedes the issuance of regulations, including participation in the interagency review and coordination processes summarized in table 1. Regulation is one of the principal tools that the U.S. federal government uses to implement public policy. Underlying federal regulatory actions is the long-standing rulemaking process established by the Administrative Procedure Act (APA). This act establishes broadly applicable federal requirements for informal rulemaking, also known as notice and comment rulemaking. At a high level, domestic rulemaking activities governed by the APA generally include four basic phases: 1. Consideration of regulatory action: The agency gathers information to determine (1) whether a rulemaking is needed and (2) the range of regulatory options. 2. 
Development and issuance of proposed regulation: The agency drafts a proposed regulation, including the preamble (the portion of the regulation that informs the public of the supporting reasons and purpose of the regulation) and the language in the regulation. The agency also begins to address analytical and procedural requirements and engages in interagency coordination and OMB review, where required. After these are complete, the agency publishes the proposed regulation in the Federal Register and requests comments from the public. 3. Development and issuance of final regulation: The agency responds to public comments, completes analytical and procedural requirements, engages in interagency coordination and OMB review where required, and publishes the final regulation in the Federal Register. 4. Implementation of final regulation: The agency enforces compliance with the final regulation and monitors its performance. Various executive orders and guidance establish agencies’ processes that govern international regulatory cooperation activities. Executive Order 12866 established the basic principles and processes that help guide and coordinate regulatory actions by executive agencies (other than independent regulatory agencies). Three components of the order are especially relevant to current regulatory cooperation efforts. First, the order established general principles for government regulation, including that agencies should assess the costs and benefits of available regulatory alternatives. 
Second, the order established centralized review and coordination of rulemaking, particularly by (1) requiring agencies to submit draft significant regulations to OMB’s Office of Information and Regulatory Affairs (OIRA) for interagency review before they are published and (2) establishing the RWG to serve as a forum to assist agencies in identifying and analyzing important regulatory issues. Third, the order required agencies to compile and make public their regulatory agendas and plans, which include identifying the anticipated effects of forthcoming regulations. Executive Order 13563 reaffirmed the principles, structures, and definitions governing contemporary regulatory review that were established by Executive Order 12866. Particularly relevant to this report, the order states that the regulatory system must promote competitiveness, and it also expanded expectations for agencies to retrospectively review their existing regulations. OMB periodically issues guidance to executive agencies on implementing executive orders. One key example related to the regulatory review orders discussed above is OMB Circular A-4, issued in 2003. The circular provides OMB’s guidance on the development of regulatory analysis as required under Executive Order 12866 and related authorities, defining good regulatory analysis and standardizing the way benefits and costs of federal regulatory actions are measured and reported. The circular includes a brief paragraph about considering the impacts of federal regulation on global markets and trade. In May 2011, USTR and OMB released a joint memorandum restating U.S. trade obligations and providing additional guidance to agencies on how to carry them out. In particular, the joint memorandum stressed the importance of agencies’ attention to regulatory analysis requirements in prior executive orders and OMB Circular A-4, as well as avoiding unnecessary barriers to trade as specified in the Trade Agreements Act. 
The memo also encouraged agencies to engage in international collaboration activities. Some U.S. international regulatory cooperation efforts occur within the context of trade policy and negotiations. Reducing foreign regulatory barriers to trade is a key U.S. trade objective. In support of this objective, international agreements and U.S. legislation enacting them encourage and guide agencies’ participation in some international regulatory cooperation activities. For example, the Uruguay Round Agreements Act codifies the WTO Agreement on Technical Barriers to Trade (TBT Agreement) and the WTO Agreement on the Application of Sanitary and Phytosanitary Measures (SPS Agreement) and includes additional international regulatory cooperation responsibilities. Several of the most salient obligations are briefly described below. For technical regulations, the TBT Agreement requires members to use international standards or the relevant parts of them as a basis for technical regulations where available and appropriate, and, in certain instances, notify the WTO of proposed regulations with possible trade impacts and consider comments received before finalizing those regulations. Further, the TBT Agreement states that regulations should be no more trade restrictive than needed to fulfill a legitimate objective. Specifically, TBT Agreement art. 2.2, 2.4, and 2.9. For SPS measures (including measures to protect animal or plant life from pests, diseases, or disease-causing organisms as well as to protect human or animal life), the SPS Agreement requires members to base their measures on existing international standards, or where the measure results in a higher level of protection, allows members to maintain or introduce their own standard if there is a scientific justification. Members are also required to ensure that their regulations are applied only to the extent necessary to protect human, animal, or plant life or health. 
Members are to notify the WTO at an early stage in the rulemaking if a proposed regulation differs from an international standard and may have a significant trade impact on other members, in order to receive comments for consideration. Free Trade Agreements (FTA): According to USTR, FTAs, such as the U.S.-Korea Free Trade Agreement, build on the disciplines of the TBT Agreement by providing for greater transparency. Some U.S. FTAs also provide that interested parties and persons should be given opportunities to comment on proposed measures. According to Commerce officials, most of these bilateral trade agreements also provide for more timely notification mechanisms than multilateral mechanisms such as the TBT Agreement. In addition to these finalized agreements, the United States has offered proposals in ongoing Trans-Pacific Partnership negotiations toward a trade agreement among 11 participating nations to promote transparency. More recently, on February 13, 2013, President Obama and European Union (EU) leaders announced their intention to launch negotiations on a Transatlantic Trade and Investment Partnership. According to USTR, the goals of the partnership include reducing the cost of differences in regulation and standards by promoting greater compatibility, transparency, and cooperation. All agencies in our study reported that they engage in a range of international regulatory cooperation activities. These activities include U.S. agencies and foreign counterparts sharing scientific data, developing and using the same international regulatory standards, and recognizing each other's regulations as equivalent. Cooperation can address existing regulatory differences and help avoid future ones. These activities generally fall into six broad categories, as shown in table 2 below. See appendix II for details on the illustrative examples.
International regulatory cooperation activities involve bilateral and multilateral governmental relationships and participation in third-party organizations, such as standards-setting bodies. For example, some agencies in our study participate in international organizations, such as the World Organization for Animal Health (OIE) or the International Organization for Standardization (ISO). International cooperation activities may be formal or informal, ranging from participation in international organizations established by international agreements to informal regulatory information sharing and dialogues. International regulatory cooperation activities may also occur on a government-wide basis and address multiple sectors. For example, the U.S.-Canada Regulatory Cooperation Council (RCC) is an effort to increase regulatory transparency and coordination between the two countries. Action plans exist in the areas of agriculture and food, transportation, health and personal care products and workplace chemicals, the environment, and cross-sectoral issues. Similarly, OMB, Commerce, and other federal agencies also participated in the Asia-Pacific Economic Cooperation (APEC) effort to share and promote good regulatory practices, such as transparency and centralized review of regulations, among APEC economies. Agency officials said they engage in international regulatory cooperation activities primarily because they are operating in an increasingly global environment and many products that agencies regulate originate overseas. For example, according to FDA’s Global Engagement Report, the United States imports 80 percent of active pharmaceutical ingredients and imports of FDA-regulated products have grown dramatically in recent years. FDA reported that the agency engages in international cooperation activities to ensure products produced overseas are safe for U.S. consumers. Similarly, CPSC operates in an increasingly global environment. According to CPSC, the value of U.S. 
imports under CPSC's jurisdiction has skyrocketed in recent years, with imports from China more than quadrupling from $62.4 billion in 1997 to $301.0 billion in 2010. Moreover, in fiscal year 2012, 4 out of every 5 consumer product recalls (345 of 439 recalls) involved imported products, making imports a critical focus for CPSC. Agencies also cooperate with foreign counterparts in an effort to gain efficiencies. For example, EPA participates in an initiative on pesticides through the Organisation for Economic Co-operation and Development (OECD) that has resulted in regulatory efficiencies. OECD also reported that, by accepting the same test results OECD-wide, unnecessary duplication of testing is avoided, thereby saving resources for industry and society as a whole. A 2007 study for the OECD Working Group on Pesticides estimated resource savings of 33 to 40 percent as a result of joint review by three to five countries, compared with each country working alone. The study noted that the savings from reducing duplicative expert evaluation work significantly outweighed the marginal increase for project management, coordination, and travel. These tools and approaches facilitate work sharing for regulators and help avoid costly, duplicative testing by ensuring that the data developed and submitted in one country can be used by other countries in reaching their regulatory decisions. Agencies' efforts to cooperate on regulatory programs may also facilitate trade and support the competitiveness of U.S. businesses. FDA officials said that international regulatory cooperation and harmonization have public health benefits, promote regulatory efficiency, and also offer indirect competitiveness advantages for companies.
FDA officials said that public health regulatory and competitiveness goals are often complementary: by upholding and enforcing scientifically valid standards, public health is protected and promoted at the same time that companies benefit from a level playing field that should make their products more competitive. Moreover, bringing a quality, safe, effective new drug to market faster yields health benefits for individuals, because they have access to the drug sooner, as well as trade benefits for industry, which has access to more markets. In addition, U.S. agency officials said that when they participate in international standards development, an existing U.S. regulation or policy approach may be used as the basis for the international standard. When other countries adopt U.S. approaches to regulations, it can lower compliance costs and support competitiveness for U.S. businesses. For example, EPA's Office of Air and Radiation (OAR) officials said that OAR worked within the World Forum for Harmonization of Vehicle Regulations (WP.29) to urge the use of a U.S. regulation as the basis for a global regulation on test procedures for off-highway construction vehicle engines. OAR officials said U.S. manufacturers supported this effort because they sell equipment internationally, and complying with one set of regulations reduces their fixed costs. There are four interagency review processes routinely used to identify and review regulations that could have trade or competitiveness impacts and to encourage international regulatory cooperation. OMB officials said that the centralized regulatory review process under Executive Orders 12866 and 13563 provides for interagency coordination among OMB, USTR, State, and Commerce on regulations. USTR officials said they work with agencies as needed on regulatory issues that have an international impact prior to the interagency regulatory review process.
However, the interagency review process provides OMB, USTR, State, and Commerce another opportunity to give input on any proposed significant regulation from agencies, whether or not international impacts were raised earlier. Independent agencies are not required to participate in the interagency review process. The May 2012 Executive Order 13609 on promoting international regulatory cooperation establishes processes for agencies to report on efforts in this area. The order requires agencies that are required to submit a regulatory plan to report in those plans a summary of their international regulatory cooperation activities that are reasonably anticipated to lead to significant regulations. It also requires agencies to identify regulations with significant international impacts in the Unified Agenda, on Reginfo.gov, and on Regulations.gov. Generally, all U.S. federal agencies are required to consult with State before concluding international agreements (22 C.F.R. § 181.4(a)). State is responsible for ensuring that any proposed international agreement is consistent with U.S. foreign policy. State officials said that the Secretary of State must be consulted on international regulatory cooperation issues involving the negotiation or signing of international agreements or arrangements. The Trade Agreements Act, as amended, requires U.S. agencies to coordinate, in specified circumstances, on standards-related trade measures as part of their overall statutory responsibilities. For example, USTR is required to coordinate international trade policy issues that arise as a result of implementation of the WTO TBT Agreement. USTR is also required to inform and consult any federal agencies having expertise in the matters under discussion or negotiation when coordinating U.S. discussions and negotiations with foreign countries for the purpose of establishing mutual arrangements with respect to standards-related activities.
USTR also must consult with the cited agency and members of the interagency trade organization if a foreign government makes a representation to the USTR alleging that a U.S. standards-related activity violates U.S. TBT obligations. Commerce and USDA must coordinate with USTR with respect to TBT international standards-related activities that may substantially affect the commerce of the United States. Furthermore, with regard to TBT obligations, the Secretaries of Commerce and USDA have a role in assuring adequate representation of U.S. interests in international standards organizations, and encouraging cooperation among federal agencies so as to facilitate development of a unified U.S. position. However, some international regulatory cooperation activities are not covered by these review processes, such as activities related to information sharing and scientific collaboration, capacity building, or the use of international standards in regulations that are not significant. OMB and USTR also lead interagency forums on regulations and trade that have different responsibilities related to international regulatory cooperation. Executive Order 13609 assigns responsibilities to the Regulatory Working Group (RWG), chaired by OMB's Administrator of OIRA, to serve as a forum to discuss, coordinate, and develop a common understanding among agencies of U.S. government priorities for international regulatory cooperation. According to OMB officials, the RWG provides a forum to foster greater cooperation and coordination of U.S. government strategies, including those for promoting regulatory transparency, sound regulatory practices, and U.S. regulatory approaches abroad. OMB officials also said that the RWG is developing guidance to implement the executive order. USTR chairs the policy-level Trade Policy Staff Committee (TPSC), which maintains U.S. interagency mechanisms for trade policy coordination among State, Commerce, the Department of Labor, USDA, and other appropriate agencies.
The TPSC identifies and addresses foreign government trade measures among other duties. USTR officials said USTR coordinates with agencies on trade issues related to regulations at the working level through the TPSC subcommittees on technical barriers to trade and sanitary and phytosanitary barriers to trade. USTR explained that these subcommittees are involved in supporting international regulatory cooperation by anticipating and resolving potential regulatory conflicts that could impair trade. USTR officials also noted that at the TPSC subcommittee level, USTR coordinates with officials from regulatory agencies in preparing for participation in international cooperation activities, such as APEC meetings, as well as regulators’ involvement in international standards development. Nevertheless, some agency officials reported that greater coordination between regulatory forums and trade forums could improve outcomes. USTR officials also said there is uncertainty about the implementation of Executive Order 13609 and how it will relate to USTR’s trade responsibilities. According to OMB officials, one of the main objectives of Executive Order 13609 is to improve coordination of international regulatory cooperation. They anticipate that forthcoming guidance on Executive Order 13609 will address collaboration with the RWG and other interagency groups, particularly the TPSC. Beyond these forums for interagency coordination, regulatory agency officials we interviewed said the current processes could benefit from better information sharing among agencies on the implementation of international regulatory cooperation activities and lessons learned. 
We have previously found that it is important to ensure that the relevant participants have been included in the collaborative effort, including those with the knowledge, skills, and abilities to contribute to the outcomes of the collaborative effort. The RWG and TPSC are designed for high-level, government-wide policy discussions, and participants in the RWG and TPSC are higher-level management or policy officials who may be somewhat removed from the technical activities that underpin rulemaking. Regulatory agency officials we interviewed pointed out that additional ways to facilitate exchanges about best practices and day-to-day implementation would be helpful. An agency official said that there may be a benefit to having an interagency dialogue, working group, or other forum through which officials can share information on challenges and successes in implementing international regulatory cooperation. For example, officials said EPA and FTC both have regulations related to labeling, and there may be opportunities that could result from sharing information and best practices with international regulators. Agency officials we interviewed identified another example illustrating the potential benefits of staff-level exchanges and information sharing during a multiagency meeting on this report. The officials that we interviewed said it is challenging to measure the outcomes of international regulatory cooperation activities and there is a need for an appropriate metric to show the value of funds spent on these activities. EPA officials we interviewed stated that in one case they successfully quantified the benefits from work with OECD's Mutual Acceptance of Data program. According to EPA, the implementation of this decision has saved both the governments of 34 member countries and industry nearly $225 million annually and also generated many nonquantifiable benefits, such as promoting animal welfare in chemical testing.
Officials attending a GAO multiagency meeting said similar practices would be helpful to justify investments in international regulatory cooperation activities. Agency officials we interviewed said they found a multiagency meeting on this report useful in part because the meeting involved discussions of day-to- day implementation of these issues. Further, Commerce officials suggested that enhanced coordination among participants in these forums would also benefit from including existing interagency standards policy groups, such as the Interagency Committee on Standards Policy and the National Science and Technology Council’s Subcommittee on Standards. Without some enhancements to the current forums for regulators and trade officials to collaborate, opportunities to share practices and improve safety and regulatory efficiencies and to reduce trade barriers could be missed. Agency officials said there is currently not a forum to meet this need. Although nonfederal stakeholder input into regulatory processes is important, the stakeholders we spoke with said it can be challenging for them to provide input into agencies’ international regulatory cooperation activities because of the required resources and the difficulty of becoming aware of such activities. Congresses and Presidents have required agencies to comply with multiple procedural requirements in an effort to promote public participation in rulemaking, among other goals. For formal international regulatory cooperation, such as standards setting, according to nonfederal stakeholders, they can directly observe international meetings and provide input in some cases. However, nonfederal stakeholders told us that high levels of resources are required to participate in international meetings, which can limit participation in practice. 
For informal international cooperation activities, nonfederal stakeholders said it is even more challenging to track and provide input into the agencies' activities because some activities described to us by regulatory agencies precede the decision to regulate and therefore may not be transparent to the public. While it is generally challenging for nonfederal stakeholders to provide input into U.S. agencies' international regulatory cooperation activities, it is particularly important that stakeholders at least have the opportunity to participate and advise agencies when those activities are anticipated to lead to the development of regulations. However, further complicating nonfederal stakeholders' efforts, there is no single source of public information on anticipated U.S. and foreign rulemakings with an international impact. For example, the Unified Agenda and OMB Regulatory Review Database both identify U.S. regulations that have an international impact. The Unified Agenda includes regulations under development or review, while the OMB Regulatory Review Database includes significant regulations submitted to OMB for review. In addition, the WTO maintains databases on certain member countries' proposed regulations related to technical barriers to trade and sanitary and phytosanitary measures, namely those self-identified as having potential trade impacts or involving divergence from international standards. Agency officials we interviewed agreed that stakeholder involvement is important and that nonfederal stakeholders are uniquely positioned to identify and call attention to unnecessary differences between U.S. regulations and those of its trading partners. Agencies and nonfederal stakeholders told us that the U.S.-Canada RCC has implemented practices to engage nonfederal stakeholders.
For example, the 29 work plans that make up the RCC were developed in part from the response to a Federal Register request for public comments concerning regulatory cooperation activities that would help eliminate or reduce unnecessary regulatory divergences in North America that disrupt U.S. exports. Stakeholder outreach activities are also included in the work plans. OMB is also taking steps to increase the transparency of agencies’ international regulatory cooperation activities and included new reporting requirements for agencies in Executive Order 13609. The order directs agencies that are required to submit a regulatory plan to include summaries of their international regulatory cooperation activities that are reasonably anticipated to lead to significant regulations. An agency official also cautioned it may not be realistic for agencies to report all international regulatory cooperation activities as many are informal in nature. Agency officials we interviewed reported that the outcomes from international regulatory cooperation can inform all phases of the rulemaking process, from affecting an agency’s decision whether or not to regulate in a particular area to implementing and enforcing regulations. According to an agency official, there is no bright line that separates international regulatory cooperation activities from regulatory programs. For example, U.S. agencies share scientific and technical information with their foreign counterparts, which can inform all stages of the rulemaking process. In addition, information sharing can help inform an agency’s decision on whether or not to regulate a product. When countries have differences in regulations in a particular area, there are opportunities to coordinate on the science underlying regulatory decisions in a particular area. 
EPA Office of Chemical Safety and Pollution Prevention (OCSPP) officials said that for chemical safety regulations, countries are working within different statutory and regulatory frameworks and different levels of acceptance of risk that can make it difficult to reach full agreement on a regulatory approach. In such cases, sharing information with foreign counterparts can facilitate agreement on a common understanding of the issue or on underlying technical or scientific issues. According to officials that we interviewed, OCSPP also focuses on transparency and good regulatory practices, which lead to commonality between policies, work sharing on scientific reviews, and greater harmonization in the long term. Some international regulatory cooperation activities, such as the development of international standards or practices, can inform and contribute to the development and issuance of a proposed regulation. Certain U.S. agencies reported that they coordinate with organizations that develop international standards and may use these standards when developing domestic regulations. For example, DOT’s Pipeline and Hazardous Materials Safety Administration (PHMSA) participates in the United Nations (UN) Transport of Dangerous Goods (TDG) Subcommittee, which develops UN Model Regulations for the transportation of hazardous materials. In an effort to align with any changes to the UN Model Regulations, PHMSA considers these model regulations in a rulemaking every 2 years. As a result, related U.S. regulations are more closely aligned with trading partners and there are fewer country-unique regulations for businesses to comply with, which leads to improved safety results. According to PHMSA officials that we interviewed, when regulations are the same in different countries it enhances compliance and improves the efficiency of the transportation system by minimizing regulatory burdens and facilitating effective oversight. 
Similarly, Commerce officials pointed out that regulators often use common technical standards as the basis for regulation, which can reduce the burden on the regulated community. Other international regulatory cooperation activities are related to the implementation of regulations, such as equivalency agreements that assure compliance with U.S. requirements and capacity building. For example, USDA’s Agricultural Marketing Service (AMS) manages equivalency agreements for organic food labeling. The U.S. equivalence arrangement with the EU allows organic products certified in Europe or the United States to be sold as organic in either region. According to AMS officials, equivalency agreements result in expanded market access, fewer duplicative requirements, and lower certification costs for organic products. Previously, businesses that wanted to trade organic products had to obtain separate certifications for both the United States and EU, which meant a second set of fees, inspections, and paperwork. Agencies also engage in capacity building and provide technical assistance to countries to help foreign businesses comply with U.S. regulations when exporting to the United States. For example, FDA developed a comprehensive international food safety capacity-building plan in response to a requirement in the FDA Food Safety Modernization Act. The plan establishes a strategic framework for the FDA, describes an approach that is based on prioritizing risks to U.S. consumers, and focuses on addressing system weaknesses working with foreign government and industry counterparts and other stakeholders. Agencies also engage in work-sharing arrangements with their foreign counterparts to gain efficiencies in the implementation of regulatory programs. 
For example, under the United States-Canada Beyond the Border Initiative, USDA’s Animal and Plant Health Inspection Service (APHIS) conducted a joint foot and mouth disease site visit in Colombia as part of the evaluation of Colombia’s request to export fresh beef. Coordinated inspections allow agencies to leverage resources with their foreign counterparts to fulfill their regulatory responsibilities. OIRA also engages in activities to strengthen the capacity of developing countries in several contexts, including APEC and work with Brazil, Vietnam, and Morocco. Some international regulatory cooperation activities that U.S. agencies shared with us are on products that are not regulated by U.S. agencies. Agencies do not issue regulations through programs where participation is voluntary but still may coordinate with foreign counterparts. For example, DOE is working with other countries through the Efficient Electrical End-use Equipment (4E) Implementing Agreement on efficiency and performance criteria and metrics, test methods, and qualified testing laboratories for new technology for solid state lighting. DOE officials said coordination on solid state lighting is important, because without a common agreement, it would be more difficult for products to enter the world market. Standardized labeling also helps customers understand the product they are buying and how its efficiency compares with other products. For regulations deemed significant under Executive Orders 12866 and 13563, U.S. agencies are required to assess the costs and benefits, but there is no requirement for agencies to conduct a separate analysis on competitiveness impacts when developing regulations. Among the general principles of regulation under Executive Order 13563 is that the U.S. regulatory system should promote economic growth, innovation, competitiveness, and job creation. 
Moreover, according to executive orders on regulatory review, among the possible effects that agencies should consider are the significant adverse effects on the ability of U.S. companies to compete in domestic and foreign markets. In addition, OMB Circular A-4's discussion on global competitiveness states: “The role of Federal regulation in facilitating U.S. participation in global markets should also be considered. Harmonization of U.S. and international rules may require a strong Federal regulatory role. Concerns that new U.S. rules could act as non-tariff barriers to imported goods should be evaluated carefully.” Further, these executive orders and related guidance do not apply to independent agencies. The concept of competitiveness is a general one, referring to the set of institutions, policies, and human and natural endowments that allow a country to remain productive. Depending on the circumstances, the focus of analysis could vary. Here, in the context of international regulatory cooperation, improvements to competitiveness might arise from lowering the cost of a firm's compliance with other countries' standards or expanding access of U.S. products to foreign markets. However, documenting the effect of the removal of barriers on firm cost and sales presents challenges because data on individual firm performance may not be available and because the effect of the regulatory action may be difficult to isolate. Still, in some cases, it may be possible to describe effects in terms of magnitude and direction. When agencies develop regulations related to international activities, officials from five of the seven agencies in our study told us that they consider competitiveness as needed. Officials from two agencies in our study provided examples of analysis of competitiveness impacts in the rulemaking record.
Agency officials said competitiveness impacts for some rulemakings are likely to be indirect and may not rise to the level of inclusion in the rulemaking record. For example, according to officials, APHIS's regulations focus on preventing the introduction and spread of pests and diseases of livestock and plants. The officials explained it is difficult to point to any APHIS regulations that can be said to have a direct effect on the ability of U.S. businesses to compete in the marketplace. In another example, officials from DOT's PHMSA said their regulations related to pipeline safety are for pipelines within the United States. When included in the rulemaking record, competitiveness is likely to be a secondary or tertiary effect in rulemaking analysis. For example, according to OAR officials, most OAR rulemakings have few if any direct impacts on competitiveness. These impacts, if any, would likely be secondary or indirect. They said that competitiveness analysis, when appropriate, might examine whether increased production costs for a U.S. business may put it at a competitive disadvantage compared with a similar company in a different country that is not required to comply with a similar environmental regulation. Some agency officials we interviewed said competitiveness impacts can be challenging to identify, difficult to quantify, and resource intensive to complete and that they do not have tools to consider competitiveness during rulemaking. According to DOT's National Highway Traffic Safety Administration (NHTSA) officials, NHTSA has never addressed the competitiveness of U.S. businesses in any of its analyses. NHTSA does not have tools for analyzing the effects of its safety standards on the competitiveness of U.S. businesses.
For at least 10 years, NHTSA and DOT's Volpe Center have attempted to create a consumer-marketing model to help estimate the impact of the fuel economy program on sales and have been unsuccessful to date. They said that trying to determine the impact on competition of a relatively small safety standard, when NHTSA cannot do it for the enormous fuel economy standard, does not seem to be a good use of resources. However, officials from one agency we interviewed said that competitiveness impacts are assumed to exist when they are aligning regulations with trading partners, but agencies do not do a separate analysis. For example, according to PHMSA officials, PHMSA's harmonization rulemakings are premised on the assumption that harmonized standards reduce costs for businesses and therefore reduce barriers to trade. Specific cost-benefit analysis, however, is generally associated with comparing the estimated costs of a regulation with the safety and efficiency benefits associated with a specific change and not directly associated with competitiveness of U.S. businesses. Further, the TBT Agreement explains that using international standards as the basis of a technical regulation adopted for a specified legitimate objective shall be rebuttably presumed to not create unnecessary obstacles to international trade. Some agencies also use test standards developed by bodies such as the Illuminating Engineering Society and others. Many of these test standards are referenced in, or used as the basis for, standards developed by organizations such as the International Electrotechnical Commission, ISO, or other international standards-setting organizations. Agencies may also consider approaches taken by other countries. For example, in the development of a crib safety regulation, CPSC staff reviewed requirements of existing voluntary and international standards related to cribs.
The primary standards currently in effect are CPSC standards for full-size cribs, which reference the ASTM voluntary standard; a Canadian standard; a European standard; and an Australian and New Zealand standard. ASTM considered the existing international standards in the development of the current ASTM voluntary standard. The TBT Agreement includes requirements to use international standards or their relevant parts as the basis for technical regulation where available and appropriate; to participate in international standards development, within the limits of their resources; and to avoid unnecessary obstacles to trade. Similarly, under the National Technology Transfer and Advancement Act of 1995, agencies are required to use technical standards that are developed or adopted by voluntary consensus standards bodies unless they are inconsistent with applicable law or otherwise impracticable. If using standards other than voluntary consensus standards, agencies are also required to provide an explanation to OMB. Further, Executive Order 13609 on promoting international regulatory cooperation includes a requirement that for significant regulations that the agency identifies as having significant international impacts, agencies consider, to the extent feasible, appropriate, and consistent with law, any regulatory approaches by a foreign government that the United States has agreed to consider under a regulatory cooperation council work plan. DOT, CPSC, FDA, and USDA have some additional agency-specific documents related to considering international standards during rulemaking. Agency officials that we interviewed identified seven factors that have the greatest impact on improving the effectiveness of international regulatory cooperation. Some of these factors can facilitate agencies’ efforts if present in international regulatory cooperation activities while others can also act as a barrier when absent. 
In an environment of constrained budgets, agencies may not be able to address the factors equally, so it is particularly important for agencies to focus on the factors that facilitate their efforts. Therefore, as part of our evaluation, we ordered the factors in table 3 below based on discussions and written responses from agencies. As another part of our evaluation of these factors, we found that they align with each of the key features important for agencies to consider when implementing collaborative mechanisms. In September 2012, we identified features that agencies could benefit from considering when implementing interagency collaborative mechanisms (GAO-12-1022). For example, we found that: (1) resources are a key feature because collaborative efforts can take time and resources in order to accomplish such activities as building trust among the participants, setting up the ground rules for the process, attending meetings, conducting project work, and monitoring and evaluating the results of work performed; (2) establishment of agreements in formal documents can strengthen an agency’s commitment to working collaboratively; and (3) leadership is important to all collaborative efforts, but agencies have said that transitions within agencies or inconsistent leadership can weaken the effectiveness of any collaborative mechanism. We used those features as criteria to determine whether the seven main factors that agencies and stakeholders identified as affecting international regulatory cooperation reflected consideration of each of those issues. We applied these criteria by comparing agencies’ characterizations of the seven key factors affecting international regulatory cooperation to the specific questions identified in our 2012 report for agencies to consider when implementing collaborative mechanisms. That comparison demonstrated that one or more of the seven key factors corresponded to each of the features of effective collaborative mechanisms.
Resources. Recurring cooperation after implementation of regulations may be less resource intensive to maintain through monitoring of developments in foreign countries and by directly participating in formal and informal meetings. USDA Foreign Agricultural Service (FAS) officials also pointed out that it can take a long time before payoffs or results from resources invested in international regulatory cooperation become apparent. Agency officials we interviewed also identified some challenges to securing and sustaining resources for international regulatory cooperation activities. For example, officials said that international cooperation may be viewed as too resource intensive to inform each individual regulatory activity. Officials also said that investment in international regulatory cooperation is viewed in some agencies as optional if it conflicts with other priorities and responsibilities when the same staff members are needed for other regulatory activities. One FAS official said that one of the greatest resource constraints is securing the availability of regulators in his department. Agency officials said that their foreign counterparts also face resource constraints that may affect their participation in two ways. First, resource constraints may limit their ability to participate in international regulatory cooperation activities. Second, such constraints may encourage foreign counterparts to leverage their limited resources with the United States and other partners when the issues line up with their own priorities. U.S. participation in international regulatory cooperation can be a multi-agency effort, and officials identified some opportunities for leveraging funds from other agencies to participate in international activities on an ad hoc basis. For instance, one industry official said that agencies are going to have fewer resources and therefore should be interested in leveraging their resources with other countries as early as possible. To encourage compliance with the TBT Agreement, U.S. law authorizes the United States Trade Representative and the Secretary concerned to make grants to and enter into contracts with any other federal agency to assist that agency in implementing programs and activities such as participating in international standards-related activities. Agency officials confirmed that there are opportunities for them to leverage funds from the United States Agency for International Development (USAID) and State to participate in international meetings. In addition, USTR officials said that their agency is able to leverage funds that are not available to other U.S. agencies and can match funds for regulators to meet with their foreign counterparts in an international setting, such as through APEC meetings. However, officials from a different agency cautioned that such funds tend to be limited to efforts that involve developing countries and expressed concern that they are unlikely to be used to support regulators’ participation with the EU. With reductions to the federal budget, the money available to support regulatory cooperation may shrink.

Established processes. According to agency officials, having defined long-term processes and accountability mechanisms in place for working with foreign counterparts can facilitate international regulatory cooperation. Officials also said that such established processes can increase transparency for stakeholders and better enable input. Agencies said that defined processes developed through international agreements, including forums, international procedures, and other international mechanisms, are helpful.
Agreements, such as the WTO SPS Agreement, require members to consider international standards during their process to develop regulations. The WTO SPS Agreement generally obligates members to base their sanitary or phytosanitary measures on international standards from Codex, OIE, or the International Plant Protection Convention unless they have scientific justification or have determined a different level of protection through a risk assessment. In our September 2012 report, we concluded that the establishment of agreements in formal documents can strengthen an agency’s commitment to working collaboratively. Similarly, officials from DOT’s PHMSA said established processes for the UN TDG Subcommittee facilitate their cooperative efforts. The OECD also has established processes for chemicals in its rules. The binding nature of OECD rules ensures that all member countries abide by the requirements to accept data from other OECD members, which helps advance international regulatory coordination efforts.

High-level leadership. Agency officials told us in our interviews that high-level leadership within an agency and leadership from outside the agency can facilitate international regulatory cooperation, but a perceived lack of high-level commitment or changing priorities can serve as barriers. One academic expert said that the only way that international regulatory cooperation will work is with high-level attention from the White House, OMB, USTR, and the State Department. In addition, OMB officials we interviewed said high-level support and leadership is essential to the success of international regulatory cooperation. They also stressed that regulatory agencies must have buy-in themselves, rather than be coerced into international regulatory cooperation by outside agencies.
Similarly, Commerce’s International Trade Administration (ITA) officials said that executive orders and presidential initiatives, such as Executive Order 13609, the U.S.-Canada RCC, the U.S.-Mexico High Level Regulatory Cooperation Council, APEC leaders’ meetings, and the North American Leaders Summit, have increased visibility, encouraged action from the regulatory community, and prioritized events related to international regulatory cooperation. Agency officials said that commitment of resources is an indicator of top-level support. Agencies also said that active participation by agency leadership with foreign counterparts can expedite and facilitate progress at key points. FDA officials said that, in their experience, when the heads of agencies have an ongoing active relationship with their counterparts in foreign countries, international regulatory cooperation is more likely to produce results. Agencies told us that it can be challenging when leadership priorities change, such as when a new administration establishes different priorities, because international regulatory cooperation activities are long-term efforts. Shifting political priorities can lead to short-term commitments that can make it difficult for agencies to see projects through to the end. Officials said that agencies need high-level commitment, but if it wanes agencies can be left partway into a long-term project. In our September 2012 report, we concluded that, given the importance of leadership to collaborative efforts, transitions and inconsistent leadership can weaken the effectiveness of any collaborative mechanism.

Scientific and technical exchanges. Sharing scientific and technical information facilitates international regulatory cooperation and includes coordination on testing, enforcement, and compliance issues but, as explained later, can also be restricted by statutory authority. The FTC provides technical assistance to other countries in developing their regulatory policies.
When countries disagree on the appropriate policy or standards, they can sometimes find agreement on the underlying scientific and technical basis for regulations. According to FDA officials, the regulations for medical products are more science based, while those for food are more culture based, so FDA has more success with international coordination on medical products. Collaboration and sharing of data can lay the groundwork for future coordination. An independent advisory agency developed a report stating that mutual trust between regulators creates an opportunity for work sharing: because agencies do not have to duplicate tests or scientific work, they can share their workload with foreign counterparts and move limited inspectors or transfer other resources to areas of greater need. However, some statutes may restrict scientific and technical exchanges because of limits on the disclosure of information to foreign counterparts, as discussed further in the section on statutory authority.

Stakeholder involvement. Agencies we interviewed identified coordination with nonfederal stakeholders, such as industry groups, academic experts, and consumer groups, as a facilitator of international regulatory cooperation. An FDA official said that nonfederal stakeholders may be uniquely positioned to identify unnecessary differences in regulations and standards between countries and help agencies prioritize which differences would be most meaningful to address from their perspective. For example, FTC officials said that in developing the work products of the International Competition Network (ICN), a significant number of business users and nongovernmental advisors bring attention to issues, provide outside perspectives, help produce work products, and encourage implementation, even though government agencies are the members that ultimately accept the work by consensus.
Some agency officials and nonfederal stakeholders reported challenges to stakeholder involvement. Regulatory cooperation can be more difficult when nonfederal stakeholders have conflicting viewpoints about regulations. For example, USDA officials said there can be challenges when consumer advocacy groups and business advocacy groups have different views that lead to lawsuits to prevent international regulatory alignment. USDA officials said that support for a U.S.-Canada pilot project for meat inspection was divided between businesses that supported it and consumer groups that did not. In addition, one industry group found that some regulatory agencies were unwilling to actively engage foreign counterparts and U.S. industries to discuss U.S. regulatory requirements that are adopted by other countries. A different industry representative said that a regulatory agency he works with independently created a division dedicated to international telecom issues to work with foreign counterparts and developed a modular approval, which gives industry more flexibility and shortens the time for product approvals. In addition, a consumer advocacy stakeholder said that it would be helpful to set government-wide policies and definitions through a notice and comment period. For example, federal agencies do not employ the same definition of “equivalency,” and it would be helpful if there were a specific government-wide policy stating that the result of international regulatory cooperation cannot lower domestic standards.

Statutory authority. Agencies we interviewed said that statutory authority may facilitate or limit their international regulatory cooperation activities. For example, DOT PHMSA officials said that statutory authority may mandate agency participation in international standards organizations.
An industry stakeholder said it would facilitate cooperation if the underlying statutory authorities of agencies clearly permitted them to engage in trade activities. However, when statutes are prescriptive regarding domestic or rulemaking requirements, they can limit agencies’ ability to make changes to regulations that align with a foreign trading partner. For example, agency officials said that statutes mandating use of specific technologies can remove the flexibility to coordinate with foreign counterparts. EPA officials also said that, in many instances, the Clean Air Act requirements may limit the degree to which domestic regulations can be altered to accommodate or conform to foreign or international standards or approaches. Statutes that mandate completion of rulemakings within short time frames can also limit agencies’ ability to engage in harmonization. For example, CPSC officials said it was challenging to work with other countries to reach consensus when CPSC had been mandated by the Consumer Product Safety Improvement Act of 2008 to issue a large number of regulations in a short time frame, which limited the amount of time they had to work with foreign counterparts. Some agency statutes may limit disclosure of company-specific information to foreign counterparts. This can prevent U.S. agencies from sharing certain reports and scientific information with trusted foreign counterpart agencies. In a previous report, we stated that, although the addition of section 29(f) to the Consumer Product Safety Act was intended to encourage information sharing, CPSC expressed concern that restrictive language in this section hindered its ability to share information. For example, NHTSA officials we interviewed said that they have many research, testing, and enforcement activities that include restrictions on the transfer of information, which has been a barrier to international regulatory cooperation. They said that when a company discovered defects in tires in Germany, the information was not immediately available in the United States to prevent injuries because of an information-sharing restriction. An official from EPA OCSPP said that an important first step to scientific and technical exchanges with foreign counterparts is removing existing legal, regulatory, or policy hurdles that limit or prohibit data sharing between governments. Agency officials also noted that, in addition to the removal of U.S. agency information-sharing restrictions, it is essential that the hurdles that exist in other countries also be removed.

Early and ongoing coordination. Early and ongoing coordination with foreign governments in emerging areas, before regulations are in place, may facilitate international regulatory cooperation. Agency officials we interviewed said early and ongoing efforts are important to maintain progress. OMB officials said it is easier to prevent unnecessary differences than to remove existing differences in regulations. For example, CPSC attends multilateral forecasting sessions with other countries to engage foreign counterparts before the rulemaking and standards-setting process begins. According to agency officials we interviewed, it is more efficient for CPSC to align with other jurisdictions and prevent divergent regulatory approaches before the U.S. notice and comment rulemaking process begins. In another example, State officials we interviewed said there is a need for international regulatory coordination to take place as early as possible, before too many regulations are established in each country. They said there are opportunities to avoid unnecessary differences in regulations for nanotechnology, which can be applied to many types of products. Currently, there are no entrenched regulatory systems that would hinder cooperation on developing new standards.
Industry officials also said that it is important to coordinate on requirements early by reviewing countries’ regulatory differences, because fundamental differences between countries may require changes on an issue-by-issue basis. They also urged early coordination because regulatory agencies in other countries are establishing standards when the manufacturing process has already been developed in the United States, which does not work well for them within today’s markets. One academic representative we interviewed said it is much easier for agencies to coordinate with trading partners on new regulations than on existing regulations. According to agency officials we spoke with, early and ongoing coordination with foreign counterparts also can identify issues that are not ready for international regulatory cooperation. Officials said it is important to coordinate early with their foreign counterparts when there are differences in the openness of the United States’ and other countries’ rulemaking processes. Officials noted that, while other countries have the opportunity to comment whenever a U.S. regulation is proposed, U.S. agencies and nonfederal stakeholders may not have similar opportunities to comment on foreign regulations. With trade expanding and regulatory challenges growing, in recent years the President and U.S. agencies have undertaken multiple initiatives to focus attention on the importance of international regulatory cooperation. While the executive order on promoting international regulatory cooperation focuses on reducing trade barriers by reducing unnecessary differences in regulations with U.S. trading partners, we found in our review that U.S. agencies carry out numerous and diverse international regulatory cooperation activities to improve the effectiveness of regulations, gain efficiencies, and avoid duplicating work. The examples agencies shared with us show that their efforts often achieve both trade and regulatory efficiency goals.
Ultimately it is clear that international regulatory cooperation requires interagency coordination. No one U.S. agency has the expertise or processes to effectively conduct these activities. Not only must regulatory agencies collaborate with other U.S. agencies, but they need to effectively collaborate with their foreign counterparts and affected nonfederal stakeholders. Overall coordination of international regulatory cooperation activities is now handled by discrete processes with somewhat different focuses. U.S. regulatory agencies focus primarily on their missions to protect public health and safety and the environment, while USTR and Commerce, among others, focus on trade. Therefore, it is important for the U.S. government to effectively coordinate these interagency activities. Our work at agencies engaged in regulatory cooperation efforts shows there are opportunities to augment existing guidance and mechanisms that could further promote and improve international regulatory outcomes. For example, U.S. regulatory agency officials emphasized the benefits of sharing information on lessons learned and best practices with their peers. However, they believe the current processes are designed for top-level collaboration and do not sufficiently address the day-to-day implementation of international regulatory cooperation. U.S. agencies and nonfederal stakeholders also noted the importance of stakeholder input in the success of international regulatory cooperation. Yet it is challenging for stakeholders to stay apprised of agencies’ activities and therefore provide input to agencies. Key next steps could focus on identifying tools to measure outcomes as well as to document savings from more efficient use of government resources. In an environment of constrained resources it is even more important for agencies to share knowledge on the effective implementation of international regulatory cooperation. To ensure that U.S.
agencies have the necessary tools and guidance for effectively implementing international regulatory cooperation, we recommend that the Regulatory Working Group, as part of forthcoming guidance on implementing Executive Order 13609, take the following action: Establish one or more mechanisms, such as a forum or working group, to facilitate staff-level collaboration on international regulatory cooperation issues and include independent regulatory agencies. We provided a draft of this report to Commerce, CPSC, DOE, DOT, EPA, FTC, HHS, OMB, State, USDA, and USTR for their review and comment. We received written comments on the draft report from DOE and CPSC, in which they agreed with the recommendation to the RWG. Their comments are reprinted in appendixes III and IV. In an email received on July 30, 2013, the Deputy General Counsel, Office of Management and Budget, stated that OMB had no comments on the recommendation in this report. However, OMB provided technical comments, which we incorporated as appropriate. Commerce, CPSC, DOE, FTC, HHS, State, USDA, and USTR also provided technical comments, which we incorporated as appropriate. We are sending copies of this report to OMB (which chairs the RWG), Commerce, CPSC, DOE, DOT, EPA, FTC, HHS, State, USDA, USTR, and other interested parties. In addition, the report will be available at no charge on GAO’s website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-6806 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV.
Our objectives were to (1) provide an overview of regulatory agencies’ international cooperation activities, (2) examine ways that agencies incorporate outcomes from international regulatory cooperation activities and consider competitiveness during rulemaking, and (3) examine factors identified by agencies and nonfederal stakeholders that act as facilitators or barriers to international regulatory cooperation and considering competitiveness. To address these objectives, we selected seven U.S. regulatory agencies, out of 60 U.S. agencies included in the Unified Agenda of Federal Regulatory and Deregulatory Actions (Unified Agenda), that issued regulations with international impacts, as well as four U.S. agencies with government-wide international coordination responsibilities. Based on several sources, we identified likely regulatory agencies that issue regulations related to international trade. For example, we reviewed the 2010 and 2011 Unified Agenda and data from the 2011 World Trade Organization (WTO) Technical Barriers to Trade (TBT) Information Management System. We also reviewed all major regulations from 2011. We categorized the regulations with an international impact into regulatory subject areas such as product safety, environmental, energy, transportation of products, food, medical devices, drugs, and aviation. We categorized the regulations in order to select groups of regulations that affect global trade in products. In addition, we excluded categories from our scope, such as taxation/taxes, patents, arms trade, international waters, and trade agreements. We also tested the databases used in agency selection by reviewing related documentation, interviewing knowledgeable agency officials, and tracing a sample of entries to source documents. We concluded the data were sufficiently reliable for the purposes of this report. We also considered recommendations from federal agency officials in selecting regulatory agencies.
From these varied efforts, we selected for our review the Department of Energy (DOE), Food and Drug Administration (FDA), Department of Transportation (DOT), Environmental Protection Agency (EPA), and Department of Agriculture (USDA), as well as two independent regulatory agencies: the Consumer Product Safety Commission (CPSC) and the Federal Trade Commission (FTC). These views are not generalizable to all U.S. agencies. Based on our background research and suggestions from federal agencies, we selected four agencies with government-wide international coordination responsibilities: Office of Management and Budget (OMB), Office of the United States Trade Representative (USTR), Department of Commerce (Commerce), and Department of State (State). Furthermore, using criteria based on our September 2012 report on interagency collaborative efforts, we compared agencies’ documents and testimonial evidence about their international regulatory cooperation activities to the seven key features that we found agencies should consider when implementing collaborative mechanisms, to corroborate the agencies’ findings. To obtain viewpoints outside of government, we chose 11 U.S. nonfederal stakeholders, consisting of academics, organizations representing businesses, consumer advocacy groups, standards-setting organizations, and industry representatives, based on their recent reports or comments they made on international regulatory cooperation. We originally selected one of each type of nonfederal stakeholder group based on published views on international regulatory cooperation and recommendations from agencies in our study, but added more nonfederal stakeholders to represent a diverse range of perspectives, including business promotion, consumer advocacy, and neutral parties. These views are not generalizable but provided insights on international regulatory cooperation.
For the federal agencies and nonfederal stakeholders chosen for this engagement, we conducted interviews and gathered documentation, such as concrete examples, facilitators, barriers, goals, outcomes, and stakeholder involvement related to international regulatory cooperation activities, rulemaking, and global competitiveness. We used this documentary and testimonial evidence to identify government-wide and agency-specific requirements related to rulemaking outcomes for international regulatory cooperation and global competitiveness and determined how these selected agencies consider related issues. After analyzing our evidence for common themes and patterns, we developed a summary document of factors that are facilitators or barriers to international regulatory cooperation and held two meetings for agency officials to reflect upon the meaning of the factors and confirm their importance. We summarized information gathered at these group meetings to better describe the agencies’ perspectives. Throughout this report, we use specific, selected examples to illustrate agency processes and practices. The scope of our inquiry was not comprehensive, generalizable, or designed to be a complete catalog of international regulatory activities. We conducted this performance audit from March 2012 to August 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Agencies provided us with examples of their international regulatory cooperation activities. The examples below illustrate the types of activities that agencies engage in to fulfill their regulatory missions and are not meant to be a comprehensive catalog of agency activities in this area.
Agencies share information with their foreign counterparts on scientific data and regulatory approaches. Agency: Environmental Protection Agency’s (EPA) Office of Chemical Safety and Pollution Prevention (OCSPP) Description: OCSPP shares information with the North American Free Trade Agreement (NAFTA) partners and the international organization Codex on its Pesticide Tolerance Crop Grouping Revisions Program. EPA regulates pesticides by setting limits on the amount of pesticides that remain in or on foods marketed in the United States under the Federal Food, Drug, and Cosmetic Act. The Pesticide Tolerance Crop Grouping Revisions Program enables the establishment of tolerances for a group of crops based on residue data for certain crops that are representative of the group. Representatives of a crop group or subgroup are those crops whose residue data can be used to establish a tolerance on the entire crop group or subgroup. The project involves several interrelated multiyear efforts, including (1) one with NAFTA partners in Canada and Mexico to revise the existing crop groups in EPA’s regulations (40 CFR 180.41) to add new crops and create new groups and subgroups; and (2) one in which NAFTA partners are working with international stakeholders to modify the Codex crop groups, to support global trade and the use of data extrapolation. Petitions to revise the NAFTA crop grouping regulations are developed by the International Crop Grouping Consulting Committee, a group of more than 180 crop, agrichemical, and regulatory experts representing more than 30 countries and organizations. NAFTA partners also are working cooperatively with international stakeholders to revise the Codex system of classification of foods and animal feeds and to revise the Codex crop groups. Involvement by NAFTA member countries in the Codex process should help standardize commodity terminology and crop groupings within the global context. 
Outcomes: Approved revisions to crop group regulations are formalized in the United States through rulemaking. EPA is currently working on its fourth crop grouping proposed regulation. Crop groupings also facilitate international trade, including the market for pesticide products and the crops treated. Pesticides with established tolerances in the United States can be sold for use on crops grown in other countries that intend to export those crops to the United States. Crops imported into the United States with pesticide residues that do not have an established U.S. tolerance are subject to enforcement action. Agency: EPA’s Office of Chemical Safety and Pollution Prevention (OCSPP) Description: The United States has participated in the OECD Joint Meeting of the Chemicals Committee and Working Party on Chemicals, Pesticides and Biotechnology, an organization with over 30 member countries, for more than 30 years. Specific information sharing activities include: OECD eChem Portal: OCSPP shares information on industrial chemicals and various data systems. The OECD eChem Portal allows simultaneous searching of reports and datasets by chemical name and number and by chemical property. The portal provides direct links to collections of chemical hazard and risk information prepared for government chemical review programs at the national, regional, and international levels. The portal also provides, when available, classification results according to national/regional hazard classification schemes or to the Globally Harmonized System of Classification and Labeling of Chemicals. OECD (Quantitative) Structure-Activity Relationships Toolbox: (Q)SARs are methods for estimating properties of a chemical from its molecular structure. The toolbox is a software application for governments, the chemical industry, and other nonfederal stakeholders to fill gaps in (eco)toxicity data needed for assessing the hazards of chemicals.
Outcomes: According to EPA, these tools and approaches reduce compliance costs for nonfederal stakeholders, facilitate work sharing for regulators, and help avoid costly, duplicative testing by ensuring that the data developed and submitted in one country can be used by other countries in reaching their regulatory decisions. These activities do not directly result in rulemakings, but can inform rulemaking activities.

Agency: Consumer Product Safety Commission (CPSC)

Description: CPSC participates in an international pilot alignment initiative (PAI) with staff from the central consumer product safety authorities of Australia, Canada, the European Union, and the United States. This ad hoc group is not aligned formally with any existing multilateral forum. The participants are to seek consensus positions on the hazards to children and their potential solutions for three products: corded window coverings, chair-top booster seats, and baby slings. The goal of this initiative is to bring about effective, aligned safety requirements for these products to reduce injuries and save lives. The consensus positions could be considered and developed for implementation in each jurisdiction, according to the jurisdiction’s preferred model, whether through regulation or voluntary standards. Officials said that the consensus papers for baby slings and chair-top booster seats are in progress. According to CPSC officials, the PAI jurisdictions worked for 18 months to reach consensus positions on corded window coverings, but the project fell short of CPSC’s expectations. Officials said that the technical teams from five jurisdictions agreed in principle that “no exposed cords” was the best solution to the strangulation hazard, but the European Commission had already publicly expressed an opposing position regarding the elimination of cords.
CPSC officials said that when the PAI work began, the European Commission had already moved into policy development and soon thereafter issued a mandate to the European Committee on Standardization explicitly permitting safety devices to keep exposed cords out of reach of children. As a result, the consensus paper recognized “no exposed cords” as the best solution but did not call for their elimination as a consensus approach.

Outcomes: According to CPSC officials, the PAI can result in similar product safety requirements at a high level of safety among the jurisdictions participating in the initiative.

Agencies participate in international standards-setting bodies and incorporate international standards into rulemaking as appropriate.

Agencies: Department of Transportation’s (DOT) National Highway Traffic Safety Administration (NHTSA) and Environmental Protection Agency’s (EPA) Office of Air and Radiation (OAR)

Description: WP.29 (the World Forum for Harmonization of Vehicle Regulations) is a permanent working party created more than 50 years ago in the United Nations (UN) that administers three international agreements on motor vehicles: (1) the 1958 Agreement concerning the adoption of uniform technical prescriptions for wheeled vehicles, equipment, and parts which can be fitted and/or be used on wheeled vehicles and the conditions for reciprocal recognition of approvals granted on the basis of these prescriptions; (2) the 1997 Agreement concerning the adoption of uniform conditions for periodical technical inspections of wheeled vehicles and the reciprocal recognition of such inspections; and (3) the 1998 Agreement concerning the establishing of global technical regulations for wheeled vehicles, equipment, and parts which can be fitted and/or be used on wheeled vehicles. WP.29 develops Global Technical Regulations that are used in member countries’ regulations and works as a global forum allowing open discussions on motor vehicle regulations.
NHTSA and OAR participate in the development of global technical regulations. Nongovernmental organizations may also participate in a consultative capacity in WP.29 or in its working groups.

Outcomes: NHTSA officials said WP.29 participation contributes to safety in the United States because NHTSA leverages research with other countries. Global Technical Regulations increase alignment between countries. As a result, manufacturers have fewer country-specific regulations to comply with when participating in foreign markets. NHTSA uses Global Technical Regulations in rulemaking. For example, NHTSA issued a final rule in August 2012 on motorcycle brake systems safety standards to add and update requirements and test procedures and to harmonize standards with a global technical regulation for motorcycle brakes. OAR officials said that OAR participated in an effort that focused on test procedures for off-highway construction vehicle engines. According to officials, this effort was undertaken after the completion of a domestic regulation. U.S. manufacturers supported using the U.S. regulation as the basis of the Global Technical Regulation because U.S. manufacturers sell equipment internationally, and complying with one set of regulations reduces their fixed costs. Over 5 years, OAR successfully worked within WP.29 to make the U.S. regulation the basis of the WP.29 Global Technical Regulations. As a result, it has become the de facto standard around the world.

Agency: Department of Transportation’s (DOT) Pipeline and Hazardous Materials Safety Administration (PHMSA)

Description: PHMSA participates in the UN Subcommittee of Experts on the Transport of Dangerous Goods (TDG Subcommittee), which, according to PHMSA, is facilitated by two treaties: the Chicago Convention on International Civil Aviation and the International Convention for the Safety of Life at Sea. Officials said the TDG Subcommittee was established because there was a need for international coordination on the transport of dangerous goods.
Participants in the TDG Subcommittee include 29 countries with voting status and numerous countries and nongovernmental organizations with observer status. The TDG Subcommittee reviews proposals from voting member countries and observers in relation to amendments to the UN Model Regulations and issues relevant to its work program. PHMSA represents the United States at these meetings and formulates U.S. positions based on feedback from U.S. industry, the public, and other government agencies. PHMSA ensures coordination on U.S. positions, taking into account the interests of the DOT administrations and other government agencies. PHMSA’s staff provides the technical support and resources to ensure that the positions taken are sound and justified based on pertinent data, technical analyses, and safety rationales.

Outcomes: PHMSA considers the standards developed by the TDG Subcommittee in a rulemaking every 2 years in an effort to harmonize with international changes. For example, in January 2013, PHMSA issued a final regulation on harmonization with international standards for hazardous materials. PHMSA amended the Hazardous Materials Regulations to maintain alignment with international standards by incorporating amendments, including changes to proper shipping names, hazard classes, packing groups, special provisions, packaging authorizations, air transport quantity limitations, and vessel stowage requirements. The resulting cooperation leads to aligned regulations with trading partners, fewer differences in regulations businesses must comply with, and improved safety results (e.g., common labels for hazardous materials). Harmonization of international and domestic standards enhances compliance and improves the efficiency of the transportation system by minimizing regulatory burdens and facilitating oversight.
International harmonization of hazardous materials regulations plays a significant role in enhancing safe transportation through improved regulatory consistency.

Agency: Federal Trade Commission (FTC)

Description: In October 2001, the FTC, the Department of Justice (DOJ), and 13 other antitrust agencies founded the International Competition Network (ICN) to provide a venue for agencies that regulate competition issues worldwide to work on competition issues of mutual interest. The ICN has a broad membership—127 agencies from 111 jurisdictions, which includes most of the world’s competition agencies. The ICN works exclusively on competition issues; develops consensual, nonbinding recommendations and reports to bring about procedural and substantive convergence; and provides a significant role for nongovernmental advisors from the business, legal, consumer, and academic communities, as well as experts from other international organizations. The ICN is organized into working groups composed of agencies and nongovernmental advisors. Current working groups address unilateral conduct, mergers, cartels, agency effectiveness, and competition advocacy. The FTC led the merger working group’s work on notification and procedures, which developed a set of eight guiding principles and 13 recommended practices for merger notification and review.

Outcomes: A major accomplishment of the ICN is that numerous members adopted key aspects of ICN recommended practices, such as those concerning merger thresholds. According to FTC officials, the objective was to enhance the effectiveness of each jurisdiction’s merger review practices and processes and promote procedural convergence, thereby reducing unnecessary private and public costs and burdens associated with merger review. FTC officials said FTC has not done any rulemaking to implement the ICN recommendations because the recommendations are consistent with U.S. approaches to merger notification and review processes.
In some cases, the United States and another country may enter an agreement to recognize each other’s regulations and deem them equivalent.

Agency: U.S. Department of Agriculture’s (USDA) Agricultural Marketing Service (AMS)

Description: AMS manages equivalency agreements for organic food labeling. For example, the United States has an equivalency arrangement with the European Union (EU), generally referred to as the Partnership, under which organic products certified in Europe or the United States may be sold as organic in either region. For retail products, labels or stickers must state the name of the U.S. or EU certifying agent and may use the USDA organic seal or the EU organic logo. Under the Partnership, according to USDA, the EU and the United States agreed to work on a series of technical cooperation initiatives to promote organic production and establish common practices for assessing and recognizing organics programs of third countries.

Outcomes: According to USDA officials, the EU-U.S. organic equivalency arrangement reduces the cost of certification for organic producers and handlers because producers and handlers only need to be certified under one standard (either USDA organic regulations or EU organic regulations) but can now access and sell in both markets. Another outcome is that considering the respective countries’ standards as “equivalent” facilitates international trade of organic products. According to AMS officials, equivalency agreements will result in expanded market access; reduce duplicative requirements and lower certification costs for the trade in organic products; and decrease the burden of administration. The agreements are also expected to open new possibilities for trade. Previously, operations that wanted to trade organic products on both sides of the Atlantic had to obtain separate certifications to meet both standards, which meant a second set of fees, inspections, and paperwork.
Additionally, in most cases, the Partnership will provide exporters the opportunity to serve both the U.S. and EU markets from a single inventory of organic products.

Agency: USDA Food Safety and Inspection Service (FSIS)

Description: Food safety equivalency evaluations are based on provisions in the Agreement on the Application of Sanitary and Phytosanitary Measures, which appears in the Final Act of the Uruguay Round of Multilateral Trade Negotiations, signed in Marrakech April 15, 1994. Under the agreement, World Trade Organization (WTO) member countries shall accord acceptance to the sanitary and phytosanitary measures of other countries (even if those measures differ from their own or from those used by other member countries trading in the same product) if the exporting country demonstrates to the importing country that its measures achieve the importer’s appropriate level of sanitary and phytosanitary protection. FSIS makes determinations of equivalence by evaluating whether foreign food regulatory systems meet the level of protection provided by the U.S. domestic system. FSIS evaluates foreign food regulatory systems for equivalence through document reviews, on-site audits, and port-of-entry re-inspection of products at the time of importation. FSIS regulations list 46 countries as eligible to export meat, 9 countries as eligible to export poultry, and 2 countries as eligible to export egg products to the United States.

Outcomes: According to FSIS officials, the equivalency determination program has several benefits. One benefit is that the equivalence process requires communication and participation by U.S. regulators with the regulators in the country seeking (or already having) equivalence, which usually leads to positive relationships between the two countries and other intangible benefits. Another benefit to U.S.
businesses is that it gives them more markets from which to obtain raw materials, finished products, or both, which provides for potential cost savings through the use of these additional choices for eventual sale to U.S. and other consumers. U.S. consumers benefit because countries determined to be equivalent are providing meat, poultry, and egg products that are as safe as domestic products because the products meet U.S. appropriate levels of protection. These additional products may also be less expensive than products produced with U.S.-sourced ingredients.

Most agencies in our study provide technical assistance to developing countries. Agency officials said they work with countries to strengthen their regulatory systems, among other reasons, to improve the safety of products imported into the United States.

Agency: Food and Drug Administration (FDA)

Description: FDA undertakes activities to improve the capacity of governments to manage, assess, and regulate products within increasingly complex supply chains. According to FDA officials, FDA works to strengthen the global regulatory system and is a source of expertise that engages in global dialogue and initiatives with regulatory counterparts, development agencies, and global health partners. FDA is developing an operating model that relies on building a global safety net using four principles: global coalitions, global data systems, enhanced risk analysis capacities, and leveraging the efforts of public and private third parties. FDA’s Global Engagement Report outlines how FDA supports and collaborates with regulatory systems around the globe. While neither mandated nor funded as an international development or training organization, FDA works with bilateral and multilateral partners, domestically and internationally, to strengthen regulatory systems capacities and competencies in various parts of the world in an effort to ensure products that will be imported into the United States will be made safer and supply lines more secure.
Examples of some of FDA’s efforts include development of information-sharing platforms and the provision of evidence tools and expertise that contribute to strengthening regulatory systems. In response to Section 305 of the FDA Food Safety Modernization Act (FSMA), the FDA developed an international food-safety capacity-building plan. The plan establishes a strategic framework for the FDA and presents an approach based on prioritizing risks to U.S. consumers. It focuses on addressing weaknesses in a food safety system in partnership with foreign governments, industry counterparts, and other stakeholders. FDA supported the World Health Organization (WHO) in developing a global monitoring and surveillance system for substandard, falsified, and counterfeit medical products. The system was piloted in 10 countries over 3 months in 2012. This system will be scaled up globally in the coming year. FDA is actively involved in efforts to strengthen regulatory capacity through its joint efforts with the World Bank, the WHO, the Gates Foundation, Asia-Pacific Economic Cooperation (APEC), the African Union, and others in the private and public sectors. By bringing its regulatory and scientific expertise to these efforts, FDA can better leverage the expertise of its partners to engage more efficiently and broadly in enhancing regulatory capacity globally. Examples of such initiatives include the World Bank/APEC initiative on food-safety capacity building and the World Bank/Gates Foundation/WHO/African Union efforts to enhance and rationalize regional regulatory capacity in various African economic communities, starting with the East African community.

Agency: Federal Trade Commission (FTC)

Description: According to FTC officials, the FTC, in coordination with, among others, USAID, the U.S. Trade and Development Agency, and the Department of Commerce, establishes relationships with developing countries and provides technical assistance.
FTC helps countries develop and enhance their regulatory frameworks by encouraging convergence with international standards. FTC’s technical assistance program helps explain how competition, truthful advertising and marketing, and sensible privacy frameworks advance economic efficiency, consumer welfare, and consumer choice. To this end, FTC assists developing countries in their transition to market-based economies and their development of competition and consumer protection agencies, and shares approaches to enforcement that are consistent with this goal. As part of its efforts, the agency routinely provides input to its foreign counterparts about the drafting and adopting of domestic legislative frameworks regarding competition, consumer protection, and privacy. FTC also works to build the capacity of its foreign counterparts to implement these frameworks and promote their proper enforcement.

Agency: USDA’s Animal and Plant Health Inspection Service (APHIS)

Description: APHIS participates in international regulatory capacity building to help other regulatory entities meet U.S. standards and protect health. Officials said APHIS actively builds international partnerships and meets with foreign regulatory officials bilaterally and multilaterally. For example, APHIS runs six to seven courses a year where it invites foreign officials to the United States for training on U.S. processes. APHIS officials said that APHIS annually trains 100 to 150 individuals from other countries. The officials said these trainings provide education and resources to foreign counterparts and build a network of individuals to support U.S. efforts worldwide and help other countries comply with U.S. regulations. APHIS also participates in multilateral capacity building on sanitary and phytosanitary (SPS) measures. Officials said, under the SPS agreement, there is a responsibility to work with developing countries, and APHIS has officials located overseas who informally work with partners on a daily basis. APHIS also has formal training programs overseas and in the United States.
Agencies work with foreign counterparts on projects to share resources to implement regulations and avoid duplicating efforts.

Agency: Food and Drug Administration (FDA)

Description: FDA partners with foreign counterparts to coordinate on inspection activities. Foreign counterparts include:

European Medicines Agency (EMA): Significant opportunities exist for FDA and EMA to leverage their inspection resources, and they are exploring this potential through a series of activities. They observed each other’s inspections and jointly inspected manufacturing sites in the United States and the European Union (EU). Through this work, FDA and EMA are building a foundation for understanding, trust, and data-driven decisions in the area of inspections.

EMA and Australia’s Therapeutic Goods Administration: In 2009, FDA joined the EMA and Australia’s Therapeutic Goods Administration to conduct a pilot program—the Active Pharmaceutical Ingredient Inspection Pilot—to demonstrate the potential for leveraging their inspection resources. Before the pilot, these agencies had been conducting separate inspections at the same overseas manufacturing sites—often within just months of one another—to assure that the safety and quality of the drugs were not jeopardized by poor manufacturing practices. Under the pilot, the three agencies planned and conducted joint inspections at participating foreign facilities and shared information from inspections they had conducted over the past 2 to 3 years. These exchanges have allowed FDA to redeploy inspection resources and alerted FDA to sites requiring heightened scrutiny. Since then, FDA has engaged in similar projects with additional counterparts.

Health Canada: FDA also works with Canada on third-party inspections/audits. To enable closer regulatory cooperation, FDA and Health Canada (HC) initiated the Pilot Multi-purpose Audit Program in 2006.
The pilot explored the potential benefits to medical device manufacturers and the agencies of using a single third party for inspection audits to simultaneously meet FDA and HC regulatory requirements for quality systems. It was anticipated that a multipurpose audit could reduce the overall time spent on site by an official agency audit/inspection team, thus reducing the regulatory burden for industry. FDA and HC conducted 11 joint audit/inspections under the pilot; 10 of these were assessed for program benefits. The results showed that the joint approach reduced the time-in-facility spent at participating manufacturers by about one-third, on average, compared with the estimated time required for separate FDA and HC audits/inspections. In addition, FDA and HC gained a better understanding of their auditing/inspection approaches, providing a foundation for leveraging inspection resources in the future.

New Zealand’s Ministry for Primary Industries: In December 2012, FDA signed an international arrangement with New Zealand’s Ministry for Primary Industries recognizing each other’s food safety systems as providing comparable degrees of food safety assurance. This arrangement was reached after a significant amount of time was spent by both parties working on regulatory systems recognition assessments. Systems recognition involves reviewing a foreign country’s food safety regulatory system to determine if it provides a similar set of protections to that of FDA and that the food safety authority provides similar oversight and monitoring activities for food produced under its jurisdiction. Outcomes of these reviews may be used by FDA to make risk-based decisions regarding foreign inspections, admitting product into the U.S., and follow-up actions when food safety incidents occur.

Outcomes: Coordinated inspections allow FDA to leverage resources with its foreign counterparts to fulfill its regulatory responsibilities.
Agency: USDA’s Animal and Plant Health Inspection Service (APHIS)

Description: As part of the United States-Canada Beyond the Border Initiative, APHIS and Canada conducted a joint site visit in Colombia for a foot and mouth disease evaluation and produced a joint report as part of the evaluation of Colombia’s request to export fresh beef in October 2011. The United States and Canada are developing procedures for conducting future joint site visits and the exchange of information related to animal health evaluations. APHIS and Canada will also be identifying other opportunities to share evaluation results.

Outcomes: According to APHIS officials, outcomes could involve the United States and Canada developing risk evaluations that are based in part on a joint site visit.

Agencies cooperate with foreign counterparts on voluntary programs that are not part of agencies’ regulations.

Agency: Department of Energy (DOE)

Description: DOE’s international coordination on solid state lighting is done in large part through the International Energy Agency (IEA) Efficient Electrical End-Use Equipment (4E) Implementing Agreement, which was launched in 2008 and undertakes a range of analytical and information gathering and dissemination activities related to government regulation and labeling of appliances and equipment. The IEA was established under the Agreement on an International Energy Program. Thirteen countries from the Asia-Pacific, Europe, North America, and Africa have joined together under the forum of 4E to share information and transfer experience to support good policy development in the field of energy efficient appliances and equipment. 4E also initiates projects designed to meet the policy needs of participants, enabling better informed policy making.
Officials said they worked with the 4E Annex on Solid State Lighting for several years on performance characteristics and testing procedures, during which time they developed a network of laboratories that would perform independent testing that could be voluntarily adopted by foreign governments. Solid state (or LED) lighting is a new technology that has cost and performance characteristics that are developing rapidly. The goal of the annex is to develop simple tools to help government and consumers worldwide identify which solid state lighting products have the necessary efficiencies and quality levels to reduce the amount of energy currently consumed by artificial lighting. DOE is working with other countries to identify efficiency and performance criteria and metrics, test methods, and qualified testing laboratories that might be used in product labeling or standards activities related to these products.

Outcomes: According to DOE officials, this coordination is important because the adoption of performance standards and test procedures will help determine the products that can be marketed and sold around the world. They said without a common agreement on key characteristics for this new technology, it would be difficult for products to enter the world market. Standard labeling helps customers understand the product they are buying and how its efficiency compares with other products. The results of cooperation on solid state lighting will not necessarily be reflected in DOE’s regulations. DOE does not regulate this product at this time, although it has proposed a test procedure that might be used to support the Energy Star program or other initiatives.

In addition to the contact named above, Tim Bober (Assistant Director), Claude Adrien, Melissa Emrey-Arras, Lynn Cothern, Kim Frankena, Joseph Fread, Debra Johnson, Barbara Lancaster, Andrea Levine, Grace Lui, Susan Offutt, and Cynthia Saunders made key contributions to this report.
|
Trade has increased as a share of the economy for several years, but U.S. companies can face difficulties competing in foreign markets when countries apply different regulatory requirements to address similar health, safety, or other issues. GAO was asked to examine what U.S. agencies are doing to engage in international regulatory cooperation. This report (1) provides an overview of U.S. regulatory agencies' international cooperation activities; (2) examines ways that U.S. agencies incorporate outcomes from international regulatory cooperation activities and consider competitiveness during rulemaking; and (3) examines factors identified by U.S. agencies and stakeholders that act as facilitators or barriers to international regulatory cooperation. GAO analyzed documents and interviewed officials from seven U.S. agencies that regulate products traded internationally and four U.S. agencies with government-wide roles and responsibilities. GAO also interviewed officials from 11 organizations representing business and consumer advocacy perspectives that reported or publicly commented on international regulatory cooperation. The scope of this study is not intended to be a complete catalog of agencies' activities and is not generalizable to all entities that have interests in this area.

All seven U.S. regulatory agencies that GAO contacted reported engaging in a range of international regulatory cooperation activities to fulfill their missions. These activities include the United States and its trading partners developing and using international standards, recognizing each other's regulations as equivalent, and sharing scientific data. U.S. agency officials GAO interviewed said they cooperate with foreign counterparts because many products they regulate originate overseas and because they may gain efficiencies--for example, by sharing resources or avoiding duplicative work. Cooperation can both address existing regulatory differences and help avoid future ones.
Officials also explained how cooperative efforts enhance public health and safety, facilitate trade, and support the competitiveness of U.S. businesses. Several U.S. interagency processes require or enable interagency collaboration on international cooperation activities. The Regulatory Working Group (RWG), chaired by the Office of Management and Budget (OMB), and the Trade Policy Staff Committee (TPSC) are forums that have different responsibilities related to the regulatory and trade aspects of international regulatory cooperation. U.S. regulatory agency officials said the current processes could benefit from better information sharing among agencies on the implementation of international cooperation activities and lessons learned. Without enhancements to current forums, opportunities to share practices and improve outcomes could be missed. Executive Order 13609, issued in May 2012, tasked the RWG with enhancing coordination and issuing guidance on international regulatory cooperation, which the RWG is developing. Nonfederal stakeholders GAO interviewed reported challenges to providing input on U.S. agencies' international regulatory cooperation activities, in particular that they are not always aware of many of these activities and that participation can be resource intensive.

Officials GAO interviewed said the outcomes from international regulatory cooperation inform all phases of the rulemaking process, from helping an agency decide whether to regulate to implementing and enforcing regulations. U.S. agencies are not required to conduct a separate analysis of the competitiveness impacts on U.S. businesses when developing regulations. However, five of the seven U.S. agencies told GAO they do consider competitiveness. Officials we interviewed also pointed out that any analysis of impacts may not rise to the level of inclusion in the rulemaking record. In addition, U.S. agencies' use of international standards in regulations can lower costs for U.S. businesses and reduce barriers to trade. Officials from all of the U.S. agencies GAO interviewed said they consider international standards during rulemaking partly in response to requirements in trade agreements, U.S. statutes, and executive orders.

Officials from all of the U.S. agencies GAO interviewed identified seven key factors that affect the success of international regulatory cooperation activities: (1) dedicated resources, (2) established processes, (3) high-level leadership, (4) scientific and technical exchanges, (5) stakeholder involvement, (6) statutory authority, and (7) early and ongoing coordination. When present, these factors can facilitate U.S. agencies' efforts, but they can also act as barriers when absent. GAO found that these factors also reflect the seven key features for implementing collaborative mechanisms previously identified in its September 2012 report on interagency collaboration. GAO recommends the RWG include in forthcoming guidance on Executive Order 13609 tools to enhance collaboration, such as mechanisms to facilitate staff-level dialogues. OMB did not have comments on the recommendation.
|
Increased demand for consumer goods worldwide has intensified competition between private express carriers and postal services for providing international parcel delivery services. Private carriers have expressed long-standing concerns about trade barriers, such as foreign customs clearance requirements that they say hinder their ability to provide cost-effective and timely delivery of parcels. More recently, the carriers have raised competitive concerns about the U.S. Postal Service’s (USPS) Global Package Link (GPL) service, which USPS established in 1995 to provide mailers of catalog merchandise, such as apparel, with an economical and simplified means of shipping goods internationally. In the summer of 1997, representatives from Federal Express Corporation (FedEx) and United Parcel Service (UPS) raised concerns to Congress about GPL, alleging that USPS had used its governmental status with foreign governments to give GPL parcels preferential treatment. In particular, the carriers indicated that GPL parcels received reduced customs fees and faster customs clearance than private express parcels. USPS officials replied that GPL service was designed to provide direct marketers with an economical and simplified means of shipping goods internationally, particularly with respect to the automation of customs information. USPS indicated that it had not made any special arrangements with foreign governments to give GPL parcels preferential customs treatment over private express parcels.

This report responds to a request from the Chairman of the Subcommittee on the Postal Service, House Committee on Government Reform and Oversight, that we review whether differences existed in the customs treatment for GPL and private express carrier parcels sent to Canada, Japan, and the United Kingdom—the three countries where USPS was primarily providing GPL service in 1997.
GPL was designed as a bulk delivery service that would make it easier and more economical for companies to ship parcels containing merchandise internationally. During 1997, GPL customers were primarily direct marketers—mailers of catalog merchandise. First introduced under the name International Package Consignment Service (IPCS) to Japan in 1995, and renamed Global Package Link in 1997, the service is now available for parcel shipments to 10 countries. However, in fiscal year 1997, GPL was operating primarily in only three countries—Canada, Japan, and the United Kingdom. Responsibility for implementing GPL and other international mail services lies with the USPS’ International Business Unit (IBU). According to USPS, IBU was started in 1995 with the vision of becoming within the next few years—and no later than 2005—the “leading global supplier of direct marketing and package delivery services and related business transactions to business customers worldwide.” According to IBU officials, GPL destinations were selected after customers expressed an interest in shipping there or USPS decided that certain shipping opportunities existed. Although GPL currently operates only as an outbound delivery service for U.S. companies, USPS also plans to offer inbound services to foreign companies in GPL countries that want to ship products to the United States. GPL is one of several international mailing services offered by USPS. In fiscal year 1997, USPS shipped about 2 million parcels via GPL service, almost all of which were shipped to Japan. GPL parcels represented less than 1 percent of USPS’ total outgoing international mail volume of almost 1 billion pieces in fiscal year 1997. GPL gross revenues for fiscal year 1997 were $33.5 million, an increase of about 13.5 percent over fiscal year 1996, when GPL generated $29.5 million in gross revenue. 
The number of GPL parcels being sent to different countries may be affected by several key factors, including currency fluctuations and cultural preferences for U.S. goods. Exports of U.S. goods sold by direct marketers to Japan, for example, increased substantially in recent years but have since leveled off as the U.S. dollar has strengthened against the Japanese yen, according to the American Chamber of Commerce in Japan. The U.S. Department of Commerce reports that U.S. merchandise represents 80 to 90 percent of the value of total personal imports in Japan. Direct marketers have estimated the value of personal imports in Japan to represent sales of $1 billion to $1.5 billion annually. Catalog requests at direct marketing promotions in Tokyo and Osaka, Japan, indicate that Japanese consumers prefer goods such as apparel; sports/outdoor equipment; videos, cassettes, compact discs, and books; and hobby merchandise. Generally, GPL mailers said that their success in overseas markets depended on their ability to offer unique and high-quality goods at favorable prices, including shipping charges. As shown in figure 1.1, Canada, Japan, and the United Kingdom together account for about one-third of the value of merchandise goods exported from the United States, according to 1996 Commerce Department figures. A new GPL customer is required, among other things, to sign an agreement with USPS that it will (1) mail at least 10,000 parcels a year to 1 or more destination countries; (2) agree to link its information systems with those of USPS, enabling the customer and USPS to generate reciprocal data transmissions concerning the parcels; (3) meet certain shipping preparation requirements; and (4) designate USPS as its carrier of choice for each country to which it sends GPL parcels. 
For new GPL customers, USPS creates an electronic data link between itself and the customer and installs proprietary software known as the Customs Pre-Advisory System (CPAS) to capture shipping data. For new GPL countries, USPS creates an electronic data link with its delivery agents—usually the foreign postal services. USPS may determine the harmonized tariff codes for the mailers, depending upon the destination country’s customs clearance requirements. GPL mailers enter data into CPAS about product origin, description, and value; the system uses these data to generate mailing labels and, in some countries, to calculate applicable duties and taxes. USPS offers different levels of GPL service to Canada, Japan, and the United Kingdom, depending upon customers’ needs regarding delivery speed, parcel tracking, and insurance. Customers are also subject to parcel weight and size limitations, depending on destination. Shipping rates vary by country, and customers are eligible for certain volume discounts. As shown in table 1.1, GPL service includes two to three levels for each of the three countries (e.g., premium, standard, and economy), with parcels generally scheduled for delivery within 2 to 10 business days, depending upon destination; time-definite delivery is not guaranteed to any GPL country. GPL parcels are processed at and exported from USPS’ GPL processing centers located in New York, Chicago, Dallas, Miami, and San Francisco as well as the Air Mail Center in Seattle. GPL parcels sent to Canada are also processed at a USPS facility in Buffalo, NY. According to USPS, it is required by law to use only U.S. commercial airlines for transporting parcels overseas. In 9 of 10 GPL countries, foreign postal services deliver GPL parcels for USPS. Private express carriers have many different types of customers and offer various delivery services both domestically and internationally, depending upon shippers’ needs. 
For the purposes of this review, we focused on private express services involving the shipment of parcels of a size and weight similar to GPL parcels sent to the three countries in our review. Nevertheless, differences may exist between some features of the international delivery services provided by postal and private express carriers, such as time-definite delivery guarantees and door-to-door service, which make an exact comparison impossible. For example, private express services generally provide for guaranteed scheduled delivery within 1 to 4 business days, compared to 2 to 10 business days for delivery of GPL parcels, depending on the destination. Also, private express carriers generally have responsibility for their parcels throughout the international delivery process, but foreign postal services deliver most GPL parcels for USPS in other countries. Differences may also exist in the tracking services available for GPL and private express parcels. Private express carrier officials said that no published data exist on their market shares to Canada, Japan, and the United Kingdom. However, at our request, DHL, FedEx, and UPS provided data on the combined number of parcels that they shipped to Canada, Japan, and the United Kingdom in 1997, excluding documents and freight. USPS provided similar data on GPL parcels shipped to those countries in 1997. DHL, FedEx, and UPS provided data indicating that they sent a total of about 8 million parcels to Canada, Japan, and the United Kingdom in 1997. GPL parcels represented less than 1 percent of the total number of parcels sent to Canada by the three major carriers and USPS via GPL, about 60 percent of those sent to Japan, and about 2 percent of those sent to the United Kingdom. USPS and the three carriers also reported differences in the average weight and value of GPL and private express parcels. 
USPS reported that the average weight of GPL parcels sent to Japan, for example, was about 3 pounds; the private express carriers reported that the average weight of their parcels to Japan was about 21 pounds. Further, USPS reported that the average value of GPL parcels to Japan was about $120; the average value of parcels shipped by the private express carriers to Japan was about $900. Governments generally establish export and import control laws for national security and foreign policy purposes, to generate revenue, and to protect domestic industries and their citizens. Customs organizations are typically charged with ensuring that all goods and persons entering and exiting their countries comply with customs laws and regulations, as well as with facilitating the prompt and efficient movement of international goods. To ensure compliance, customs services monitor the arrival and departure of shipments of goods through their clearance processes. The export process normally consists of seeking permission from customs services to export goods. This may involve a process of listing goods on a manifest for presentation to the customs services for export clearance. The import process involves the inspection of goods for admissibility and the assessment and collection of any applicable duties and fees. Duties, also known as tariffs, are charges that a government imposes on the goods that are brought into the country. Using a Harmonized Tariff Schedule, each country can establish its own rates, which may vary with the type of goods and sometimes with the country of origin. In addition, fees or taxes, such as the Value Added Tax (VAT) in European Union countries, also may be assessed. Table 1.2 shows the duties and taxes applicable to imported parcels in the three countries and the United States. Historically, customs clearance requirements and procedures have developed along separate tracks for postal and cargo shipments. 
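The assessment of duties and taxes described above can be illustrated with a simple worked calculation. This sketch is ours, not drawn from the report; the function name and rates are illustrative, and it follows the common practice in European Union countries, such as the United Kingdom, of assessing VAT on the duty-inclusive value of the goods.

```python
def assess_import_charges(declared_value, duty_rate, vat_rate):
    """Illustrative assessment of import charges on a parcel.

    Duty is charged on the declared value of the goods; VAT (where it
    applies) is then charged on the duty-inclusive value, as in
    European Union countries. Rates are hypothetical inputs, not
    figures from the report.
    """
    duty = declared_value * duty_rate
    vat = (declared_value + duty) * vat_rate
    return duty, vat, declared_value + duty + vat

# A $100 parcel with a hypothetical 5% duty rate and 17.5% VAT
# (the U.K. standard VAT rate in 1997):
duty, vat, total = assess_import_charges(100.00, 0.05, 0.175)
print(f"duty ${duty:.2f}, VAT ${vat:.2f}, total ${total:.2f}")
```

As chapter 2 describes, who performs this calculation and when it is paid differed by channel: brokers calculated and paid charges on private express parcels before customs release, while charges on postal parcels in Japan were calculated by Japan Customs and collected from recipients.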
In the United States, customs requirements for clearance of postal items are affected by a postal law (39 U.S.C. 3623) that protects certain mail of domestic origin from inspection without a search warrant. Customs treatment of international mail parcels originated decades ago, when the need to handle large volumes of international mail prompted customs and postal administrations to work closely together to simplify forms and streamline their procedures for handling mail and parcels. According to customs officials in the United States and the countries in our review, different requirements and processes for postal and commercial imports evolved over time, and the requirements were not intended to be the same. They said that historically, more requirements have been imposed on commercial cargo than on postal parcels. Customs clearance for international mail parcels was intended to be simple for individuals sending parcels to other individuals overseas. International mail primarily consisted of written letters and low-value packages containing items for personal consumption. International express cargo, on the other hand, tended to be time-sensitive shipments being sent from a company in one country to a company in another country for the purpose of generating revenue. Because of the differences in the nature and value of the items entering a country by mail versus commercial cargo, most countries established different customs requirements and procedures for these two different types of shipments. However, with the development of the direct marketing industry through catalog sales and, more recently, through on-line computer orders, the historical distinctions between mail parcels and cargo have been blurred, as consumers increasingly purchase merchandise goods directly from businesses and have the goods delivered to their residences. 
The Universal Postal Union (UPU), an agency of the United Nations that governs international postal service, also established customs procedures for international mail. Under a UPU international agreement, the Universal Postal Convention, member countries are provided with a standard declaration form to prepare parcels for international shipment. On the declaration form, mailers provide information about a parcel’s contents, weight, and value, as well as the mailer’s and recipient’s names and addresses. Private express carriers began importing and exporting shipments into and out of the United States in the late 1960s and early 1970s as a small industry responding to the geographic dispersion of industries and organizations. Initially, the industry provided door-to-door service for documents. Cargo shipments were not part of express consignment shipments during the early years because regulatory barriers prevented the rapid, effective movement of packages. During the late 1970s, private express carriers began the practice of importing courier-accompanied parcels into the United States via commercial airlines. Initially, the U.S. Customs Service did not recognize the private express industry as a separate entity and treated private express shipments as passenger baggage or normal air cargo. In 1987, after repeated requests by the private express carriers to be treated as a separate and special industry, the Customs Service recognized the need to address the growing private express industry. In May 1989, U.S. Customs published regulations (19 C.F.R. Part 128) recognizing the special needs of the private express industry. These regulations provided definitions and guidelines for private express procedures, including application procedures, requirements that carriers provide advance manifest information and reimburse U.S. Customs for expedited clearances, and a requirement that express facilities be highly automated. 
Although we will discuss differences between foreign customs clearance requirements and processes for postal and private express carrier shipments in chapter 2, figure 1.2 is a general overview of the major steps involved in U.S. Customs clearance. This general overview is intended to explain the basic steps in the customs clearance process. The differences between customs clearance requirements for postal and private carriers in the United States were the subject of a report being prepared by the U.S. Customs Service in 1998 for the House and Senate Appropriations Committees. [Figure 1.2 summary: A parcel shipment arrives in the United States and is moved to a bonded or Customs-approved facility pending clearance, where Customs inspects the contents for admissibility and verifies the duties and taxes owed. For postal shipments, Customs completes the customs declaration using the shipping documentation, calculates duties and taxes, and releases the parcel to USPS for delivery and collection of the duties and taxes owed; USPS then remits to Customs the duties and taxes collected from the recipient. For private express shipments, the customs broker makes payment of applicable duties and taxes to Customs, which releases the parcel to the carrier, which delivers the parcel to the U.S. recipient.] In 1996, we reported to the Chairman of the Subcommittee on the Postal Service, House Committee on Government Reform and Oversight, the major unresolved issues in the international mail market, including concerns about unfair competition by private carriers. During the summer of 1997, representatives from the largest U.S. private express carriers expressed concerns to Congress about GPL, alleging that USPS (1) received preferential customs treatment, (2) used its governmental status to negotiate special arrangements with other governments, and (3) charged shipping rates that did not cover all of its operational and administrative costs for GPL service. 
In response to these concerns, the Chairman of the House Subcommittee on the Postal Service requested that we review several issues related to the competitiveness of the international mail market. To accommodate resource limitations, we agreed to address these issues in a series of reviews. In this first review, our primary objective was to determine whether differences existed in the customs requirements for the portion of the international mail market involving GPL and private express carrier parcels. We agreed to review the requirements for and customs treatment of GPL and private express parcels being sent to Canada, Japan, and the United Kingdom—the three primary countries where GPL service was being provided in 1997. In this report, we also discuss some issues related to addressing concerns about GPL’s perceived competitive advantages. We did not review customs treatment of other, non-GPL international postal services, which may have differed from customs treatment of GPL parcels. Further, although other government requirements that are related to both imports and exports may apply, such as those regarding airline security and shipments of restricted and prohibited goods, the focus of this review included only customs requirements. In a separate review, we are examining issues related to the Postal Service’s role and U.S. representation in UPU. In a future review, we plan to look at issues related to the Postal Service’s pricing and allocation of costs for its GPL service. To identify whether customs treatment differed for GPL parcels and similar private express parcels, we compared (1) the customs statutory and regulatory requirements and (2) the operational practices and processes for importing merchandise through GPL or through private express carriers into each of the three foreign countries. We were assisted in our analysis of the legal requirements in Japan by Dr. Sung Yoon Cho, Assistant Chief of the Far Eastern Law Division at the Library of Congress. 
Some of the differences in customs treatment could not be linked to written requirements. Rather, officials from the private express carriers, USPS, and foreign customs and postal services described them to us as the operational practices and processes that were followed. To obtain detailed information about GPL service to the three countries in our review, we interviewed USPS officials responsible for implementing and administering GPL. We met with officials at the U.S. Customs Service to better understand differences in U.S. Customs clearance of postal and commercial shipments. We interviewed government officials at the Japan Postal Bureau and the Japan Customs Bureau in Tokyo and Osaka, Japan; Parcelforce and H.M. Customs and Excise officials in London; and Revenue Canada officials in Ottawa to understand the customs clearance processes and requirements in each country. We talked to officials at Purolator and PBB Global Logistics, which handle the delivery and customs clearance, respectively, of GPL parcels shipped to Canada. We also asked these officials to verify our descriptions of the processes and the legal citations for foreign customs clearance of imported GPL and private express parcels. The information on private express clearance processes discussed in this report was obtained from three private express carriers (DHL, FedEx, and UPS) because they were identified as being the largest competitors with USPS for parcel delivery services to Canada, Japan, and the United Kingdom. We interviewed officials of the three private express carriers in the United States, as well as their employees involved in customs processing in Japan and the United Kingdom, to better understand their shipping and clearance processes and their concerns related to the competitiveness of these processes. 
In addition, we talked to representatives of other private international delivery companies, including Airborne and Global Mail Ltd., as well as the Air Courier Conference of America (ACCA), a trade association whose members are domestic and international air courier and air express companies operating in the United States, to determine whether USPS competitors had any additional concerns about GPL customs treatment. We also interviewed several U.S. direct marketers, including GPL customers and customers of the private express companies, to learn what factors were important to them in determining how they would export their shipments. Finally, we interviewed officials at the U.S. Department of Commerce, the U.S. Embassy in Japan, the U.S. Chamber of Commerce in Japan, the Direct Marketing Association, and the Mail Order Association to better understand the direct marketing industry’s concerns about the competitiveness of international delivery services. To better understand customs clearance processing for GPL and commercial carrier shipments, we observed the various stages of the customs clearance processes, including the procedures involved before parcel shipments leave the United States and the procedures involved in foreign customs clearance, for both GPL and private express carrier shipments. Our visits to observe pre-export activities that occur in the United States included a U.S. mailer’s facility where catalog orders are processed; a GPL processing facility at John F. Kennedy International Airport in New York; and the processing centers of the three major private express carriers, located in Memphis, TN; Louisville, KY; and New York (Kennedy Airport). 
In addition, our observations of customs clearance processes included the postal and commercial clearance facilities at Kennedy Airport in New York; Heathrow, Stansted, and East Midlands Airports in the United Kingdom; and the New Tokyo International (Narita) and Osaka Kansai International Airports in Japan. The purpose of these visits was to obtain a basic understanding of the customs clearance process, but the visits were not intended to serve as an independent verification of whether the foreign customs clearance processes were appropriately implemented as described by foreign customs officials. Further, we did not have audit authority that would have provided access to records of foreign customs services and would have allowed us to verify the collection of duties and taxes on imported parcels from the United States. We considered options to address concerns raised by USPS competitors about whether duties and fees were being appropriately assessed on GPL packages. However, such options as sending comparable GPL and private express parcels as a test to measure differences in customs treatment or examining foreign governments’ customs records were not deemed feasible for a variety of reasons, including methodological and resource limitations. Further, because GPL service currently involves only the export of parcels from the United States to other countries, we did not assess U.S. import customs clearance processes. We did not verify data provided by USPS or the carriers. USPS provided data on the number of GPL parcels shipped to Canada, Japan, and the United Kingdom in 1997; the number that were dutiable; and their average weight and value. USPS also provided documentation on the payment of duties and taxes in Canada and the United Kingdom for GPL parcels in 1997. The carriers provided 1997 data on their costs of complying with requirements for shipping parcels from the United States to Canada, Japan, and the United Kingdom. 
In addition, the carriers provided data on the number of parcels that they sent to those three countries, as well as average parcel weight and value in 1997. We requested comments on a draft of this report from USPS, the Department of the Treasury and U.S. Customs Service; the governments of Canada, Japan, and the United Kingdom; ACCA; DHL; FedEx; and UPS. We received written comments from three organizations—ACCA, USPS, and Revenue Canada; Treasury and U.S. Customs chose not to provide comments. The private express carriers chose to submit their comments together through ACCA’s written comments. The written comments are reprinted in appendixes VI through VIII. A summary of the comments and our response are provided at the ends of chapters 2 and 3. The customs services of Canada, Japan, and the United Kingdom provided technical comments, which are incorporated throughout the report where appropriate. We did our work primarily in Washington, D.C., Japan, and the United Kingdom, as well as other locations identified in this chapter, from August 1997 through May 1998, in accordance with generally accepted government auditing standards. Legal differences in foreign customs treatment of postal and private express parcels existed in all three countries. Differences in foreign customs treatment of GPL and private express parcels were greatest in Japan, where private express carriers were subject to requirements regarding the preparation of shipping documentation and payment of duties and taxes for their parcels that did not apply to GPL parcels. In the United Kingdom, USPS was providing certain shipping data to the customs service on GPL parcels that were similar to the information that the carriers were required to provide. However, differences remained in the requirements applicable to importing postal and private express parcels into the United Kingdom. 
In Canada, GPL and private express parcels were subject to the same requirements because GPL parcels were being delivered for USPS by a private express carrier there. Regarding two major areas of concern to the carriers, we found no evidence that GPL parcels received preferential treatment over private express parcels in terms of (1) the speed of customs clearance in all three countries and (2) the assessment and collection of applicable duties and taxes in Canada and the United Kingdom. On behalf of individual importers, USPS was paying duties and taxes on GPL parcels shipped to Canada and the United Kingdom. We were unable to determine whether duties and taxes were assessed on dutiable GPL parcels shipped to Japan because (1) USPS did not have records on payment of duties and taxes on GPL parcels shipped to Japan, because the recipients of postal parcels in Japan are responsible for paying applicable duties and taxes; and (2) Japan Customs did not provide statistics on the amount of duties and taxes that recipients paid on GPL parcels. The delivery and customs clearance processes for GPL and private express parcels in Canada, Japan, and the United Kingdom were based primarily on the domestic import requirements applicable to mail and goods imported by private carriers in those countries. U.S. law subjected private express parcels to customs inspection prior to export, but outbound postal parcels were not subject to this requirement. Private express carriers must file manifests on outbound parcels with U.S. Customs, which the agency uses to select certain parcels for inspection. We found that the private express carriers followed similar delivery processes for shipments from the United States to the three countries in our review. Generally, the laws of the importing countries required the carriers to provide detailed data and supporting documentation on their shipments, such as air waybills, manifests, harmonized tariff codes, and invoices. 
The carriers were also responsible for paying applicable duties and taxes on their imported parcels. However, USPS’ delivery and customs clearance processes for GPL parcels differed among the three countries. The differences reflected USPS’ use of different types of GPL delivery agents, which were subject to different sets of requirements within the three countries. In Japan and the United Kingdom, GPL parcels were delivered by those countries’ postal services and were treated as mail under their customs laws. In Canada, GPL parcels were delivered by a private express company and were thus subject to Canadian customs laws that applied to private carriers for importing goods. The importing requirements for postal and private express parcels in the three countries are detailed in appendixes III through V. Different foreign legal requirements also affected how USPS handled GPL parcels being shipped from the United States to Canada, Japan, and the United Kingdom. In Japan, for example, USPS was required only to affix to GPL parcels labels containing basic customs declaration information as prescribed by UPU; the labels were generated by USPS’ automated data information system, CPAS. Further, USPS was not required to calculate or pay duties and taxes on GPL parcels shipped to Japan because, under Japanese law, duties and taxes on postal parcels were calculated by Japan Customs and paid by the parcel recipients. By contrast, in the United Kingdom, USPS was providing customs information similar to that provided by private express carriers as well as calculating and paying duties and taxes owed on GPL parcels. In providing comments on a draft of this report, Japan Customs said that it was planning to introduce an import information system in cooperation with USPS similar to that used in the United Kingdom. 
In Canada, where GPL parcels were being delivered by a private express carrier and cleared through customs by a broker, USPS was providing customs information similar to that required from private express carriers, in addition to calculating and paying duties and taxes. We reviewed the applicability of customs requirements to private express and GPL parcels shipped from the United States to Canada, Japan, and the United Kingdom by interviewing customs officials and reviewing the relevant laws and regulations. In the process, we identified 11 major categories of requirements that potentially differed between the carriers and postal services. The 11 categories, only the first of which involved U.S. law, included (1) U.S. Customs inspection of outbound parcels, (2) preparation of import shipping documentation, (3) electronic submission of shipping data, (4) use of licensed customs brokers, (5) calculation of duties and taxes, (6) the timing of payment of duties and taxes, (7) payment for customs clearance outside of regular business hours, (8) posting of bonds or other security to customs services for storage facilities, (9) retention of shipping records, (10) liability for importation of restricted or prohibited parcel contents, and (11) liability for incorrect or missing customs declarations. Table 2.1 summarizes our findings regarding the requirements and practices in shipping GPL and private express parcels to Canada, Japan, and the United Kingdom in the 11 major categories. Customs treatment of GPL and private express parcels shipped from the United States to Japan was determined largely by Japanese law, which prescribed different sets of requirements for postal and private express parcels. Under Japanese law, postal parcels were exempt from the major requirements that applied to private express parcels. 
Also affecting customs treatment were the carriers’ different valuations of certain imported goods, which provided the basis for determining the amount of duties and taxes owed. Private express carriers or their brokers were subject to significantly more requirements than were USPS and the Japan Postal Bureau in shipping their parcels from the United States to Japan. U.S. law subjected private express parcels to customs inspection prior to export, but outbound postal parcels were not subject to this requirement. Under Japanese law, the carriers or their brokers were required to provide detailed shipping documentation, calculate duties and taxes, pay or secure payment of duties and taxes before Customs’ release to the delivery agent, and retain shipping records. In addition, the carriers or their brokers could be liable for fines and penalties regarding the importation of restricted or prohibited parcel contents and for incorrect or missing customs declarations. They also paid for customs clearance outside of regular business hours to expedite parcel clearance. Although not required to by law, the carriers entered most of their import shipping data into Japan Customs’ computer system. By contrast, USPS and the Japan Postal Bureau were not subject to these requirements and practices with regard to GPL parcels, with the exception of the postal services’ potential liability for restricted or prohibited parcel contents. In shipping parcels from the United States to Japan, USPS and the private express carriers followed different delivery and customs clearance processes. A major process difference involved the delivery agents used by USPS and the carriers in Japan. USPS paid the Japan Postal Bureau to deliver GPL parcels within Japan. In comparison, employees of the three major private express carriers, or their Japanese business partners, delivered their parcels from the United States to recipients within Japan. 
Further, although private express parcels were typically cleared at airport facilities, the Japan Postal Bureau facility that received the most GPL parcels, where those parcels were cleared, was located in downtown Tokyo, about 2 hours from the New Tokyo International (Narita) Airport, where the parcels arrived from the United States. Differences in the delivery and customs clearance processes for GPL and private express parcels reflected different sets of requirements contained in Japanese law applicable to postal and private express parcels. Appendix III summarizes the Japanese laws and regulations that provide the basis for the different requirements. Figures 2.1 and 2.2 show the preparation, delivery, and customs clearance processes for GPL and private express parcels to Japan. A comparison of the delivery processes for postal and private express parcels in Japan illustrates the differences in the roles of brokers, who handled customs clearance for the carriers, and of the Japan Postal Bureau, which presented GPL parcels to the Japan Customs Bureau for customs clearance (see GPL steps 9 through 17 and carrier steps 12 through 19 of fig. 2.2). The carriers were not required by law in Japan to use licensed customs brokers. In practice, however, the carriers believed it was necessary to use brokers to comply with the requirements for preparing the information needed for customs clearance, including air waybills, manifests, harmonized tariff codes, and invoices. Japanese law did not allow the Postal Bureau to act as a licensed broker. Further, Japanese law exempted the Postal Bureau from submitting documentation on its imported mail that the private carriers were required to provide. Under Japanese law, the carriers, on behalf of individual importers, were required to pay or secure payment of duties and taxes on imported parcels before Customs’ release to the delivery agent.
In comparison, Japanese law exempted postal parcels from the requirement that importers pay duties and taxes before customs clearance. However, recipients of postal parcels were required to pay duties and taxes upon delivery at their doors or at the post office before receiving their parcels from the Postal Bureau (see step 16 of fig. 2.2 for both GPL and carriers). In addition, recipients of dutiable GPL parcels were charged a 200-yen fee by the Japan Postal Bureau when it collected applicable duties and taxes, but recipients of private express parcels were not subject to this fee. According to Japan Customs officials, the basis for charging recipients of dutiable postal parcels a 200-yen fee is contained in provisions of the Universal Postal Convention and the International Postal Rules. The carriers’ brokers in Japan were not required by law to enter parcel data into the Nippon Automated Cargo Clearance System (NACCS), Japan Customs’ computerized customs clearance system (step 12 of fig. 2.2 for carriers). However, to speed customs clearance, the three carriers said they entered between 70 and 100 percent of their import data into NACCS, which automatically calculated duties and taxes. With respect to GPL parcels, Japanese law did not require submission of postal data into NACCS. Instead, Japan Customs employees entered data on dutiable postal parcels (values and tariff codes) into a separate computer system called the Customs Overseas Mail Tax Information System (COMTIS), which determined the duties and taxes. Brokers were required to hold the carriers’ goods in facilities under the control of or approved by Japan Customs before customs clearance, but they were not required to post bonds in connection with those facilities (step 13 of fig. 2.2 for carriers). GPL parcels were held in postal facilities for customs clearance (step 10 of fig. 2.2 for GPL), and the Japan Postal Bureau was not required to post bonds to secure the parcels.
Japanese law also required private carriers’ brokers to pay for customs clearance outside of regular business hours. The carriers indicated that charges for customs clearance outside of regular business hours were often incurred because of limits on the number of shipments that could be cleared within an hour in Japan. Japan Customs officials said that GPL parcels were not cleared outside of regular business hours. Further, Japanese law did not subject the Japan Postal Bureau to payment for customs clearance of mail outside of regular business hours. In addition, under Japanese law, carriers’ brokers were subject to a requirement to maintain customs clearance records on their parcels for 3 years. No similar provision of law applied to records of postal items. According to Japan Customs, Japanese law imposes fines and penalties against any persons, whether postal services or private carriers, for importing prohibited goods if they are knowledgeable about the illegal parcel contents. A Japanese law implemented in October 1997 subjected importers in Japan to an additional 10- to 15-percent tax for filing an incorrect customs declaration, or failing to file one, without a proper reason. This law did not apply to the postal services. Japan Customs is responsible for calculating duties and taxes on imported postal parcels, including GPL parcels. On the other hand, private carriers or their brokers must calculate duties and taxes on their imported parcels based on applicable law. These calculations of duties and taxes by the carriers are later verified by Japan Customs. The carriers indicated that because they or their brokers calculated duties and taxes on parcels imported into Japan, their records prove they pay 100 percent of applicable duties and taxes. The carriers were concerned that they have lost direct marketers as customers because of a perception that duties and taxes were not always assessed on dutiable postal parcels in Japan. 
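The declaration surcharge described above lends itself to a short illustrative sketch. The 10- to 15-percent range comes from the report; the assumption that the surcharge is computed on the unpaid duty amount is ours, for illustration only.

```python
# Hedged sketch of the post-October-1997 additional tax for an incorrect or
# missing customs declaration. The 10-15 percent range is from the report;
# applying it to the duty shortfall is our assumption, not a statutory rule.

def additional_tax(duty_shortfall_yen: float, rate: float = 0.10) -> float:
    """Additional tax owed on an under-declared importation (illustrative)."""
    if not 0.10 <= rate <= 0.15:
        raise ValueError("report cites a 10- to 15-percent range")
    return duty_shortfall_yen * rate

# Example: a 50,000-yen duty shortfall at each end of the cited range.
print(additional_tax(50_000))        # 5000.0
print(additional_tax(50_000, 0.15))  # 7500.0
```

The example simply scales the shortfall; the actual Japanese rules for choosing between the 10- and 15-percent rates are not described in the report.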
Japan Customs officials, however, said that GPL and private express parcels received the same customs treatment. In addition, the officials said that duties and taxes were being assessed on all dutiable parcels from the United States. In providing comments on a draft of this report, Japan Customs emphasized that postal parcels, including GPL parcels, were subject to full inspection. We were unable to determine whether duties and taxes were assessed on dutiable GPL parcels shipped to Japan because (1) USPS did not have records on payment of duties and taxes on GPL parcels shipped to Japan, because the recipients of postal parcels in Japan are responsible for paying applicable duties and taxes; and (2) Japan Customs did not provide statistics on the amount of duties and taxes that recipients paid on GPL parcels. On the basis of information provided by USPS and Japan Customs and our observations, we found no evidence that GPL parcels received preferential treatment over private express parcels with respect to the speed of customs clearance. However, because of different valuations of imported goods by private carriers or their brokers in Japan, differences existed in the amounts of duties and taxes paid on some postal and private express parcels. Finally, we were unable to determine the significance of USPS’ sorting GPL parcels destined for Japan by value—a practice that we observed at USPS’ GPL facility at John F. Kennedy International Airport in New York. Japan Customs indicated that it collected 1.1 trillion yen in customs duties and 2.1 trillion yen in internal consumption taxes in 1996, but it did not maintain specific statistics reflecting the amount of duties collected on GPL parcels or the number of dutiable parcels. USPS data indicated that about 44 percent of the GPL parcels shipped to Japan in 1997 would have been dutiable. 
During our tour of the International Post Office in Tokyo, where many GPL parcels are processed, we observed Japan Customs officials inspecting and assessing duties on some GPL parcels. Japanese law allowed imported goods to be valued at their wholesale, rather than retail, values if the goods were deemed to be for the personal use of the importer or were a gift to a person who is a resident in Japan and the goods were deemed to be for the personal use of the recipient of the gift. In assessing the customs value of goods, Japan Customs officials said that imported parcels from direct marketers that are addressed to an individual, in many cases, qualified as goods deemed to be for the importers’ personal use and could be valued at their wholesale, rather than retail, values. The officials said that the government’s objective in assessing duties and taxes on mail-order goods based on their wholesale values was to benefit Japanese consumers. Japan Customs officials said wholesale valuations could be applied for both GPL and private express parcels containing goods from direct marketers for the recipients’ personal use, and they applied a standard 60-percent valuation of GPL parcels’ retail value to calculate wholesale values. Japan Customs officials said that their formula for calculating the wholesale value of mail-order goods was not a written policy, regulation, or law. The officials said that this formula for calculating the wholesale values of goods from direct marketers was based on their review of catalogs from mail-order companies that sell similar goods. In addition, the officials said that if the information was not available, the carriers or their brokers could consult with them to calculate the wholesale value using standard profit margins. 
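The valuation arithmetic described above can be sketched briefly. The 60-percent wholesale factor and the roughly 125-yen-per-dollar exchange rate are from the report; the 10,000-yen duty-free limit is our assumption, chosen because it reproduces the $133 dutiable threshold the report cites for GPL parcels.

```python
# Sketch of Japan Customs' valuation arithmetic as described in the text.
# Known from the report: wholesale value = 60% of retail for personal-use
# mail-order goods; exchange rate of about 125 yen per dollar.
# Assumption (ours): a 10,000-yen duty-free limit on the customs value.
WHOLESALE_FACTOR = 0.60
YEN_PER_DOLLAR = 125
DUTY_FREE_LIMIT_YEN = 10_000

def customs_value(retail_value_yen: float, personal_use: bool) -> float:
    """Duty/tax base: wholesale (60% of retail) if personal use, else retail."""
    return retail_value_yen * WHOLESALE_FACTOR if personal_use else retail_value_yen

# A parcel becomes dutiable once its customs value exceeds the limit, so the
# implied retail-value threshold in dollars is:
threshold_usd = DUTY_FREE_LIMIT_YEN / WHOLESALE_FACTOR / YEN_PER_DOLLAR

print(customs_value(20_000, personal_use=True))   # 12000.0
print(customs_value(20_000, personal_use=False))  # 20000.0
print(round(threshold_usd))                       # 133
```

Under these assumptions, the numbers are internally consistent with the report's later observation that a $300 sorting cutoff would not separate dutiable from nondutiable parcels.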
Japan Customs officials said that because most, if not all, GPL parcels are shipped by direct marketers, they considered GPL parcels, after inspection, to be for personal use and assessed duties and taxes on them on the basis of their wholesale values. With regard to private express parcels, we found that the carriers were valuing certain imported goods differently, which could affect the amount of duties and taxes owed on their imported parcels. Of the three major private express carriers we contacted for this study, one carrier indicated that it was using wholesale valuations to calculate duties and taxes only for imported mail-order goods. Another said that it was using wholesale valuations for both imported mail-order goods and gifts. The third carrier was not using wholesale valuations for any of its imported goods as a basis for calculating duties and taxes. The carriers’ valuation of imported goods was important because their parcel values provided the basis for calculating duties and taxes; for postal parcels, by contrast, Japan Customs assessed the value and calculated duties and taxes. The carriers indicated that Japan Customs rarely, if ever, adjusted the carriers’ calculations of duties and taxes to reflect the wholesale values of imported goods. In providing comments on a draft of this report, Japan Customs indicated that under the carriers’ “self-assessment” system, private express parcels were exempt from full inspection. Further, Japan Customs indicated that under the self-assessment system, the carriers’ import declarations were considered to be correct and would be adjusted only at the time of inspection. We found no evidence that GPL parcels received preferential treatment by Japan Customs in terms of the speed of clearance. Indeed, it appeared that private express parcels were cleared significantly faster than were GPL parcels. According to Japan Customs, private express parcels were normally cleared in Japan within 2 hours.
They also said that carriers’ import requirements made it possible to clear private express parcels at the fastest possible speed. The carriers reported that customs clearance in Japan generally took between 2 and 5 hours for their parcels not held for inspection. Japan Customs officials said that the agency did not maintain records on customs clearance times for GPL parcels, but they also said that most GPL parcels were released to the Postal Bureau within the same day that they were received. Data provided by USPS indicated that in 1997, clearance of GPL parcels in Japan took an average of 2.17 days. During a tour of USPS’ GPL facility at John F. Kennedy International Airport in New York, where GPL parcels are prepared for shipment to Japan, we observed USPS employees sorting GPL parcels into two categories: those with a value of $300 or less and those with a value of more than $300. USPS officials said that they were sorting the parcels by value at the request of Japan Post on behalf of Japan Customs, but Japan Customs officials said that they had not requested USPS to do so. In commenting on a draft of this report, Japan Customs indicated that it had requested USPS to sort the parcels by destination, but not by value. Even if GPL parcels arrived in Japan sorted into those valued at $300 or less and those valued at more than $300, they would not be separated into dutiable and nondutiable parcels. At an exchange rate of about 125 yen per dollar, the dutiable threshold for GPL parcels would be $133, on the basis of their wholesale valuation. We were unable to determine the significance of this sorting practice. The customs treatment of GPL and private express parcels being shipped from the United States to the United Kingdom was governed primarily by legal requirements applicable in the United Kingdom, which included U.K. and European Union (EU) laws and regulations.
USPS’ provision of certain shipping data to the customs service in the United Kingdom on GPL parcels, although not required by law, served to lessen the extent of operational differences in the treatment of postal and private express parcels in the United Kingdom. However, differences remained in the requirements applicable to imported postal and private express parcels. As in Japan, private express carriers in the United Kingdom were subject to requirements that did not apply to postal services. In the United Kingdom, carriers or their brokers were required to pay or secure duties and taxes before customs clearance, provide security to the customs service for storage facilities, and retain shipping records. They also paid for customs clearance outside of regular business hours to expedite parcel clearance. By contrast, USPS and its delivery agent for GPL parcels in the United Kingdom were not required to pay or secure duties and taxes before customs clearance, post bonds or other security to the customs service for storage facilities, or retain shipping records, and did not normally have GPL parcels cleared outside of regular business hours. Both the carriers and the postal service in the United Kingdom were subject to liabilities for illegal parcel contents and for incorrect or missing customs declarations. USPS was providing electronic shipping data on GPL parcels to its delivery agent in the United Kingdom for access by Her Majesty’s Customs and Excise (H.M. Customs) officials. The content of USPS’ shipping data on GPL parcels was similar to that provided to H.M. Customs by the carriers on their parcels. Further, USPS was paying duties and taxes on GPL parcels shipped to the United Kingdom. USPS officials said that they offered to follow these procedures in establishing GPL service to the United Kingdom.
USPS and the private express carriers followed different processes for delivering parcels from the United States to the United Kingdom, reflecting the use of different delivery agents. Within the United Kingdom, USPS paid Parcelforce, a for-profit subsidiary of Royal Mail, the United Kingdom’s postal service, to deliver GPL parcels. By contrast, employees of the three major private express carriers delivered their parcels or contracted with local delivery companies within the United Kingdom. [Figure text: The carrier receives parcels at its processing facility and scans barcodes. Carrier employees ensure that the required documents, received from the shipper in electronic form or hard copy, have been completed and that the parcel contents can be imported into the U.K. U.S. Customs reviews the documents to determine whether the contents of parcels are exportable and whether parcels are (1) cleared for export or (2) subject to inspection. Once U.S. Customs clears the shipment (parcels) for export, the carrier’s employees load it on an international aircraft to the U.K. While the shipment is in transit, the carrier transmits customs clearance data to its U.K. brokerage operations electronically and by fax.] USPS officials said that as a result of its electronic submission of shipping data to H.M. Customs, which included the values and contents of GPL parcels, as well as applicable duties and taxes, all duties and taxes were being paid on dutiable GPL parcels being shipped to the United Kingdom. USPS officials provided data indicating that over 90 percent of the GPL parcels shipped from the United States to the United Kingdom in 1997 were dutiable. In addition, USPS provided documentation indicating that it paid duties and taxes on GPL parcels shipped to the United Kingdom in 1997. The only apparent difference in customs treatment of postal and private express parcels in the United Kingdom related to the speed of customs clearance.
USPS did not have exact data indicating how long customs clearance took in the United Kingdom, but USPS officials said that GPL parcels were normally cleared within the same day that they arrived in the United Kingdom. An H.M. Customs official said that private express parcels were cleared, on average, in 2 hours. However, the carriers said that under new simplified procedures, customs clearance occurred immediately upon arrival for certain imported goods. The treatment of GPL and private express parcels being shipped from the United States to Canada was determined by Canadian law applicable to imports of goods by private carriers into Canada. Although Canadian law prescribed different sets of requirements for postal and private express carrier parcels, USPS chose to have its GPL parcels delivered by a private express carrier in Canada. According to Revenue Canada, GPL parcels were therefore treated as goods imported by private express carriers. Although GPL parcels bound for Canada originated as mail in the United States, they were treated as private express parcels in Canada and were subject to the same requirements. These requirements included the preparation of shipping documentation, calculation of duties and taxes, posting of security for storage facilities, retention of shipping records, payment for customs clearance outside of regular business hours, and potential liability for restricted or prohibited parcel contents and for incorrect or missing customs declarations. Appendix V summarizes the Canadian laws and regulations that provide the basis for the requirements. Figures 2.5 and 2.6 show the preparation, delivery, and customs clearance processes for GPL and private express parcels to Canada. According to Purolator, the private express company that delivered GPL parcels for USPS in Canada, the only difference in the process between GPL and its other parcels was in the level of automated shipping data provided to Revenue Canada.
Purolator indicated that USPS provided GPL data electronically through CPAS, but the carrier’s other commercial customers provided shipping documentation in both electronic and paper format. In addition, Purolator indicated that CPAS provided harmonized tariff codes on GPL parcels, but some of the carrier’s other commercial customers did not provide the codes, requiring the broker to determine them. Revenue Canada normally charges a $5-per-parcel fee for Canada Post to collect duties and taxes on imported postal parcels, but USPS avoids this fee on GPL parcels because they are imported by a private carrier. Purolator said that GPL parcels were being cleared through Revenue Canada’s low-value shipment program, which allowed Purolator’s broker to pay duties and taxes and provide harmonized codes after customs clearance on goods valued at less than $1,600 Canadian, as long as security for payment was provided. Under this program, Revenue Canada used a manifest to clear goods immediately, and Purolator’s broker could pay duties and taxes later. USPS provided documentation indicating that it paid duties and taxes on GPL parcels shipped to Canada in 1997. According to Revenue Canada, the recipients of imported parcels in Canada were subject to liabilities for importation of restricted or prohibited contents, and the importers or their brokers were liable for missing or incorrect customs declarations. USPS indicated that because it was submitting detailed shipping documentation to Revenue Canada, including data on duties and taxes owed, all duties and taxes on GPL parcels being shipped to Canada were being paid. According to USPS, over 95 percent of GPL parcels shipped to Canada in 1997 were dutiable. Purolator said that customs clearance times for GPL and private express parcels were the same, ranging from a half hour to 3 hours. USPS did not have data on how much time customs clearance took for GPL parcels in Canada.
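The low-value shipment eligibility test described above can be sketched as follows. The $1,600 Canadian threshold is from the report; the function name and the modeling of posted security as a simple boolean are ours, for illustration.

```python
# Illustrative test for Revenue Canada's low-value shipment (LVS) stream as
# described in the text: goods valued under $1,600 Canadian could be released
# immediately on a manifest, with duties, taxes, and harmonized codes
# submitted by the broker after clearance, provided security for payment
# had been posted.
LVS_LIMIT_CAD = 1_600

def lvs_eligible(value_cad: float, security_posted: bool) -> bool:
    """True if a parcel may clear under the deferred-payment LVS stream."""
    return value_cad < LVS_LIMIT_CAD and security_posted

print(lvs_eligible(500, security_posted=True))    # True
print(lvs_eligible(2_000, security_posted=True))  # False
```

Parcels failing the test would fall back to the regular clearance stream, in which duties and taxes are settled before release.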
The comments we received from USPS, Revenue Canada, and ACCA indicated general agreement with the facts presented in the report on the differences in the requirements and procedures for customs clearance of GPL and private express parcels in the three countries in our review. Revenue Canada said that the report correctly described the key features of its processing of GPL parcels. Although USPS and ACCA generally agreed with the differences in customs requirements and procedures we described, they interpreted those differences differently. ACCA also believed that the review’s limitations should have been more carefully explained. In its comments, USPS said the report confirmed that no preferential customs arrangements (“sweetheart deals”) benefitting the Postal Service existed and that USPS enjoys no customs clearance advantage over private express parcels. USPS said that rather than identifying sweetheart deals, the report identified different, but not better, customs processes for postal parcels, compared to private parcels. In addition, USPS said that its competitors’ allegations that GPL service is fraught with unfair advantages are erroneous. USPS also agreed with the concerns of its customers that they would lose a competitive and attractive shipping option if GPL service were made unavailable. We did not confirm that sweetheart deals did not exist, as USPS indicated in its comments. Instead, we reported that we found no evidence that GPL parcels received preferential treatment over private express parcels in two specific areas. However, we did observe some unexplained activities, such as the sorting of GPL parcels bound for Japan by value at USPS’ GPL processing center in New York. Neither Japan Customs nor Japan Postal officials acknowledged requesting this practice.
Although ACCA generally agreed with the stated differences in customs treatment between GPL and privately transported parcels described in the report, it suggested that additional clarification of the commercial implications of the differences in customs treatment would be helpful, such as the benefits from cost savings traceable to differences in customs treatment. As we noted in the report, differences in customs requirements for GPL and private express parcels existed, and in some cases, there were additional requirements for the private carriers. However, we did not assess the advantages or disadvantages of these differences. ACCA also noted that although the study’s focus on only the outbound leg of GPL’s shipments was too limited and thus would not illuminate all aspects of the issue related to whether USPS has an unfair competitive advantage in providing international parcel delivery service, it was a good place to start. ACCA raised four areas of concern. First, it noted that the customs treatment of GPL parcels may not be entirely representative of all customs treatment of international postal shipments. Second, ACCA said that the report did not include customs treatment of parcels entering the United States, particularly return merchandise, which ACCA indicated was significant in terms of costs and service. Third, it stated that we did not address the extent to which differences in customs treatment may result from manipulation of international law by USPS. Fourth, ACCA noted several points in this chapter that it believed deserved additional clarification or research, including (1) the incidence of duty collection, (2) a simplified classification system available to postal shipments in Japan, (3) private carrier fees for customs clearance, and (4) legal requirements for GPL in Canada. On the first issue, we were asked by the requester to focus specifically on GPL in response to concerns that the private carriers had raised regarding GPL.
Thus, we did not review customs procedures related to international postal shipments other than GPL parcels, and we were not in a position to comment on the relationship between GPL shipments and other international postal shipments. On the second issue, we asked the foreign customs officials and private carriers for information related to merchandise returns, but we did not receive enough information to discuss the issue in a meaningful way, so we did not address it in this report. Regarding the third issue, as ACCA indicated, we are currently reviewing issues related to U.S. representation at UPU. However, we do not anticipate the review will address ACCA’s assertion that UPU customs procedures are the basis for most of the differences in customs treatment identified in this report or whether USPS uses its position within UPU to manipulate customs law and practices to its advantage. We also had some scoping limitations related to some of the areas in this chapter where ACCA desired additional clarification. For example, the private carriers believed that the incidence of duty collection for GPL shipments is “substantially less than 100 percent for shipments entering Japan.” As we explain in the Objectives, Scope, and Methodology section in chapter 1, we did not have audit authority to examine the records of foreign governments to verify the incidence of duty collection. We obtained records from USPS indicating that it paid duties and taxes on GPL parcels shipped to Canada and the United Kingdom, but USPS was not responsible for duty payments in Japan. Further, as explained in chapter 1, we did not conduct a limited test of customs treatment in Japan, as suggested by ACCA, because of resource and methodological limitations. From a methodological perspective, a limited test would not produce reliable results that would be generalizable to the universe of all GPL parcels sent from the United States to Japan.
ACCA also wanted the report to call more attention to the simplified classification system in Japan, which it said was available only for postal shipments. We reported the position of Japan Customs officials that the simplified classification system applies to both postal and private carrier parcels and cited the relevant provision of Japanese law. ACCA also stated that private carriers pay for customs clearance services outside of regular business hours in part because Customs authorities generally refuse to provide dedicated staff at private carriers’ facilities during normal business hours. We added information to this chapter that the carriers indicated that Japan Customs limits the number of parcels that can be cleared per hour. However, because we were not informed by the carriers during our visit to Japan of their concern about insufficient staffing for customs clearance, we did not discuss this matter with Japan Customs. Finally, ACCA suggested that table 2.1 show the application of customs procedures to GPL in Canada as “practices” rather than “requirements.” We discussed this issue with Revenue Canada officials, who said that the designation should be “requirements” because, if USPS uses a private carrier as its delivery agent, it must meet the requirements for private carriers. Although ACCA is correct that USPS could choose to use Canada Post to deliver GPL parcels in Canada, we relied on Revenue Canada officials in this regard. The private express industry has commented that differences in customs clearance requirements for postal and privately shipped parcels result in more work and higher costs for the carriers, placing them at a disadvantage in competing with USPS to provide international parcel delivery service. However, USPS officials noted that they also incur costs that the private carriers do not, such as meeting their obligations to provide delivery service to persons in all communities of the United States and to member countries of UPU. 
In addition, businesses that ship their goods internationally, as well as USPS and the carriers, stressed the importance of having competitive choices that provide alternatives in the cost and speed of international shipping services for consumers. The carriers have urged Congress to protect fair competition by enacting legislation that would require USPS to compete on the same terms, particularly for customs treatment, as private carriers. This proposal raises several questions related to GPL, such as (1) whether international parcels delivered by postal services and private carriers should be subject to the same requirements and customs treatment; (2) if so, what requirements might be appropriate to apply to international parcels; and (3) how those requirements should be implemented. Although we do not take a position on whether the existing requirements or a change in policy would be desirable, we discuss some potential implications that have been raised and may be relevant to considering how these options could be implemented. Regarding the issue of whether the same customs requirements should be applied to postal and privately shipped parcels, USPS and private carrier officials have conflicting views about whether that would achieve a more “level playing field.” Additional implications of applying the same requirements include the potential effects on the costs and choices that would be available to businesses and consumers, as well as the potential impact on the workloads of U.S. and foreign postal and customs services. If the same requirements were to be applied, many ongoing national and international efforts aimed at streamlining customs procedures worldwide could benefit both USPS and the private carriers in terms of how they provide international parcel delivery services. Regarding how to implement the same requirements, the U.S. 
government does not have jurisdiction over foreign customs requirements, so changes involving foreign requirements might need to be negotiated through bilateral or multilateral agreements. Moreover, potential conflicts with current international agreements would have to be considered. The carriers have urged Congress to enact legislation aimed at applying the same customs clearance requirements for USPS and the private express industry. This could be achieved in several ways, such as (1) applying the carriers’ requirements to competitive international postal products, (2) allowing the carriers to follow the same requirements that apply to competitive international postal products, (3) applying simplified procedures to both postal and private express parcels, or (4) other options. In this report, we do not attempt to provide a comprehensive analysis of potential changes, but we present some of the issues in this regard that were raised by stakeholders in the course of our review. The carriers are concerned that they must incur substantial costs to comply with requirements that do not apply to USPS. At our request, DHL, FedEx, and UPS calculated that they incurred over $110 million in costs to comply with customs requirements in connection with delivering over 8 million parcels to Canada, Japan, and the United Kingdom in 1997. The carriers indicated that they incurred these costs in the following areas: (1) preparation, transmission, and submission of shipping documentation; (2) use of licensed customs brokers; (3) bonding or other security requirements; (4) customs clearance services outside of regular business hours; (5) records retention; (6) fines and penalties; (7) liability expenses; and (8) expenses related to the payment of duties, taxes, and fees in the three countries. We did not verify the cost data provided by the private express carriers or obtain data on the costs that USPS incurred to provide GPL service.
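Taken at face value, the carriers' figures above imply a rough average compliance cost per parcel. Both inputs are "over" amounts that were not independently verified, so this is a ballpark figure only:

```python
# Ballpark arithmetic from the carriers' self-reported figures: over
# $110 million in customs compliance costs for over 8 million parcels
# shipped to Canada, Japan, and the United Kingdom in 1997. Because both
# figures are unverified "over" amounts, the result is only indicative.
total_cost_usd = 110_000_000
parcels_shipped = 8_000_000
avg_cost_per_parcel = total_cost_usd / parcels_shipped
print(round(avg_cost_per_parcel, 2))  # 13.75
```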
We plan to evaluate the cost issues in a later review. In response to the carriers’ concerns about the costs of complying with customs clearance requirements, USPS officials said that they must incur costs for public service obligations that the private carriers do not, such as meeting their universal service delivery obligations. USPS’ universal service obligations include delivering mail to persons in all communities in the United States and delivering mail to the 189 member countries of the UPU. USPS also cited the costs of maintaining a universal delivery infrastructure, including a large number of unprofitable post offices, and offering customers uniform prices. An official from the Direct Marketing Association (DMA), which represents direct marketers who ship their goods overseas, said that DMA members want a choice of international carriers. In addition, the DMA official said that GPL serves as an important means of simplifying the shipment of goods internationally. The DMA official also said that the advantages enjoyed by GPL customers—low-cost shipping, a choice of delivery speeds, and automated customs clearance data—should also be available to private carriers’ customers. An official from the Mail Order Association said that GPL was essential to serving overseas markets and that his organization would like to see GPL expanded to additional countries. Moreover, officials from several GPL customers said that simplification of the customs process and lower shipping costs were the primary reasons they used GPL to ship internationally. USPS and the carriers also stressed the importance of competitive choices for shippers. Determining whether and how to make customs requirements more similar would involve considering the implications of any changes for the postal and customs services, private express carriers, businesses, and consumers.
From the perspective of customs services, these implications include the potential impact on their workload and efforts to implement simplified procedures that facilitate timely and cost-effective customs clearance while also allowing them to meet their law enforcement and revenue collection responsibilities. Also, postal services would want to ensure that they would be able to continue providing universal mail service. Implementation of the same requirements would need to address potential limitations due to lack of U.S. jurisdiction over importing requirements of other countries and existing international agreements. Depending upon the requirements and what types of competitive international postal products were to be treated the same as private express parcels, applying the same requirements to international postal and privately shipped parcels could affect the workload of postal and customs services worldwide, as well as individuals and businesses sending mail to the United States. If the same requirements were applied, one option would be to apply the private carriers’ requirements to USPS’ competitive international postal products; another option would be to apply USPS’ requirements to the private carriers. Currently, private express carriers must provide U.S. Customs with manifests and supporting documentation, such as invoices, on goods imported into the United States. According to USPS, requiring foreign postal services to provide manifests and supporting documentation on parcels being shipped to the United States under the first option could be a very burdensome task. In 1996, USPS received about 714 million pieces of incoming international mail, including about 4 million parcels, most of which were sent from household to household. USPS officials also said that most countries do not require that they be provided with manifests and supporting documentation on incoming international mail. Further, under the first option, U.S. 
Customs would need to determine the potential impact on its resource allocation if all inbound and outbound international postal parcels were subject to inspection, as are all private express carriers’ parcels. When we asked U.S. Customs officials what impact this could have on the agency’s workload, they said this issue had not yet been fully analyzed. In a recently issued GAO report, we said that U.S. Customs does not have an agencywide process for annually determining its need for inspectional personnel for all of its cargo operations and for allocating these personnel to commercial ports of entry. We recommended that such a process should include conducting annual assessments to determine the appropriate staffing levels for its operational activities related to processing cargo at commercial ports. Depending on the option selected and the terms of applying the same requirements on competitive international postal products and private express carrier shipments, the implications regarding the procedures for individuals and businesses who send mail to the United States are also a consideration. Currently, senders of postal parcels to the United States affix a simple customs declaration label to the parcels. If the same requirements were imposed on competitive international postal products incoming into the United States as currently apply to privately shipped imports, individuals and businesses in other countries could be required to provide additional data not currently provided on the customs declaration labels. This data could include harmonized tariff codes, as well as supporting documentation, such as invoices, for postal parcels sent to the United States. In providing comments on a draft of this report, the carriers said they would like to have “postal-like simplicity” applied to the customs treatment of privately transported low-value parcels. We did not receive views on the feasibility of this option during the course of our review. 
However, many of the same workload implications would need to be considered, such as how U.S. Customs could expeditiously clear private express parcels while applying the requirements that apply to postal parcels. As it plans to expand GPL service, USPS is engaged in discussions with the U.S. Customs Service regarding future incoming service. Issues being discussed include whether USPS would be required to present manifests of incoming GPL parcels and how duties and taxes would be paid. With regard to future incoming GPL service, USPS officials said that they were willing to comply with many of the same requirements that private express carriers must follow, such as preparing manifests and prepaying duties and taxes. However, they indicated that providing invoices would be unnecessary because CPAS data included parcel values. In addition, Postal Service officials did not believe that USPS, as a government entity, could be subject to the same liabilities and associated penalties as are private express carriers. USPS officials said that they would like to see their procedures for providing shipping data electronically and paying duties and taxes in Canada and the United Kingdom applied to GPL service in other countries, including Japan. U.S. Customs’ strategic plan for 1997 through 2002 indicates that the United States is experiencing a period of unprecedented growth in world trade and the value of trade imports is expected to double over the next 5 years. To handle the increased volume of trade, U.S. Customs is planning to expand the electronic transmission of data needed by Customs, as well as permit the electronic payment of duties and taxes, and take other measures. Costs and workload burdens are a concern to all parties. Therefore, efforts to find more efficient and cost-effective customs clearance procedures could benefit all parties. International organizations and national governments are attempting to simplify and standardize customs procedures worldwide. 
These international efforts are relevant to the debate over whether the same and which requirements should apply to GPL and private express parcels because, as explained in chapter 2, most of the requirements that apply to parcels exported from the United States are imposed by the importing countries. All three countries in our review have initiated or implemented procedures to speed the customs clearance process and reduce the paperwork burdens on the carriers for low-value imports. In 1993, Canada implemented its courier/low-value shipment program to streamline the reporting, release, and accounting procedures for certain goods valued at less than $1,600 Canadian. Under the program, couriers report the goods to Revenue Canada on a cargo/release list, which reduces paperwork burden by eliminating the need to present a separate manifest for each parcel. Revenue Canada said it usually receives the list before the goods arrive. Prior to the arrival of goods, customs inspectors are to review the cargo/release list and select any parcels to be examined. Customs entry and accounting documentation must be presented by the 24th day of the month following the month of release, and the duties and taxes must be paid by the last business day of that month. Revenue Canada said it verifies compliance with customs laws through periodic audits of importers and customs brokers and other checks. In 1996, Japan Customs began allowing shippers to declare imported shipments valued at no more than 100,000 yen without preparing an invoice as long as the shipper maintains import records. In addition, carriers have the option of calculating duties on imports into Japan that are valued at no more than 100,000 yen using single duty rates that are selected from six categories, eliminating the time needed to determine specific duty rates applicable to each item. Another new program in Japan allows shippers to clear some low-value imports on one air waybill. 
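The Canadian accounting and payment deadlines described above reduce to a small date calculation. The sketch below is an illustrative reading of the rules as summarized in this chapter—documentation due by the 24th of the month following release, payment due by the last business day of that month—assuming a Monday–Friday business week and ignoring statutory holidays; it is not Revenue Canada's actual implementation:

```python
from datetime import date, timedelta

def canada_deadlines(release: date) -> tuple[date, date]:
    """Given a release date under Canada's courier/low-value shipment
    program, return (accounting_deadline, payment_deadline):
    documentation by the 24th of the month following release, and
    payment by the last business day of that same month."""
    # First month following the release month.
    year, month = (release.year + 1, 1) if release.month == 12 else (release.year, release.month + 1)
    accounting_deadline = date(year, month, 24)
    # Last calendar day of that month (day before the 1st of the next month)...
    ny, nm = (year + 1, 1) if month == 12 else (year, month + 1)
    payment_deadline = date(ny, nm, 1) - timedelta(days=1)
    # ...stepped back to the last weekday (holidays not modeled here).
    while payment_deadline.weekday() >= 5:  # 5 = Saturday, 6 = Sunday
        payment_deadline -= timedelta(days=1)
    return accounting_deadline, payment_deadline
```

For a parcel released March 15, 1997, for example, documentation would be due April 24, 1997, and payment by the last business day of April 1997.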
We asked Japan Customs officials whether they would like USPS to provide certain shipping documentation on GPL parcels, such as harmonized tariff codes, as private carriers do. In providing comments on this report, Japan Customs indicated that it is preparing for the introduction of an import information system, with the cooperation of USPS, that is similar to that used in the United Kingdom. In April 1997, the United Kingdom initiated a pilot program to simplify customs clearance procedures for some types of private express shipments. H.M. Customs indicated that the program is aimed at reducing paperwork requirements by allowing electronic submission of customs declarations. In addition, under the simplified procedures, carriers may pay duties and taxes the following month. According to the carriers operating in the United Kingdom, the new program has reduced customs clearance times for qualifying imports from about 2 hours to immediate clearance upon arrival. At the international level, organizations such as the World Customs Organization are examining the issue of simplifying customs procedures worldwide. Similarly, a project to standardize and simplify customs procedures, scheduled to be completed in 1998, was initiated at the 1996 G-7 summit in Lyon, France. Two key proposals raised at these international forums to streamline customs procedures included (1) reducing paperwork requirements on imported goods and (2) increasing the dutiable de minimis in various countries. In the three countries in our review, imported nondutiable goods were subject to reduced paperwork requirements for customs clearance. In U.S. dollars, the dutiable de minimis was the equivalent of about $14 in Canada, about $30 in the United Kingdom, about $80 in Japan, and $200 in the United States. According to U.S. Embassy officials in Japan, increasing the dutiable de minimis on imported goods has been the subject of ongoing negotiations between the United States and Japan.
One carrier said that raising the dutiable de minimis in Japan from 10,000 yen to 30,000 yen, for example, would increase its nondutiable imports from 40 to 80 percent. Guidelines issued by the International Chamber of Commerce in 1996 regarding best practices recommended that customs services regularly review dutiable de minimis levels to take into account such factors as inflation. The ability of the United States to apply the same international customs requirements to both USPS and the private carriers may have some limitations, due to the lack of U.S. jurisdiction over importing requirements imposed by foreign governments and potential conflicts with current international agreements on customs clearance. For example, a UPU agreement prescribes specific procedures for member postal services regarding customs declarations on postal parcels. These procedures differ from the customs procedures that the private carriers are required to follow. Further, the UPU agreement provides that “postal administrations shall accept no liability for customs declarations in whatever form these are made or for decisions taken by the Customs on examination of parcels submitted to customs control.” If USPS were subject to the same requirements as the private carriers, this provision of the UPU agreement could conflict with a requirement to subject USPS to liabilities for customs declarations. Efforts to apply similar customs requirements may require bilateral or multilateral agreements. The private express industry has commented that it wants Congress to establish a “level playing field” with USPS in providing international parcel delivery service by applying the same customs requirements on USPS and the carriers. Issues related to fair competition involve weighing how USPS and its private sector competitors can compete, given that different sets of requirements and obligations currently exist. 
The potential implications of whether to apply the same requirements, under what terms, and how to implement the same requirements for both USPS and the carriers may include a number of factors, including those raised by the U.S. and foreign postal and customs services, private express carriers, shippers, and consumers. USPS officials noted that they incur costs that the private carriers do not, such as meeting their obligations to provide delivery service to persons in all communities of the United States and to UPU member countries. The carriers noted the benefits that simplification of customs formalities for low value shipments could have for all international commerce. Moreover, businesses that ship their goods internationally stressed their need to have competitive choices that provide alternatives in the cost and speed of international shipping services for consumers. In urging that the same international customs clearance requirements should be applied to USPS and the private carriers, the carriers have raised fundamental questions about the fairness of competing with a government entity that is providing a businesslike service. The carriers believe that competing with a government entity that is subject to fewer customs requirements and lower associated costs distorts the competitive marketplace. However, depending upon what types of competitive international postal products would be subject to the same requirements, postal services are concerned that requiring USPS and the private carriers to follow the same requirements could affect the simplified process that was intended for mail sent from household to household internationally. Another consideration is the potential impact on shippers, such as the direct marketing industry, who want to have a choice of different types, costs, and speeds of delivery services to respond to their customers’ demands for their goods. 
Determining how to make customs requirements the same would involve several considerations. Changes in U.S. law by themselves would not equalize customs treatment for postal and private express parcels under foreign law. Bilateral or multilateral agreements with other countries may also be necessary. Further, additional analysis would be needed to determine whether making customs requirements the same would conflict with current international agreements, such as those involving UPU service obligations, and if such changes would impose additional workload burdens on postal and customs services worldwide. With respect to U.S. law, opportunities may exist to change customs treatment of parcels imported into the United States. Negotiations between USPS and the U.S. Customs Service regarding the treatment of future GPL service incoming to the United States involve discussions of such issues as manifesting requirements and payment of duties and taxes. Moreover, in considering what requirements might be appropriate, additional opportunities may exist to build on national and international proposals discussed earlier in this chapter to simplify and expedite customs clearance procedures worldwide. Such opportunities include reducing paperwork and increasing the dutiable de minimis, which could benefit both USPS and the private express carriers. In its comments, ACCA said that this chapter did not develop a sound and objective basis for evaluating the policy implications of the differences in customs treatment. It appears that ACCA may have misinterpreted our discussion of several issues in this chapter. We did not intend to take a position on the policy issues that are discussed in this chapter or make assumptions about the implications of changes in policy. 
Rather, our intent was to identify some of the key issues that are being considered by policymakers in Congress and that were raised during our review to provide some perspective on the significance of the issues related to differences in customs treatment. We modified this chapter to address ACCA’s specific concerns. ACCA raised four primary areas of concern about this chapter. First, it said our report implied that the principle of equal application of the customs laws, as advocated by the carriers, could result in adverse consequences, such as eliminating the simplified clearance process that currently benefits U.S. shippers. In discussing the potential implications of the principle of equal application of customs laws, we were not taking a position on whether existing requirements or a change in policy would be desirable. Thus, we changed our discussion to make this clear where appropriate. Second, ACCA said that our report assumed that a U.S. policy of equal application of the customs laws could lead foreign customs authorities to subject all postal shipments—or at least all GPL parcels—to the customs procedures now applied to privately carried shipments. In this report, we discussed options that were raised during the course of our review; however, other options could be considered. One of the possibilities raised in draft legislation was to apply the customs requirements for private carriers to all competitive international postal products. ACCA indicated in its comments that foreign governments would discover strong incentives to extend simplified customs procedures to all U.S. direct marketing shipments tendered by all U.S. carriers, provided that the carriers tender the shipments in the same manner now employed by USPS. In providing informal comments on a draft of this report, H.M. 
Customs said that simplified procedures already could be used by anyone fulfilling its requirements, but that customs inspectors would need shipment data in advance, or at least at the time of importation, for inspection purposes. In addition, the World Customs Organization is currently reviewing this issue. Third, ACCA was concerned that by presenting USPS’ views on its universal service obligations, our report helped to militate against a U.S. policy of nondiscrimination in customs treatment for U.S. carriers. Although we noted both USPS’ stated service obligations and the carriers’ business choices, we did not take positions on their respective arguments. We sought to provide a fair and balanced presentation of the often conflicting interests and opinions associated with this issue. Finally, ACCA said that our report exaggerated the legal difficulties associated with implementing a U.S. policy requiring equal application of customs procedures to U.S.-based carriers. We disagree. We only pointed out that potential limitations may exist in applying equal customs requirements, including the lack of U.S. jurisdiction over foreign customs laws. We presented these points not as obstacles, but as legal considerations for implementing such policies.
|
Pursuant to a congressional request, GAO reviewed the United States Postal Service's (USPS) Global Package Link (GPL) service, focusing on whether differences existed in customs treatment for GPL and private express carrier parcels by foreign customs services in Canada, Japan, and the United Kingdom. GAO noted that: (1) the delivery and customs clearance processes for GPL and private express parcels in Canada, Japan, and the United Kingdom were based primarily on the domestic import requirements applicable to mail and parcels imported by private carriers in those countries; (2) all three countries had separate customs clearance processes and requirements for mail and parcels imported by private express carriers; (3) under U.S. law, the private express carriers were required to submit their parcels to U.S. Customs for inspection prior to export, but USPS was not subject to this requirement for its outbound parcels; (4) differences in foreign customs treatment of GPL and private express parcels were greatest in Japan, where private express carriers were subject to requirements regarding the preparation of shipping documentation and payment of duties and taxes on their parcels that did not apply to GPL parcels; (5) in the United Kingdom, USPS was providing certain shipping data to the Customs Service on GPL parcels that was similar to the information that carriers were required to provide; (6) in Canada, GPL and private express parcels were subject to the same requirements because GPL parcels were being delivered for USPS by a private express carrier there; (7) regarding two major areas of concern to the carriers, GAO found no evidence that GPL parcels received preferential treatment over private express parcels in terms of: (a) the speed of customs clearance in any of the three countries; or (b) the assessment of duties and taxes in Canada and the United Kingdom; (8) on behalf of individual importers, USPS was paying duties and taxes on GPL parcels shipped to Canada 
and the United Kingdom; (9) GAO was unable to determine whether duties and taxes were assessed on dutiable GPL parcels shipped to Japan because: (a) USPS did not have records on payment of duties and taxes on GPL parcels shipped to Japan, because the recipients of postal parcels in Japan are responsible for paying applicable duties and taxes; and (b) Japan Customs did not provide statistics on the amount of duties and taxes that recipients paid on GPL parcels; (10) GAO found that the private express carriers followed similar delivery and customs clearance processes for parcels shipped from the United States to the three countries in its review; and (11) the private express industry has commented that differences in customs clearance requirements for postal and privately shipped parcels result in more work and higher costs for the carriers, placing them at a disadvantage in competing with USPS to provide international parcel delivery service.
|
As computer technology has advanced, federal agencies have become dependent on computerized information systems to carry out their operations and to process, maintain, and report essential information. Virtually all federal operations are supported by automated systems and electronic data, and agencies would find it difficult, if not impossible, to carry out their missions without these information assets. Information security is thus critically important. Conversely, ineffective information security controls can result in significant risks. Examples of such risks include the following: Resources, such as federal payments and collections, could be lost or stolen. Sensitive information, such as national security information, taxpayer data, Social Security records, medical records, and proprietary business information, could be inappropriately accessed and used for identity theft or espionage. Critical operations, such as those supporting critical infrastructure, national defense, and emergency services could be disrupted. Agency missions could be undermined by embarrassing incidents that result in diminished confidence in the ability of federal organizations to conduct operations and fulfill their responsibilities. Threats to federal information systems and cyber-based critical infrastructures are evolving and growing. Government officials are concerned about attacks from individuals and groups with malicious intent, such as criminals, terrorists, and foreign nations. Federal law enforcement and intelligence agencies have identified multiple sources of threats to our nation’s critical information systems, including foreign nations engaged in espionage and information warfare, criminals, hackers, virus writers, and disgruntled employees and contractors. These groups and individuals have a variety of attack techniques at their disposal. 
Furthermore, as we have previously reported, the techniques have characteristics that can vastly enhance the reach and impact of their actions, such as the following: Attackers do not need to be physically close to their targets to perpetrate a cyber attack. Technology allows actions to easily cross multiple state and national borders. Attacks can be carried out automatically, at high speed, and by attacking a vast number of victims at the same time. Attackers can easily remain anonymous. The connectivity between information systems, the Internet, and other infrastructures creates opportunities for attackers to disrupt telecommunications, electrical power, and other critical services. As government, private sector, and personal activities continue to move to networked operations, the threat will continue to grow. Consistent with the evolving and growing nature of the threats to federal systems, agencies are reporting an increasing number of security incidents. These incidents put sensitive information at risk. Personally identifiable information about U.S. citizens has been lost, stolen, or improperly disclosed, thereby potentially exposing those individuals to loss of privacy, identity theft, and financial crimes. Reported attacks and unintentional incidents involving critical infrastructure systems demonstrate that a serious attack could be devastating. Agencies have experienced a wide range of incidents involving data loss or theft, computer intrusions, and privacy breaches, underscoring the need for improved security practices. When incidents occur, agencies are to notify the Department of Homeland Security’s (DHS) federal information security incident center—the United States Computer Emergency Readiness Team (US-CERT). 
As shown in figure 1, the number of incidents reported by federal agencies to US-CERT has increased dramatically over the past 4 years, from 5,503 incidents reported in fiscal year 2006 to about 30,000 incidents in fiscal year 2009 (over a 400 percent increase). The four most prevalent types of incidents and events reported to US-CERT during fiscal year 2009 were: (1) malicious code (software that infects an operating system or application), (2) improper usage (a violation of acceptable computing use policies), (3) unauthorized access (where an individual gains logical or physical access to a system without permission), and (4) investigation (unconfirmed incidents that are potentially malicious or anomalous activity deemed by the reporting entity to warrant further review). The growing threats and increasing number of reported incidents highlight the need for effective information security policies and practices. However, serious and widespread information security control deficiencies continue to place federal assets at risk of inadvertent or deliberate misuse, financial information at risk of unauthorized modification or destruction, sensitive information at risk of inappropriate disclosure, and critical operations at risk of disruption. GAO has designated information security as a high-risk area in the federal government since 1997. In their fiscal year 2009 performance and accountability reports, 21 of 24 major federal agencies noted that inadequate information system controls over their financial systems and information were either a material weakness or a significant deficiency. Similarly, our audits have identified control deficiencies in both financial and nonfinancial systems, including vulnerabilities in critical federal systems.
For example, we reported in September 2008 that, although the Los Alamos National Laboratory—one of the nation’s weapons laboratories—implemented measures to enhance the information security of its unclassified network, vulnerabilities continued to exist in several critical areas. Similarly, in October 2009 we reported that the National Aeronautics and Space Administration (NASA)—the civilian agency that oversees U.S. aeronautical and space activities—had not always implemented appropriate controls to sufficiently protect the confidentiality, integrity, and availability of the information and systems supporting its mission directorates. Over the past several years, we and agency inspectors general have made hundreds of recommendations to agencies for actions necessary to resolve prior significant control deficiencies and information security program shortfalls. For example, we recommended that agencies correct specific information security deficiencies related to user identification and authentication, authorization, boundary protections, cryptography, audit and monitoring, physical security, configuration management, segregation of duties, and contingency planning. We have also recommended that agencies fully implement comprehensive, agencywide information security programs by correcting weaknesses in risk assessments, information security policies and procedures, security planning, security training, system tests and evaluations, and remedial actions. The effective implementation of these recommendations will strengthen the security posture at these agencies. Agencies have implemented or are in the process of implementing many of our recommendations. In addition, the White House, OMB, and certain federal agencies have undertaken several governmentwide initiatives that are intended to enhance information security at federal agencies. 
However, these initiatives face challenges that require sustained attention: Comprehensive National Cybersecurity Initiative (CNCI): In January 2008, President Bush initiated a series of 12 projects aimed primarily at improving the Department of Homeland Security’s (DHS) and other federal agencies’ efforts to protect against intrusion attempts and anticipate future threats. The initiative is intended to reduce vulnerabilities, protect against intrusions, and anticipate future threats against federal executive branch information systems. As we recently reported, the White House and federal agencies have established interagency groups to plan and coordinate CNCI activities. However, the initiative faces challenges in achieving its objectives related to securing federal information, including better defining agency roles and responsibilities, establishing measures of effectiveness, and establishing an appropriate level of transparency. Until these challenges are adequately addressed, there is a risk that CNCI will not fully achieve its goals. Federal Desktop Core Configuration (FDCC): For this initiative, OMB directed agencies that have workstations with Windows XP and/or Windows Vista operating systems to adopt security configurations developed by the National Institute of Standards and Technology, the Department of Defense, and DHS. The goal of this initiative is to improve information security and reduce overall information technology operating costs. We recently reported that while agencies have taken actions to implement FDCC requirements, none of the agencies has fully implemented all configuration settings on their applicable workstations. In our report we recommended that OMB, among other things, issue guidance on assessing the risks of agencies having deviations from the approved settings and monitoring compliance with FDCC. 
Einstein: This is a computer network intrusion detection system that analyzes network flow information from participating federal agencies and is intended to provide a high-level perspective from which to observe potential malicious activity in computer network traffic. We recently reported that as of September 2009, fewer than half of the 23 agencies reviewed had executed the required agreements with DHS, and Einstein 2 had been deployed to 6 agencies. Agencies that participated in Einstein 1 cited improved identification of incidents and mitigation of attacks, but determining whether the initiative is meeting its objectives will likely remain difficult because DHS lacks performance measures that address how agencies respond to alerts. Trusted Internet Connections (TIC) Initiative: This is an effort designed to optimize individual agency network services through a common solution for the federal government. The initiative is to facilitate the reduction of external connections, including Internet points of presence. We recently reported that none of the 23 agencies we reviewed met all of the requirements of the TIC initiative, and most agencies experienced delays in their plans for reducing and consolidating connections. However, most agencies reported that they have made progress toward reducing and consolidating their external connections and implementing security capabilities. Critical infrastructures are systems and assets, whether physical or virtual, so vital to the nation that their incapacity or destruction would have a debilitating impact on national security, national economic security, national public health or safety, or any combination of those matters. 
Federal policy established 18 critical infrastructure sectors: agriculture and food; banking and finance; chemical; commercial facilities; communications; critical manufacturing; dams; defense industrial base; emergency services; energy; government facilities; information technology; national monuments and icons; nuclear reactors, materials and waste; postal and shipping; public health and health care; transportation systems; and water.

We previously reported that DHS’s United States Computer Emergency Readiness Team (US-CERT) had not fully addressed 15 key attributes of cyber analysis and warning capabilities related to (1) monitoring network activity to detect anomalies, (2) analyzing information and investigating anomalies to determine whether they are threats, (3) warning appropriate officials with timely and actionable threat and mitigation information, and (4) responding to the threat. For example, US-CERT provided warnings by developing and distributing a wide array of notifications; however, these notifications were not consistently actionable or timely. As a result, we recommended that the department address shortfalls associated with the 15 attributes in order to fully establish a national cyber analysis and warning capability as envisioned in the national strategy. DHS agreed in large part with our recommendations and has reported that it is taking steps to implement them.

Similarly, in September 2008, we reported that since conducting a major cyber attack exercise, called Cyber Storm, DHS had demonstrated progress in addressing eight lessons it had learned from the exercise. However, its actions to address the lessons had not been fully implemented. Specifically, while it had completed 42 of the 66 activities identified, the department had identified 16 activities as ongoing and 7 as planned for the future. Consequently, we recommended that DHS schedule and complete all of the corrective activities identified in order to strengthen coordination between public and private sector participants in response to significant cyber incidents. DHS concurred with our recommendation. Since that time, DHS has continued to make progress in completing some identified activities but has yet to do so for others.
Because the threats to federal information systems and critical infrastructure have persisted and grown, efforts have recently been undertaken by the executive branch to review the nation’s cybersecurity strategy. In February 2009, President Obama directed the National Security Council and Homeland Security Council to conduct a comprehensive review to assess the United States’ cybersecurity-related policies and structures. The resulting report, Cyberspace Policy Review: Assuring a Trusted and Resilient Information and Communications Infrastructure, recommended, among other things, appointing an official in the White House to coordinate the nation’s cybersecurity policies and activities, creating a new national cybersecurity strategy, and developing a framework for cyber research and development. In response to one of these actions, the president appointed a cybersecurity coordinator in December 2009. We recently initiated a review to assess the progress made by the executive branch in implementing the report’s recommendations. We also testified in March 2009 on needed improvements to the nation’s cybersecurity strategy. In preparation for that testimony, we obtained the views of experts (by means of panel discussions) on critical aspects of the strategy, including areas for improvement. The experts, who included former federal officials, academics, and private sector executives, highlighted 12 key improvements that are, in their view, essential to improving the strategy and our national cybersecurity posture. The key strategy improvements identified by cybersecurity experts are listed in table 2. These recommended improvements to the national strategy are in large part consistent with our previous reports and extensive research and experience in this area. Until they are addressed, our nation’s most critical federal and private sector cyber infrastructure remains at unnecessary risk of attack from our adversaries.
In summary, the threats to federal information systems are evolving and growing, and federal systems are not sufficiently protected to consistently thwart the threats. Unintended incidents and attacks from individuals and groups with malicious intent have the potential to cause significant damage to the ability of agencies to effectively perform their missions, deliver services to constituents, and account for their resources. To help in meeting these threats, opportunities exist to improve information security throughout the federal government. The prompt and effective implementation of the hundreds of recommendations by us and by agency inspectors general to mitigate information security control deficiencies and fully implement agencywide security programs would strengthen the protection of federal information systems, as would DHS’s development of better capabilities to meet its responsibilities and the implementation of recommended improvements to the national cybersecurity strategy. Until agencies fully and effectively implement these recommendations, federal information and systems will remain vulnerable.

Mr. Chairman, this completes my prepared statement. I would be happy to answer any questions you or other Members of the Committee have at this time. If you have any questions regarding this statement, please contact Gregory C. Wilshusen at (202) 512-6244 or [email protected]. Other key contributors to this statement include John de Ferrari (Assistant Director), Michael Gilmore (Assistant Director), Anjalique Lawrence (Assistant Director), Marisol Cruz, Nick Marinos, Lee McCracken, and David Plocher.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO.
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Pervasive and sustained cyber attacks continue to pose a potentially devastating threat to the systems and operations of the federal government. In recent testimony, the Director of National Intelligence highlighted that many nation-states, terrorist networks, and organized criminal groups have the capability to target elements of the U.S. information infrastructure for intelligence collection, intellectual property theft, or disruption. In July 2009, press accounts reported attacks on Web sites operated by major government agencies. The ever-increasing dependence of federal agencies on information systems to carry out essential, everyday operations can make them vulnerable to an array of cyber-based risks. Thus, it is increasingly important that the federal government carry out a concerted effort to safeguard its systems and the information they contain. GAO is providing a statement describing (1) cyber threats to federal information systems and cyber-based critical infrastructures, (2) control deficiencies that make federal systems vulnerable to those threats, and (3) opportunities that exist for improving federal cybersecurity. In preparing this statement, GAO relied on its previously published work in this area. Cyber-based threats to federal systems and critical infrastructure are evolving and growing. These threats can come from a variety of sources, including criminals and foreign nations, as well as hackers and disgruntled employees. These potential attackers have a variety of techniques at their disposal, which can vastly enhance the reach and impact of their actions. For example, cyber attackers do not need to be physically close to their targets, their attacks can easily cross state and national borders, and cyber attackers can easily preserve their anonymity. Further, the interconnectivity between information systems, the Internet, and other infrastructure presents increasing opportunities for such attacks.
Consistent with this, reports of security incidents from federal agencies are on the rise, increasing by over 400 percent from fiscal year 2006 to fiscal year 2009. Compounding the growing number and kinds of threats, GAO, along with agencies' internal assessments, has identified significant deficiencies in the security controls on federal information systems, which have resulted in pervasive vulnerabilities. These include weaknesses in the security of both financial and non-financial systems and information, including vulnerabilities in critical federal systems. These deficiencies continue to place federal assets at risk of inadvertent or deliberate misuse, financial information at risk of unauthorized modification or destruction, and critical operations at risk of disruption. Multiple opportunities exist to improve federal cybersecurity. To address identified deficiencies in agencies' security controls and shortfalls in their information security programs, GAO and agency inspectors general have made hundreds of recommendations over the past several years, many of which agencies are implementing. In addition, the White House, the Office of Management and Budget, and certain federal agencies have undertaken several governmentwide initiatives intended to enhance information security at federal agencies. While progress has been made on these initiatives, they all face challenges that require sustained attention, and GAO has made several recommendations for improving the implementation and effectiveness of these initiatives. Further, the Department of Homeland Security needs to fulfill its key cybersecurity responsibilities, such as developing capabilities for ensuring the protection of cyber-based critical infrastructures and implementing lessons learned from a major cyber simulation exercise. Finally, a GAO-convened panel of experts has made several recommendations for improving the nation's cybersecurity strategy.
Realizing these opportunities for improvement can help ensure that the federal government's systems, information, and critical cyber-based infrastructures are effectively protected.
MSPB is an independent, quasijudicial executive agency created by the Civil Service Reform Act of 1978. Its mission is to ensure that (1) federal employees are protected against abuses by their agencies’ management, (2) executive branch agencies make employment decisions in accordance with merit system principles, and (3) federal merit systems are kept free of prohibited personnel practices. In large part, MSPB is to pursue its mission by hearing and deciding appeals by federal employees of actions taken against them by their agencies. Initially, an employee files his or her appeal at one of MSPB’s regional or field offices. The appeal is then to be heard and decided by an administrative judge. The decision becomes final in 35 days unless one of the parties to the appeal files a petition for review to the board members or MSPB reopens the appeal on its own motion. In such cases, the petition for review is to be processed at MSPB headquarters. The board members’ decision constitutes the final administrative action unless the appeal involves allegations of discrimination, in which case the Equal Employment Opportunity Commission may become involved. An employee who is dissatisfied with the board members’ final decision may appeal it to the U.S. Court of Appeals for the Federal Circuit or, if allegations of discrimination are involved, bring the case before a U.S. district court. MSPB’s three-member bipartisan Board consists of a chairman, a vice chairman, and a member, all of whom are appointed by the president with the advice and consent of the Senate to serve overlapping, nonrenewable 7-year terms. MSPB’s fiscal year 1994 appropriation was $24.7 million. As of the end of fiscal year 1994, it had 289 employees. Under Public Law 103-424, signed by the president on October 29, 1994, MSPB was reauthorized for a period of 3 years. 
To assess whether MSPB is accomplishing its statutory mission through the appeals process in a fair and timely manner, we developed a mail questionnaire to obtain the views of practitioners who had represented federal employees or agencies before MSPB during the 2-year period ending September 1993 (see app. I). The results of our survey can be projected to the populations from which the survey respondents were selected. We analyzed MSPB’s case processing performance reports to determine whether MSPB met its case processing guidelines at the regional and headquarters levels in fiscal years 1991 through 1994. Additionally, we analyzed data on the extent to which MSPB’s final decisions were appealed to the U.S. Court of Appeals for the Federal Circuit and the court’s disposition of these appeals. To determine what accountability mechanisms MSPB had in place to provide its employees the merit system protections that MSPB was created to uphold, we reviewed the agency’s EEO and internal oversight activities designed to protect its employees against workplace discrimination, mismanagement, abuse, and improper personnel practices. In doing so, we reviewed MSPB’s EEO policy, procedures, and processes and compared its provisions for internal oversight to those of 10 other federal entities that are roughly comparable to MSPB in budget and staff size. Further, we developed and mailed a questionnaire to all MSPB employees as of May 12, 1994, asking for their views on selected aspects of the agency’s EEO operations and internal oversight activities (see app. II). To determine what actions MSPB had taken to foster a work environment that is based on trust, respect, and fairness, we interviewed MSPB’s Chief Operating Officer, EEO director, and members of a task force charged with proposing actions for implementing MSPB’s 1992 vision statement. 
Further, our MSPB employee questionnaire asked employees for their views on how successful the agency had been in fostering such an environment (see app. II). We compared MSPB employees’ views of their work environment to those reported by federal employees in a 1992 OPM governmentwide survey. More information on our scope and methodology is presented in appendix III. MSPB’s chairman provided us written comments on a draft of this report by letter dated July 14, 1995. A summary of his comments is presented on page 19, and they are reprinted in their entirety in appendix VI. We did our review between November 1993 and January 1995. It was done in the Washington, D.C., area in accordance with generally accepted government auditing standards. In each of the practitioner groups we surveyed, the percentage of respondents who expressed the opinion that MSPB has been successful in accomplishing its mission was higher than the percentage who indicated that it has not been successful. As shown in table 1, the percentages varied among the practitioner groups. For example, the responses from agency general counsels were the most favorable: 89 percent of the general counsels responding to our survey believed MSPB has been very or generally successful in accomplishing its mission; none of these respondents believed MSPB has been very or generally unsuccessful. By comparison, the responses from union officials were the least favorable: 45 percent of the union officials expressing an opinion indicated that MSPB has been very or generally successful in accomplishing its mission, while 23 percent indicated that it has been very or generally unsuccessful. The responses from the practitioner groups reflect a general view that MSPB has been fair in processing federal employee appeals at the regional and headquarters levels.
As shown in table 2, the percentage of respondents who indicated that MSPB’s appellate process at the regional level was almost always or generally fair ranged from a high of 93 percent among agency general counsels to a low of 59 percent among union officials. The percentage of respondents who viewed the process at the regional level as very or generally unfair ranged from a low of 0 percent among agency general counsels and employee and labor-management relations representatives to a high of 26 percent among union officials. As for the fairness of MSPB’s appellate process at the headquarters level, table 3 shows that the percentage of respondents indicating that the process was almost always or generally fair ranged from a high of 95 percent among employee and labor-management relations representatives to a low of 46 percent among private attorneys. The percentage of respondents indicating that the appellate process at the headquarters level was very or generally unfair ranged from a low of 0 percent among agency general counsels to a high of 29 percent among private attorneys. Another measure of MSPB’s fairness in processing employee appeals is the rate at which the U.S. Court of Appeals for the Federal Circuit has affirmed MSPB’s final decisions. As shown in table 4, during the 4-year period ending September 1994, 1,422 MSPB final decisions were appealed to the court and adjudicated on the merits. Of that amount, 1,287 cases (91 percent) were affirmed, while 116 cases (8 percent) were either reversed (3 percent) or remanded (5 percent) to MSPB for further processing. According to a recognized expert on the administrative redress system for federal employees, the affirmation rate of MSPB’s decisions by the U.S. Court of Appeals for the Federal Circuit is much higher than the rate at which other federal circuits affirm the decisions of other federal administrative tribunals. 
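The affirmation and reversal rates quoted above follow directly from the case counts in the passage; as a quick arithmetic check:

```python
# Check the court-affirmation arithmetic cited above,
# using the figures given in the text.
total_adjudicated = 1422       # MSPB final decisions adjudicated on the merits
affirmed = 1287
reversed_or_remanded = 116

def pct(part, whole):
    """Percent of the whole, rounded to the nearest whole number."""
    return round(part / whole * 100)

print(pct(affirmed, total_adjudicated))              # → 91
print(pct(reversed_or_remanded, total_adjudicated))  # → 8
```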
Guidelines for the length of time MSPB should take to process cases differ at the regional and headquarters levels. Guidelines at the regional level stipulate that cases be processed in no more than 120 days. This guideline was imposed by MSPB itself and is identical to the time limit in the statute (5 U.S.C. 7702(a)(1)) requiring MSPB to decide appeals that involve allegations of discrimination within 120 days of the filing of the appeal. At the headquarters level, cases are to be processed within 110 days. This guideline also is self-imposed; the chairman explained that the 110-day guideline at the headquarters level represents a goal to strive toward rather than a hard and fast requirement for processing petitions for review (PFR). Further, MSPB’s written policy states that “it will attempt to complete action on petitions for review of initial decisions within 110 days.” Over the 4-year period ending September 1994, the regional offices took an average of 78 days to process initial employee appeals and almost always met their 120-day case processing guideline. Most survey respondents were either very satisfied or generally satisfied with the actual amount of time MSPB took to process cases at the regional level (see app. I, question 22). Satisfaction with actual processing time ranged from 87 percent for employee and labor-management relations representatives to 67 percent for agency general counsels. Dissatisfaction with processing time among the five practitioner groups ranged from a high of 18 percent for agency general counsels to a low of 7 percent for agency attorneys and representatives. As shown in table 5, of the initial appeals processed at the regional office level in fiscal years 1991 through 1994, 3 percent, on average, were not processed within the 120-day guideline. During the same 4-year period, headquarters took an average of 170 days to process PFR cases. 
Although headquarters met its guideline less often than the regional offices met theirs, its processing time had been improving until fiscal year 1994. Most survey respondents were very or generally satisfied with the actual processing time at headquarters (see app. I, question 26). Satisfaction with actual processing time ranged from 68 percent for the agency attorneys and representatives to 44 percent for agency general counsels. Dissatisfaction among the five practitioner groups with actual processing time at the headquarters level ranged from a high of 31 percent for private attorneys and agency general counsels to a low of 20 percent for agency attorneys and representatives. As shown in table 6, the percentage of PFR cases processed within the 110-day guideline at the headquarters level over the 4-year period steadily increased from 52 percent in fiscal year 1991 to 78 percent in fiscal year 1993, but it dropped to 61 percent in fiscal year 1994. According to an MSPB headquarters official, the decline during 1994 in the percentage of PFR cases decided within 110 days was caused by an increase in workload due to Postal Service reorganization cases. Also, the chairman cited two additional reasons for the drop in the fiscal year 1994 PFR processing time: a conscious effort by headquarters to reduce the backlog of PFR cases over 1 year old and a decrease in staff occurring simultaneously with an increase in PFR caseload. In explaining the generally longer headquarters processing times, an MSPB official cited the complexity of cases reaching the PFR stage and the fact that individual cases may have been kept pending for various reasons, such as related cases being held to allow MSPB to decide a lead case. Also, cases have been held because they involved issues under consideration by the courts or because the contending parties failed to provide necessary information within the 110-day period. 
Further, according to the MSPB official, complex cases occasionally take more than 110 days because they require more time for research, analysis, and drafting, and for a majority of the board members to agree on a decision. MSPB recently expanded its commitment to improving the appellate process by further encouraging the settlement of appeals. In June 1994, in accordance with National Performance Review (NPR) recommendations encouraging the use of alternative dispute resolution techniques, MSPB established a program to help parties resolve their disputes at the PFR level. From June 1994 through the end of September 1994, MSPB headquarters settlement attorneys attempted to settle 52 cases and succeeded in settling 17 cases—a success rate of 33 percent. MSPB is considering doing a comprehensive assessment of the PFR settlement program after more time has passed. MSPB has established accountability mechanisms in the form of policies, procedures, and processes to protect its employees against workplace discrimination, mismanagement, abuse, and improper personnel practices. MSPB has established an EEO policy and has taken various actions to implement it. The agency also has established new internal oversight arrangements in lieu of the nonstatutory OIG it abolished in February 1994. Despite these actions, a substantial number of employees expressed concerns about participating in the processes for handling EEO complaints and for reporting allegations of wrongdoing. MSPB’s EEO policy is to provide equal opportunity to all persons and to prohibit discrimination because of race, color, sex, age, religion, national origin, or handicapping condition. 
In addition, its written policy prohibits reprisals against individuals who file a discrimination complaint; testify, assist, or participate in any manner in an investigation, proceeding, or hearing; or oppose a practice prohibited by Title VII of the Civil Rights Act, the Age Discrimination in Employment Act, the Equal Pay Act, or the Rehabilitation Act. To facilitate implementation of its EEO policy, MSPB has (1) provided EEO training opportunities to supervisors, managers, and EEO staff; (2) evaluated supervisors and managers and rewarded some for their EEO performance; and (3) taken steps to inform employees about the EEO complaint process and their rights under it. MSPB’s EEO manual, distributed in April 1994, states that “as part of its commitment to equal opportunity in all management actions and decisions, training will be given to managers, supervisors and other personnel as a way of furthering EEO objectives.” According to an MSPB official, all office directors, managers, and supervisors are responsible for identifying and obtaining the necessary EEO training for themselves, and the EEO director is responsible for ensuring that EEO counselors are knowledgeable of current federal EEO rules and regulations. MSPB training data provided to us showed that during the 2-year period ending September 1993, MSPB managers and supervisors were provided opportunities for training and were encouraged to attend various internal and external EEO training courses. The MSPB training data also showed that 10 of MSPB’s 21 EEO staff, which consists of 2 full-time staff and 19 collateral-duty staff (mostly EEO counselors), received some type of EEO training during the 2-year period. In November 1993, the chairman informed MSPB employees of his strong commitment to diversity and equitable treatment in the workplace.
To demonstrate his commitment, he established a new performance element under which to review and evaluate employees’ performance in valuing and respecting diversity in the workplace. An MSPB official told us that top management takes its commitment to EEO very seriously, as demonstrated by the fact that only one MSPB manager has received an outstanding rating under the new performance element. MSPB’s senior executives and other managers and supervisory employees are also evaluated on their adherence to EEO and affirmative action principles under the performance element on human resources management. To reward EEO performance, MSPB established the Chairman’s Award for Excellence in EEO. An employee is eligible for this award under eight criteria, one of them being whether his or her leadership in achieving EEO goals and objectives serves as a model for others. Since October 1990, three MSPB employees have received the award—one manager and one nonsupervisory employee in January 1992 and another manager in November 1994. MSPB’s EEO director told us that MSPB has taken various actions to communicate MSPB’s EEO policy, program, and complaint process to employees and to make them aware of their EEO rights. For example, offices and employees were provided information on the names and locations of the agency’s EEO counselors and special emphasis program managers and on the circumstances under which EEO counseling should be sought. Also, in April 1994, MSPB’s EEO manual was distributed to all MSPB offices for their employees’ reference and use. As shown in table 7, most of the survey respondents indicated that they had received or seen posted information that described MSPB’s EEO program, their EEO rights, and procedures for contacting MSPB’s EEO counselors; less than one-half of the respondents indicated that they had received or seen posted information that described the details of the EEO complaint process. 
MSPB abolished its nonstatutory OIG in February 1994 because of concerns about its effectiveness and efficiency and in accordance with NPR’s governmentwide goal of streamlining agency operations. Since then, MSPB has assigned the former OIG’s internal oversight functions to its OGC. Also, to provide its employees with an avenue for reporting their concerns should they become aware of mismanagement, abuse, or improper personnel practices, MSPB has entered into an oral agreement with USDA’s Inspector General for hotline and investigative services. MSPB’s OGC is responsible for planning and directing internal audits of MSPB’s programs and operations. However, OGC is to contract for the actual performance of the audits, either with other agencies through interagency agreements or with private firms. MSPB has contracted with a certified public accounting firm to help develop a 5-year audit plan and to establish procedures for handling the agency’s internal oversight functions. OGC also is responsible for arranging, as necessary, for investigations into allegations of wrongdoing and for coordinating the hotline and investigative services provided by USDA’s Inspector General. Any complaints or allegations received by USDA’s Inspector General hotline are to be forwarded to MSPB’s OGC for appropriate action. USDA’s Inspector General has agreed to provide investigative services to MSPB on an as-needed basis. As of November 1994, USDA’s OIG hotline had received two allegations; both allegations involved complaints about MSPB appellate decisions rather than instances of mismanagement, abuse, or improper personnel practices. We found that by placing its oversight functions in a staff office in lieu of an OIG, MSPB is administering these functions in a manner comparable to those of 10 other federal entities we studied of roughly comparable budget and staff size (see app. IV). 
Like MSPB, 8 of the 10 federal entities we studied do not have their own audit or investigative offices but obtain audit or investigative services through contracts with private firms or agreements with other agencies’ OIGs. Such arrangements do not ensure the same level of objectivity or independence as do statutory inspector general offices. However, if MSPB’s arrangements are implemented as planned, MSPB should be able to comply with federal audit requirements set by the Office of Management and Budget and with investigative requirements set by the President’s Council on Integrity and Efficiency. Since MSPB’s arrangements are new, data on usage and results are not yet available, and it is too early to assess their effectiveness in ensuring that MSPB employees are provided an avenue for reporting their concerns should they become aware of mismanagement, abuse, or improper personnel practices. Despite the provisions MSPB has made to protect its employees against workplace discrimination, mismanagement, abuse, and improper personnel practices, a sizeable portion of survey respondents indicated concerns about becoming involved in the processes established for handling EEO complaints and for reporting wrongdoing. About 40 percent of the respondents indicated they would be either uncertain about or unwilling to participate in EEO counseling or to file a formal EEO complaint if they felt they had been discriminated against; the most common reason was fear of reprisal (see app. II, questions 19 through 22). Fifty-one percent of the survey respondents indicated they would be either uncertain about or unwilling to report waste, fraud, abuse, or mismanagement of which they became aware; the most common reasons were a concern over maintaining anonymity and a fear of retaliation (see app. II, questions 24 and 25). As shown in figure 1, nearly two-thirds of the MSPB employees responding to our survey described the work environment as impartial. 
Nearly one-fourth of the respondents described it as discriminatory. More female (29 percent) than male (16 percent) employees and more minority (45 percent) than nonminority (18 percent) employees described their work environment as discriminatory (see figure V.1 in app. V). Overall, most survey respondents believed that they had been treated fairly regarding various employment and personnel decisions, such as job assignments and promotions (see table V.1 in app. V). However, more women, minorities, and nonattorneys believed they had been treated unfairly regarding such decisions (see tables V.2 through V.4 in app. V). Sixty-nine (29 percent) of the MSPB employees responding to our survey suggested various actions for MSPB top management to take to further promote an impartial work environment. Employees’ suggestions generally related to promoting more diversity in management; treating everyone fairly; making promotion and hiring decisions only on the basis of merit and not on such factors as race, gender, or political affiliation; and providing counseling and training to supervisors who are repeatedly the subject of EEO complaints. Regarding actual numbers of EEO cases, our analysis of MSPB data on the EEO complaint process showed that during the 5-year period ending September 1994, 34 employees received EEO counseling, and 10 employees filed a total of 21 formal EEO complaints. We have no basis on which to infer from these numbers the extent to which MSPB has been effective in establishing a fair and impartial workplace. We recognize that the responses of MSPB employees to our survey represent perceptions rather than proven instances of discrimination or indicators of wrongdoing going unreported. 
Nevertheless, we believe the fact that, overall, 24 percent of MSPB employees view their workplace as discriminatory, and that 45 percent of the agency’s minority employees concur, should be of concern to the agency because of (1) its role as protector of the merit system; and (2) the standard it set for itself in its 1992 vision statement, “To promote and protect, by deed and example, the Federal merit principles in an environment of trust, respect and fairness.” We believe that in light of MSPB’s mission and the standard adopted in its vision statement, it is important that MSPB’s own employees feel confident that they are fairly treated regardless of their race, religion, color, sex, national origin, political affiliation, marital status, disability, or age. In November 1992, MSPB announced as its vision, “To promote and protect, by deed and example, the Federal merit principles in an environment of trust, respect and fairness.” Although the vision statement was developed and adopted prior to the current chairman’s appointment, he supports it, according to MSPB officials. MSPB officials said that several actions have been taken to foster “an environment of trust, respect, and fairness.” These actions have included the institution of a new performance element for valuing and respecting diversity in the workplace, various efforts to improve internal communication, and the involvement of employees in the decisionmaking process. Examples of these efforts include (1) featuring articles in MSPB’s internal newsletter, News of Merit, that highlight various agency events; (2) cross-training employees within MSPB offices; (3) allowing employees to select award recipients; and (4) involving employees in the process for reengineering and reorganizing MSPB in response to NPR recommendations on streamlining agencies and empowering employees. 
Twenty-nine percent of survey respondents expressed the belief that MSPB has been very successful or generally successful in fostering an environment of trust, respect, and fairness. But 39 percent, a group that disproportionately included headquarters, female, and minority employees, expressed the belief that the agency has been generally unsuccessful or very unsuccessful. Respondents’ perceptions of MSPB’s success varied by duty station, gender, and minority or nonminority status (see figs. V.2 through V.4 in app. V). MSPB believes that the staff’s mixed perceptions of the agency’s efforts to foster an environment of trust, respect, and fairness may have been affected by the uncertainties associated with the arrival of a new chairman, along with the staff’s concerns over job security at a time when management was in the process of reorganizing and reengineering the agency’s operations. However, an internal MSPB survey that was administered about 2 years prior to both our survey and MSPB’s current reorganization efforts yielded similar results. Fifty percent of the respondents to MSPB’s internal survey administered in February 1992 disagreed with the statement that “overall, the Board creates and fosters an environment of trust, respect, and fairness.” The views of MSPB employees in this regard are somewhat similar to those of federal employees in general. In OPM’s 1992 survey of 56,767 federal employees, 44 percent of the respondents expressed confidence and trust in their organizations, but 26 percent did not. OPM’s survey also showed that although 30 percent believed their organizations treated all employees equally regardless of position or rank, 47 percent did not. OPM’s survey results demonstrate that other federal agencies, based on their employees’ perceptions, have had mixed success in fostering trust and fairness in the workplace. 
Nearly half (45 percent) of MSPB employees responding to our survey—including many who felt that MSPB has been successful in promoting an environment of trust, respect, and fairness—suggested actions for MSPB top management to take in this regard. The suggestions generally related to improving workplace communication and managerial decisionmaking about employment and resource matters, allowing employees to contribute to the decisionmaking process and involving staff at all levels, and treating staff equitably without regard to position or to minority or nonminority status. MSPB officials said that some of these suggestions correspond with actions already taken by management, and they are continuing to pursue them. Our examination of MSPB’s mission performance, employee protections, and working environment began 4 months after the swearing-in of MSPB’s current chairman and therefore covered a transitional period for the agency. Regarding mission performance and employee protections, indications were that MSPB’s new management is pursuing policies and initiatives that are in accord with relevant standards and with the needs of its customers and employees. Management has taken several actions to provide employee protections and to promote a working environment based on trust, respect, and fairness. However, MSPB employees had mixed perceptions of the impartiality of the workplace and of management’s success in improving the work environment. We believe the eventual impact on MSPB’s employees of management’s actions will become clearer after the current chairman has been in office for a longer period of time, and the process of reorganizing and reengineering the agency’s operations has been completed. In a letter dated July 14, 1995, MSPB’s chairman provided comments on a draft of this report. The chairman said that our report, on the whole, was thorough and thoughtfully presented. He did not express any disagreement with its findings and conclusions. 
His comments consisted of apprising us that (1) processing time for petitions for review on merit cases improved during the first half of fiscal year 1995 (a time period that was outside the scope of our review); and (2) while the report states that OGC will contract out audits, it may also be prudent for OGC, under the current fiscal environment, to arrange for Board personnel to perform some audits in situations where appropriate safeguards can be established. The chairman also recommended a technical change that we made where appropriate. The chairman’s comments are presented in their entirety in appendix VI. We are sending copies of this report to the Chairmen of the Merit Systems Protection Board, the Senate Committee on Governmental Affairs, and the House Committee on Governmental Reform and Oversight. Copies will be made available to others upon request. This report was prepared under the direction of Stephen Altman, Assistant Director, Federal Management and Workforce Issues. Other major contributors are listed in appendix VII. If you have any questions about this report, please contact me on (202) 512-8676.

The questionnaire can be completed in about 20 minutes. Space has been provided at the end of the questionnaire, and additional pages may be added, for any comments you may want to make. Please return the completed questionnaire to:

U.S. General Accounting Office
Attn: William Trancucci
Room 3150
441 G Street, N.W.
Washington, D.C. 20548

This survey asks for your views on:
-- time limits for filing appeals,
-- case processing standards, and
-- alternative dispute resolution practices.

If you have any questions about this survey, please call William Trancucci at (202) 512-5043 or Mary Martin at (202) 512-4345. Thank you for your cooperation and assistance. 
We are sending this questionnaire to:
-- general counsels of federal agencies,
-- federal agency attorneys/advocates,
-- employee and labor-management relations representatives in federal agencies,
-- private attorneys representing appellants, and
-- union officials representing appellants.

To obtain different perspectives on MSPB’s appellate process, we would appreciate receiving individual responses from each representative. We would like you to focus on your experiences with MSPB’s operations since October 1991.

Regional level - Under MSPB regulations, an employee must initially file an appeal of an agency personnel decision with one of MSPB’s 11 regional offices. At the regional level, an administrative judge will hear and decide the employee’s appeal. In this survey, the appellate process that is carried out at MSPB’s 11 regional offices is referred to as the regional level.

Headquarters level - Under MSPB regulations, an employee and/or agency may ask the 3-member Board located in Washington, D.C., to review the decision on an appeal made by an administrative judge in a regional office. In this survey, the appellate process that is carried out by the 3-member Board at its office in Washington, D.C., is referred to as the headquarters level.

Your responses will be kept confidential and will not be released outside GAO unless we are compelled by law or required by the Congress to do so. While the results are generally provided in summary form, individual answers may be discussed in our report, but they will not include any information that could be used to identify individual respondents. The questionnaire is numbered only to aid us in our follow-up efforts and will not be used to identify you with your response. The link between you and your response will be destroyed before the report is issued.

MSPB was established by Reorganization Plan No. 2 of 1978, which was codified by the Civil Service Reform Act of 1978, as an independent, quasi-judicial agency. MSPB’s mission is to assure that federal merit systems are kept free of prohibited personnel practices, employees are protected against abuses by agency management, and executive branch agencies make employment decisions in accordance with merit system principles. One way MSPB accomplishes its mission is by hearing and deciding employee appeals from agency personnel actions.

1. Since October 1991, with about how many MSPB cases, if any, have you had experience in representing clients? (Enter number and continue to Question 2. If none, enter "0" and read box below.)

If you had no experience (entered 0) with MSPB cases, please stop here and return the questionnaire to us in the enclosed envelope. Thank you.

2. Considering all the cases since October 1991 with which you have had experience representing clients before MSPB (see Question 1), about how many were in each of the following categories? (Enter numbers in appropriate boxes.)
a. Adverse action (reduction in grade or pay, suspension for more than 14 days, or furlough for 30 days or less for cause that will promote the efficiency of the service under 5 U.S.C. 7512)
b. Unacceptable performance action (reduction in grade or removal under 5 U.S.C. 4303)
c. Rights or interests of individuals under federal retirement programs (5 U.S.C. 8347(d)(1)-(2) and 8461(e)(1))
d. Other (Specify.)

3. Are you an attorney? (Check one.)

4. How familiar or unfamiliar are you with the MSPB precedential body of case law? (Check one.)

5. In your opinion, are the outcomes in decisions that constitute the MSPB precedential body of case law consistent or inconsistent from case to case (where the same or similar issues are involved)? (Check one.)

6. Consider the cases with which you were associated. In your opinion, are the decisions at the regional level and headquarters level consistent or inconsistent with the evidence in the appeals records related to each individual case? (Check one box in each row.)

7. In your opinion, at the regional level, how fair (impartial) or unfair (partial) is the appellate process? (Check one.)

8. Please explain your response to Question 7.

9. In your opinion, at the headquarters level, how fair (impartial) or unfair (partial) is the appellate process? (Check one.)

In your opinion, does MSPB consider each case on its own merits (is it neutral), or is it biased in favor of employees or agencies (management)? (Check one.)
-- Very biased in favor of employees
-- Somewhat biased in favor of employees
-- Neutral (considers each case on its own merits)

In your opinion, how successful or unsuccessful has MSPB, through the appellate process, been in accomplishing its mission (see introductory paragraph, Section II, page 2)? (Check one.)

MSPB requires that an appeal of an agency personnel action, such as removal from employment or suspension for more than 14 days, be filed with the appropriate regional office within 20 days of the action’s effective date. MSPB requires that a petition to review a regional office administrative judge decision be filed with the Clerk of the Board within 35 days after the regional office decision is issued. If these time frames are not met, MSPB may dismiss the appeal or the petition to review the regional office decision as untimely filed, and not decide the appeal or petition on its merits, unless a good reason for the delay in filing is shown.

13. In your opinion, how adequate or inadequate is the 20-day time limit for filing an appeal with an MSPB regional office? (Check one.)

14. In your opinion, which of the following limits for filing an appeal with an MSPB regional office would provide appellants the most reasonable amount of time to file the required petition for appeal? (Check one.)

In your opinion, how adequate or inadequate is the 35-day time limit for filing a petition for review of a regional office decision with the Clerk of the Board? (Check one.)

In your opinion, which of the following time limits for filing a petition for review with the Clerk of the Board would provide petitioners the most reasonable amount of time to file the required petition? (Check one.)

MSPB requires that an agency response to an employee petition for appeal of an agency personnel action be filed with the appropriate regional office within 20 days of the date of the regional office order acknowledging the appeal.

17. In your opinion, how adequate or inadequate is the 20-day time limit imposed on agencies for filing a response to an employee appeal with an MSPB regional office? (Check one.)

18. In your opinion, which of the following time limits for filing a response to an employee appeal with an MSPB regional office would provide agencies the most reasonable amount of time to file a response? (Check one.) Other (Specify.)

MSPB requires that an agency or employee response to a petition for review of a regional office decision be filed with the Clerk of the Board within 25 days after the date that the petition for review was served on the party.

20. In your opinion, which of the following time limits for filing a response to a petition for review with the Clerk of the Board would provide a party the most reasonable amount of time to file a response? (Check one.) 
19. In your opinion, how adequate or inadequate is the 25-day time limit imposed on parties for filing a response to a petition for review with the Clerk of the Board? (Check one.)

MSPB has established standard time frames for processing cases at both the regional (120 days) and Board (110 days) levels. At the regional level, the 120 days begins with receipt of the initial appeal and ends with the issuance of an initial decision. During that time period, the appeal is assigned to an administrative judge, the agency’s case file is received, and the discovery process begins. Also, prehearing motions are filed, attempts are made to achieve settlement, and a hearing may be held. At the Board headquarters level, the 110 days begins with the filing of the petition for review by the appellant or agency. The case file is received from the regional office and reviewed. The 110-day period ends with the issuance of a final Board decision. We are interested in (1) how satisfied or dissatisfied in general you are with these standard time frames for processing cases; (2) how satisfied or dissatisfied you are with the actual amount of time MSPB has taken to process and decide cases at the regional and headquarters levels; (3) whether you believe MSPB should revise its standard time frames; and (4) what would be a reasonable standard for MSPB to process and decide cases.

21. How satisfied or dissatisfied are you with MSPB’s standard time frames for administrative judges and board members to decide cases at the regional (120 days) and headquarters (110 days) levels, respectively? (Check one box in each row.)
a. Regional level (120 days)
b. Headquarters level (110 days)

22. Consider the actual amount of time it has generally taken MSPB to process a case at the regional level. How satisfied or dissatisfied are you with the time it actually takes MSPB to process a case at the regional level? (Check one.)

23. Please explain the reason for your dissatisfaction with the actual amount of time it takes MSPB to process a case at the regional level.

24. If you are dissatisfied with the actual amount of time it has taken MSPB to process cases at the regional level, do you believe it needs to revise its standard time frames for processing cases? (Check one.)
Yes ---> (Continue with Question 25.)

25. In your opinion, what would be the most reasonable standard for processing a case through MSPB at the regional level? (Check one.)

26. Consider the actual amount of time it has generally taken MSPB to process a case at the headquarters level. How satisfied or dissatisfied are you with the time it actually takes MSPB to process a case at the headquarters level? (Check one.)

27. Please explain the reason for your dissatisfaction with the actual amount of time it takes MSPB to process a case at the headquarters level.

28. If you are dissatisfied with the actual amount of time it has taken MSPB to process cases at the headquarters level, do you believe it needs to revise its standard time frames for processing cases? (Check one.)
Yes ---> (Continue with Question 29.)

29. In your opinion, what would be the most reasonable standard for processing a case through MSPB at the headquarters level? (Check one.)

Broadly defined, alternative dispute resolution is any process that disputing parties use to resolve a disagreement other than a formal process such as court or administrative proceedings. 
MSPB has incorporated alternative dispute resolution into the appellate process by requiring administrative judges to order the parties to discuss the possibility of voluntarily settling their dispute. With the parties’ approval, the administrative judge may participate in the settlement discussions.

30. Consider all the cases with which you have had experience with MSPB since October 1991 (see Question 1). In about how many cases, if any, have you had experience where an administrative judge participated (encouraged, discussed terms of settlement, etc.) in the settlement discussions? (Enter number and continue to Question 31. If none, check box below and then skip to Question 32.)

31. In about how many of these cases (see Question 30), if any, did you want to proceed with a formal adjudication of the case but were persuaded by the administrative judge not to terminate settlement negotiations? (Check one.)
-- All, or almost all, of the cases
-- About half of the cases
-- None, or almost none, of the cases
-- No basis to judge

32. Consider all the cases with which you have had experience with MSPB since October 1991 (see Question 1). In about how many cases, if any, have you entered into a settlement agreement with the other party to resolve the dispute? (Enter number and continue to Question 33. If none, check box below and then skip to Question 36. If necessary, an estimate will suffice.)

33. Consider those cases since October 1991 where you entered into settlement agreements with other parties (see Question 32). In about how many, if any, do you believe that the settlement agreements’ terms were equitable, given the weight of the evidence in the other parties’ and your cases? (Check one.)

34. Consider those cases since October 1991 where you entered into settlement agreements with the other parties (see Question 32). In about how many, if any, do you believe that the terms of the settlement agreements left the party you represented better off than if you had proceeded with adjudication? (Check one.)
-- All, or almost all, of the cases
-- About half of the cases
-- None, or almost none, of the cases

35. Consider all the cases since October 1991 where you entered into settlement agreements with the other parties (see Question 32). In about how many of these cases, if any, did the administrative judges discuss (either in writing or verbally) the following? (Check one box in each row.)
Administrative judges discussed whether you ...

If you have additional comments regarding any previous question or general comments or suggestions for improving the MSPB appeals process, please use the space provided below. If necessary, attach additional sheets.

Please check here if you would like a copy of the final report.

The following section will be separated from the questionnaire before processing.
Name of agency, union, or law firm you represent:
Name of the person who filled out this questionnaire and who may be contacted, if necessary, for clarification of responses. (Please print.)
(Area code) (Number)

The U.S. General Accounting Office (GAO), an independent agency of Congress, is reviewing the U.S. Merit Systems Protection Board (MSPB) at the request of the Chairman, Senate Committee on Governmental Affairs, and the Chairman, Subcommittee on Civil Service, House Post Office and Civil Service Committee. 
As part of this review, we are surveying all MSPB employees to get their views on the work environment in general and MSPB’s efforts to (1) foster a work environment that is based on trust, respect, and fairness; (2) ensure that its employees work in an environment that is free of discrimination; and (3) create a climate that encourages reporting of waste, fraud, abuse, and mismanagement. We also ask for views on the reasonableness of MSPB’s standard time frames for processing and deciding cases.

The questionnaire should take about 20 to 30 minutes to complete. Space has been provided at the end of the questionnaire for any comments you may want to make. Additional pages may be added if necessary. Your frank and honest answers will help GAO inform the committees on how MSPB employees view their work environment and the agency’s equal employment opportunity (EEO) operations.

Your responses will be kept confidential and will not be released outside GAO, unless compelled by law or required by Congress to do so. While the results are generally provided in summary form, individual answers may be discussed in our report, but they will not include any information that could be used to identify individual respondents. The questionnaire is numbered only to aid us in our follow-up efforts and will not be used to identify you with your response. The link between you and your response will be destroyed before the report is issued.

In this section, we ask about your satisfaction or dissatisfaction with certain aspects of your work environment at MSPB.

1. In general, how satisfied or dissatisfied are you with each of the following as they relate to your current MSPB work environment? (Check one box in each row.)
a. The people you work with daily (peers/colleagues)
b. Your immediate supervisor
d. Your office director
g. Availability of resources (i.e., budget, technology, staff, etc.) necessary to do your job
The career progress you have made in MSPB up to now
The opportunities for promotion or career advancement
Other (Please specify.)

2. Looking at your responses to Question 1, overall, how satisfied or dissatisfied are you with your current MSPB work environment? (Check one.)

3. Thinking in general about your responses to Question 1, would you say that the work environment at MSPB is improving, staying about the same, or getting worse? (Check one.)

4. What actions, if any, would you suggest that MSPB top management take to further improve the work environment at MSPB? (Please describe below. If you believe no actions are necessary, enter "None".)
[Responses: suggested actions, 56% (N=135); no comment, 20% (N=49); dissatisfaction expressed or problems cited, 15% (N=35); "None" entered, 9% (N=21)]

5. Based on your experience, in general, to what extent does your MSPB manager or supervisor currently show respect to you in the following ways? Please focus on the manager or supervisor you currently report to. (Check one box in each row.)
a. He/she shows interest in my well-being.
b. He/she creates an environment in which I feel valued.
c. He/she is receptive to my suggestions on how to improve operations.
d. He/she considers my views in formulating policy or programs.
e. Other (Please specify.)

How much do you trust or distrust each of the following? (Response options included "Trust as much as distrust.") (Check one box in each row.)
a. Members of your work group or unit (peers/colleagues)
b. Your immediate supervisor
d. Your office director
Senior managers above your office director
g. Members of the Board
h. Human resource officials
Other (Please specify.)

How fairly or unfairly do you believe you have been treated in terms of decisions in each of the following areas? By fair, we mean decisions that were based on merit and were free of bias and favoritism. (Check one box in each row.)

If necessary, what actions would you suggest that MSPB top management take to further promote a work environment based on greater trust, respect, and fairness? 
(Please describe below. If you believe no actions are necessary, enter "None".)
[Responses: suggested actions, 45% (N=108); no comment, 25% (N=59); dissatisfaction expressed or problems cited, 19% (N=46); "None" entered, 11% (N=27)]

10. We have listed below certain behaviors and actions that could occur in any organization. During the past 2 years, do you believe that any of these situations occurred anywhere at MSPB? If yes, please indicate whether it happened to you, you saw it happen or you were told by the person it happened to, or you heard about it through a third party. (Check yes or no for each situation. If yes, check all boxes that apply.)
a. An employee was not considered for job or career advancement because of family responsibilities (e.g., caring for children/elders).
b. An employee was not considered for job or career advancement because of a physical disability.
c. An employee was assigned to a job or project based primarily on race or sex.
d. An individual was hired based primarily on the hiring official’s personal bias regarding race or sex.
e. An employee was given formal recognition or rewarded based primarily on race or sex.
f. An employee was given a training or developmental opportunity based primarily on race or sex.
g. A qualified employee was not promoted based primarily on the selecting official’s personal bias regarding race or sex.
h. Management was informed that remarks with racial or ethnic overtones were being made but continued to tolerate them.
i. Management was informed that remarks or actions with sexist or sexual overtones were being made but continued to tolerate them.
Other (Please specify.)

11. Considering your responses to the situations listed in Question 10, would you describe the work environment at MSPB as impartial or discriminatory? (Check one.) We define an impartial work environment as one in which an employee is treated fairly without regard to race, religion, color, sex, national origin, political affiliation, marital status, disability, or age (if at least age 40).

12. Are you aware of any efforts by MSPB during the past 2 years to further promote an impartial work environment? (Check one.)
Yes ---> (Continue with Question 13.)
No ---> (Skip to Question 14.)

13. In your opinion, how successful or unsuccessful have MSPB’s efforts been to further promote an impartial work environment? (Check one.)

14. If necessary, what actions would you suggest that MSPB top management take to further promote an impartial work environment at MSPB? (Please describe below. If you believe no actions are necessary, enter "None".)
[Responses: no comment, 35% (N=84); suggested actions, 29% (N=69); "None" entered, 24% (N=57); dissatisfaction expressed or problems cited, 12% (N=24)]

15. During the next year, how likely are you to stay with or leave MSPB? (Check one.)
Very likely to stay
More likely to stay than leave

16. To what extent, if at all, is your likelihood to leave MSPB within the next year, or your indecision, due to any dissatisfaction you might have with the employment environment at MSPB (i.e., the issues raised in the questions you’ve answered thus far)? (Check one.)

MSPB’s EEO office manages the agency’s equal employment, affirmative action, and complaint programs. Among other things, it is responsible for disseminating information on the agency’s EEO program and complaint process, developing EEO and affirmative action plans, providing counseling to employees who believe they have been discriminated against, and processing and resolving discrimination complaints. This section asks about the extent to which MSPB’s EEO office has taken certain actions to inform you of your EEO rights and of the agency’s EEO program and operations. It also asks about your willingness to participate in the complaint process if you believed you had been discriminated against. 
17. Before reading the description at the beginning of this section, how familiar or unfamiliar were you with the responsibilities of MSPB’s EEO office? (Check one.)

18. Within the past 2 years, do you recall receiving the following materials or seeing them posted at MSPB? (Check all boxes that apply.) N's reported for this question. Responses combined for responses 1 and 2.
a. Written materials that describe MSPB’s EEO . . . N=145
b. Written materials about your rights under the federal government’s EEO regulations . . . N=127
c. Written materials describing how to contact MSPB’s EEO counselors such as their names, addresses, and telephone numbers . . . N=164
d. Notices, memoranda, or newsletters that describe the . . . N=105
e. Notices, memoranda, or newsletters that communicate MSPB’s sexual harassment policy . . . N=126
f. Other materials (Please specify.)

19. ... EEO counseling at MSPB, including discussing the matter with all parties in an attempt to informally resolve it early in the process? (Check one.)
Skip to Question 21.
Continue with Question 20.

20. Which of the following describes your reason(s) for being uncertain or unwilling to participate in counseling? (Check all that apply.)
I would be concerned that my contact with the EEO counselor during the counseling period would not be kept confidential.
I would be concerned that I would be assigned to an EEO counselor who was not competent or well trained.
I would be concerned that the matter would not be resolved in a timely manner.
I would be concerned that too much of my time would be consumed in the complaint process.
I would be concerned that the matter, if resolved informally, would not result in a mutually satisfactory solution for all parties involved.
I would be concerned that I would be alienated from my co-workers.
I would fear reprisal.
I would not be willing to participate for personal reasons.
Other (Please specify.)
21. If you believed that you had been discriminated against, how willing or unwilling would you be to file a formal EEO discrimination complaint at MSPB? (Check one.)
Skip to Question 23.
Continue with Question 22.

22. Which of the following describes your reason(s) for being uncertain or unwilling to file a formal discrimination complaint? (Check all that apply.)
I would be concerned that my complaint would not be investigated in a competent manner.
I would be concerned that my complaint would not be thoroughly investigated by the EEO office.
I would be concerned that my complaint would not be handled in a fair manner.
I would be concerned that my complaint would not be handled in a timely manner.
I would be concerned that too much of my time would be consumed in the complaint process.
I would be concerned that I would be alienated from my co-workers.
I would fear reprisal.
I would not be willing to file a formal complaint for personal reasons.

This section asks for your views on reporting illegal or wasteful activities, whether you would report such activities if you became aware of them, and the extent to which MSPB has taken certain actions to create a climate that encourages reporting of waste, fraud, abuse, or mismanagement.

23. In encouraging you to report any activities involving waste, fraud, abuse, or mismanagement, how important, if at all, would it be to you that MSPB take the following actions? (Check one box in each row.)
How important is it to you that MSPB would . . .
a. take action to correct the problem.
b. punish the wrongdoer(s).
c. allow me to remain anonymous. (N=229)
d. assure me that the legal protections against unlawful retaliation for reporting such activities would be enforced.
e. provide me with a nonmonetary award for reporting such activities.
f. provide me with a monetary award for reporting such activities.
g. Other (Please specify.)
24. ... operation, how willing or unwilling would you be to report it? (Check one.)
Skip to Question 26.
Continue with Question 25.

25. If you are uncertain or unwilling to report any activities involving waste, fraud, abuse, or mismanagement, which of the following would describe the reason(s) for being unwilling to report them? (Check all that apply.)
I am not sure to whom I should report such activities.
I feel it would not be my responsibility to report such activities.
I would be concerned that the solution to the problem would not be under MSPB’s control.
I would be concerned that MSPB would not take action to correct the problem.
I would be concerned that MSPB would not punish the wrongdoer(s).
I would be concerned that I would not remain anonymous.
I would be concerned that MSPB would not assure me that the legal protections against unlawful retaliation for reporting such activities would be enforced.
I would be concerned that MSPB would not provide me with a nonmonetary award for reporting such activities.
I would be concerned that MSPB would not provide me with a monetary award for reporting such activities.
Other (Please specify.)

26. Over the past 2 years, how adequate or inadequate was the information provided by MSPB about the following aspects of reporting instances of waste, fraud, abuse, and mismanagement? (Check one box in each row.)

27. ... environment that encourages reporting of waste, fraud, abuse, or mismanagement? (Check one box in each row.)
Over the past 2 years, MSPB has . . .
a. solicited employee knowledge of illegal or wasteful activities using surveys or other means.
b. made presentations to MSPB employees at your location that emphasized the importance of reporting illegal or improper activities.
c. distributed or made readily available literature which described how and/or where to report illegal or improper activities.
d. taken action(s) to correct illegal or improper activities.
e.
punished an individual who took part in illegal or improper activities.
f. punished an individual who retaliated against an employee who reported illegal or improper activities.
g. provided an individual with a nonmonetary award for reporting illegal or improper activities. (N=80)
h. provided an individual with a monetary award for reporting illegal or improper activities.
i. Other (Please specify.)

28. Are you . . . ? (Check one.)
0% (N=0)
25% (N=58)
13% (N=31)
2% (N=4)
5% (N=11)
Continue with Question 29.
55% (N=129) None of the above -----> Skip to Question 38 on page 15.

29. How satisfied or dissatisfied are you with MSPB’s 120-day standard time frame for administrative judges to decide initial appeal cases at the regional level? (Check one.)
Skip to Question 32.
Continue with Question 30.

30. Please explain your reason(s) for your dissatisfaction with the standard 120-day time frame for deciding initial appeal cases at the regional level.
48 respondents expressed or implied a reason

31. In your opinion, which of the following time frames would be the most reasonable standard for administrative judges to decide initial appeal cases at the regional level? (Check one.)

32. How satisfied or dissatisfied are you with MSPB’s 110-day standard time frame for board members to decide petition for review cases at the headquarters level? (Check one.)
Skip to Question 35.
Continue with Question 33.

33. Please explain your reason(s) for your dissatisfaction with the standard 110-day time frame for deciding petition for review cases at the headquarters level.
44 respondents expressed or implied a reason

34. In your opinion, which of the following time frames would be the most reasonable standard for board members to decide petition for review cases at the headquarters level? (Check one.)
N=2

35. On average, how many initial appeal cases or petition for review cases do you have pending per month? (Enter number. If none, enter zero.)

38.
How long have you been employed with MSPB? (Check one.)
2 to less than 5 years
5 to less than 10 years

39. Where is your permanent duty station? (Check one.)

Initial Appeal Cases Per Month
0 Cases . . . N=22
1 to 10 Cases . . . N=7
11 to 20 Cases . . . N=6
21 to 30 Cases . . . N=44
31 to 40 Cases . . . N=6

Petition for Review Cases Per Month
0 Cases . . . N=11
1 to 10 Cases . . . N=16
11 to 20 Cases . . . N=10
21 to 30 Cases . . . N=2
31 to 40 Cases . . . N=1
More than 40 Cases . . . N=1

36. In your opinion, given the current standard time frames and the requirements for adjudication, is your average pending caseload too heavy, about right, or too light? (Check one.)
-----> Skip to Question 38.
Continue with Question 37.

37. In your opinion, given the current standard time frames and the requirements for adjudication, what would be the appropriate number of cases you should reasonably have pending per month? (Enter number.)
Number of Cases Pending Per Month
1 to 10 Cases . . . N=13
11 to 20 Cases . . . N=18
21 to 30 Cases . . . N=20

40. What is your sex? (Check one.)

41. Are you of Hispanic origin? (Check one.)

42. What is your race? (Check one.)
(American-Indian)

Please return your completed questionnaire in the enclosed preaddressed envelope. Thank you for your assistance.
To assess whether MSPB is accomplishing its statutory mission through the appeals process in a fair and timely manner, we (1) developed a questionnaire and mailed it in April 1994 to individuals who had experience as practitioners with MSPB’s process for adjudicating federal employees’ appeals of agency personnel actions during the 2-year period ending September 1993; (2) analyzed its case processing performance reports to determine whether MSPB abided by its own guidelines in processing cases during fiscal years 1991 through 1994 at the regional and headquarters levels; and (3) analyzed data on the extent to which MSPB’s final decisions were appealed to and affirmed by the U.S. Court of Appeals for the Federal Circuit during fiscal years 1991 through 1994. In analyzing the case processing performance reports, we did not verify them to source documents but did review available MSPB information on the reliability of the case management system from which the reports were generated. For example, we identified various MSPB systemwide controls used to ensure the accuracy and reliability of appeals case data. We also identified and reviewed three data verification studies performed by MSPB’s management analysis group. We did not review appeals cases to determine whether the process was carried out fairly or resulted in well-reasoned decisions. We mailed a total of 1,179 questionnaires on April 27, 1994, to appeals process practitioners asking for their views on (1) how successful MSPB has been in accomplishing its mission through the appeals process, (2) the fairness of the appeals process, and (3) the time limits for filing and processing appeals. The practitioners included (1) general counsels of federal agencies, (2) federal agency attorneys and representatives, (3) employee and labor-management relations representatives in federal agencies, (4) private attorneys representing appellants, and (5) union officials representing appellants. 
The questionnaire was designed by a social science survey specialist in conjunction with GAO evaluators who were knowledgeable about MSPB’s appeals process. We pretested the questionnaire with members of each of the five participant groups to determine if (1) the respondents possessed the information desired; (2) the questionnaire would be burdensome to the respondents; and (3) the questionnaire design, including such elements as the type size, layout, and procedures for recording the information, was appropriate. Any problems with the questionnaire that were identified by the pretest process were corrected. We also provided the questionnaire to MSPB for review and incorporated the agency’s comments as appropriate. After the questionnaires were completed and returned by survey respondents, the questionnaires were edited. Three data verification procedures were used. All data were double-keyed and verified during data entry. A random sample of these data was verified back to the source questionnaires. Also, computerized logic checks were run to look for incorrect data; any errors that were found were corrected. The total population of the 5 participant groups was 5,015 individuals; 1,179 of these 5,015 individuals were mailed questionnaires. Questionnaires were mailed to all persons in the general counsel (83) and employee and labor-management relations representative groups (98). Federal agency general counsels were identified and selected from agency general counsels listed in the Federal Yellow Book, Winter 1994. Employee and labor-management relations representatives in federal agencies were selected from a membership listing of OPM’s Interagency Advisory Group Committee on Employee and Labor-Management Relations. We sent questionnaires to a total sample of 998 individuals who made up the remaining 3 groups of appeals process participants—federal agency attorneys and representatives (492), private attorneys (368), and union officials (138). 
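The computerized logic checks mentioned above can be sketched as follows. The specific edit rules and question identifiers here are illustrative assumptions, not MSPB's or GAO's actual edit specifications; the sketch simply shows how a skip-pattern check and a range check flag internally inconsistent responses.

```python
def logic_check(resp):
    """Flag internally inconsistent questionnaire responses.

    `resp` maps question identifiers to recorded answers. The two rules
    below (a skip-pattern check and a valid-range check) are illustrative.
    """
    errors = []
    # Skip pattern: a "No" on Question 12 routes the respondent past 13,
    # so a recorded answer to 13 is inconsistent.
    if resp.get("q12") == "No" and resp.get("q13") is not None:
        errors.append("q13 answered despite 'No' on q12")
    # Range check: Question 35 asks for a nonnegative whole number of cases.
    q35 = resp.get("q35")
    if q35 is not None and (not isinstance(q35, int) or q35 < 0):
        errors.append("q35 is not a nonnegative integer")
    return errors
```

In practice, any record that produced a nonempty error list would be checked against the source questionnaire and corrected.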
Federal agency attorneys and representatives, private attorneys, and union officials were selected from MSPB’s lists of individuals who represented federal employees or agencies before MSPB sometime during fiscal year 1992 or fiscal year 1993; because of the large number of individuals included in these lists, we randomly sampled from the three groups. Table III.1 presents the population sizes and the original and revised sample sizes for each of the five participant groups that were mailed questionnaires on April 27, 1994. Of the 1,179 questionnaires we mailed in April 1994, 206 questionnaires were returned by individuals who indicated they had not had personnel appeals case experience with MSPB since October 1991 and thus were deemed ineligible for our sample. These individuals were dropped from our original population of individuals, resulting in a revised survey sample size of 973 questionnaires for our 5 participant groups. Those individuals who did not respond were sent a second questionnaire mailing on June 8, 1994, and a final questionnaire mailing on August 4, 1994. As a result of these 3 mailings, we received 676 completed and useable questionnaires, for a response rate of 69 percent. Table III.2 summarizes the questionnaire returns for the revised survey sample size of 973. The useable return rates for the individual groups ranged from 47 to 85 percent. Table III.3 presents the revised sample size and return rates for each group. The results obtained from our sampling methodology allow us to make observations about each group’s experience in representing clients before MSPB. Our sample results can be projected to the populations for three of the five groups who have had experience representing clients before MSPB—federal agency attorneys and representatives, private attorneys, and union officials representing appellants. 
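The revised sample size and overall response rate reported above follow from simple arithmetic on the figures given in this appendix:

```python
# Figures from the appeals-process survey described above.
questionnaires_mailed = 1179
ineligible_returns = 206   # no MSPB case experience since October 1991
useable_returns = 676

revised_sample = questionnaires_mailed - ineligible_returns
response_rate = useable_returns / revised_sample

print(revised_sample)               # 973
print(round(response_rate * 100))   # 69 percent
```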
The other two groups—employee and labor-management relations representatives and general counsels of federal agencies—were not sampled; instead, the populations of individuals in these two groups were mailed questionnaires. Because our survey selected a sample or portion of the population of agency attorneys and representatives, private attorneys, and union representatives, the review results obtained are subject to some uncertainty, or sampling error. The sampling error consists of two parts—confidence level and confidence interval. The confidence level indicates the degree of confidence that can be placed in the estimates derived from the sample. The confidence interval is the range between the upper and lower limits within which the actual population value may be found. We chose the specific sample sizes for each of the three groups so that the confidence interval, based on a 100-percent response rate, would not be greater than plus or minus 5 percent at the 95-percent level of confidence. However, because the useable questionnaire response rate was less than 100 percent and varied for each of the three practitioner groups we sampled, the confidence intervals were generally larger than plus or minus 5 percent. We calculated the confidence intervals only for the sampled groups’ responses to the three survey questions on MSPB’s success in accomplishing its mission and the fairness of its appellate process, which we presented earlier in tables 1, 2, and 3. In calculating the confidence intervals, we assumed that for each of the three practitioner sample groups the reported percentage of practitioners responding to the three survey questions was near 50 percent, which may result in larger confidence intervals. The confidence intervals are smaller when the actual reported percentages approach 100 percent and 0 percent.
For example, the 90 percent of agency attorneys and representatives who responded that MSPB’s appellate process was fair at the headquarters level had a confidence interval of plus or minus 4 percent, as compared to a confidence interval of plus or minus 6 percent if 50 percent had responded that the process was fair. Table III.4 shows what the confidence intervals for each of the sampled groups would have been if the reported percentage of practitioners responding to the three questions had been near 50 percent. To determine what accountability mechanisms MSPB had in place to provide its employees the merit system protections that MSPB was created to uphold, we reviewed the agency’s EEO and internal oversight activities designed to protect its employees against workplace discrimination, mismanagement, abuse, and improper personnel practices. We also sought MSPB employees’ views on selected aspects of the agency’s EEO operations and internal oversight activities by mailing a questionnaire to all MSPB employees. A more detailed discussion of the questionnaire development and its mailing is presented later in this appendix. As discussed and agreed with your office, we did not review (1) MSPB’s affirmative employment program for recruiting, hiring, advancing, and placing minorities, women, and other protected groups; or (2) how well audits and investigations were performed by MSPB’s former OIG. We also did not review appeal cases of MSPB employees to determine whether the process was carried out fairly and resulted in well-reasoned decisions. 
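The relationship between the reported percentage and the width of the confidence interval can be illustrated with the standard large-sample formula for a proportion. The sample size of 270 below is a hypothetical figure chosen for illustration, not the actual number of agency attorney respondents, and the sketch omits the finite-population correction that a full calculation would apply.

```python
import math

def margin_of_error(p, n, z=1.96):
    """Half-width of a 95-percent confidence interval for a proportion,
    using the large-sample simple-random-sampling approximation."""
    return z * math.sqrt(p * (1 - p) / n)

# The interval is widest when the reported percentage is near 50 percent
# and narrows as the percentage approaches 0 or 100 percent.
print(round(margin_of_error(0.50, 270) * 100))  # 6 (plus or minus 6 percent)
print(round(margin_of_error(0.90, 270) * 100))  # 4 (plus or minus 4 percent)
```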
In reviewing MSPB’s EEO operations, we focused on (1) training received during fiscal years 1992 and 1993 by MSPB managers and supervisors to make them aware of their EEO responsibilities and by EEO staff in carrying out the agency’s EEO functions; (2) policies and procedures in place to evaluate and reward managers and supervisors on their EEO performance; and (3) actions MSPB has taken to communicate its EEO policy, program, and complaint process to its employees and to make them aware of their rights under the EEO complaint process. We reviewed MSPB’s EEO manual and collected and analyzed data for fiscal years 1990 through 1994 on the number of employees who had received EEO counseling or filed formal EEO complaints. We also reviewed MSPB’s performance management manual and collected data for fiscal years 1991 through 1994 on the number of employees who had received the Chairman’s Award for Excellence in EEO. We interviewed MSPB officials to determine what measures agency management has taken, since abolishing its nonstatutory OIG in February 1994, to provide audit and investigative coverage of its programs and operations. We examined whether these measures would enable MSPB to conduct audits and investigations in compliance with requirements established by the Office of Management and Budget and the President’s Council on Integrity and Efficiency. We also compared MSPB’s measures for carrying out its internal oversight activities with those of 10 other federal entities (see app. IV). Specifically, we compared the entities’ capabilities to provide audits and investigations, the offices responsible for handling allegations of wrong-doing, arrangements for obtaining investigative and audit services, and the types of audits provided. We judgmentally selected these 10 entities because they were roughly comparable to MSPB in budget and staff size. As agreed with your office, we did not review how well audits and investigations were performed by MSPB’s former OIG. 
To determine what actions MSPB has taken to foster a work environment that is based on trust, respect, and fairness, as called for in its 1992 vision statement, we interviewed MSPB’s Chief Operating Officer, EEO director, and members of a task force charged with proposing actions for implementing MSPB’s vision. We also used our previously mentioned MSPB employee questionnaire to solicit employees’ views on their agency’s success in fostering an environment based on trust, respect, and fairness. Lastly, we compared the workplace views of MSPB employees with those of federal employees in general. We did this by reviewing a May 1992 OPM special report entitled “Survey of Federal Employees,” which contained data on federal employees’ attitudes towards their jobs, their supervisors, and their organizations. OPM had distributed this questionnaire to 56,767 federal employees. We mailed our questionnaire to all individuals employed by MSPB as of May 12, 1994, asking for their views on MSPB’s efforts to (1) carry out its EEO operations and ensure that its employees work in a discrimination-free environment; (2) create a climate that encourages reporting of waste, fraud, abuse, and mismanagement; and (3) foster a work environment that is based on trust, respect, and fairness. We also asked employees who were involved in the processing of cases at the regional and headquarters levels for their views on the reasonableness of MSPB’s guidelines regarding time frames for processing and deciding cases. These employees included the regional administrative judges, attorneys in MSPB’s Offices of the Appeals Counsel and the General Counsel, attorneys on the board members’ personal staffs, and the board members. The Chairman was not included. The questionnaire was first mailed on May 25, 1994, to all MSPB employees. On July 7, 1994, we sent a second copy of the questionnaire to those who did not respond to our first mailing. 
On August 1, 1994, we sent a third copy of the questionnaire to those who still had not responded. The questionnaire was designed by a social science survey specialist in conjunction with GAO evaluators who were knowledgeable about MSPB’s EEO and internal oversight operations. Before mailing our questionnaire, we pretested it with 10 MSPB employees who held various job titles and were assigned to either an MSPB regional office or to various offices within MSPB headquarters. The pretest helped to ensure that our questions would be interpreted correctly and that the respondents would be willing to provide the information required. We also provided the questionnaire to MSPB for review and incorporated the agency’s comments as appropriate. After the questionnaires were received from survey respondents, the surveys were edited. Three data verification procedures were used. All data were double-keyed and verified during data entry. A random sample of these data was verified back to the source questionnaires. Also, computerized logic checks were run to look for incorrect data, and any errors detected were corrected. A total of 301 employees were mailed the questionnaire, but 2 employees were later dropped from our original population because they were not employed with MSPB as of May 12, 1994. Of the 299 eligible employees in our universe, 240 returned useable questionnaires to us, for a response rate of 80 percent. Table III.5 summarizes the questionnaire returns for the 299 eligible MSPB employees who were mailed questionnaires. In addition to the sampling errors of the kind discussed earlier for the MSPB appeals process questionnaire, the practical difficulties of conducting any survey may introduce other types of errors, commonly referred to as nonsampling errors. For example, differences in how a particular question is interpreted by the survey respondents could introduce unwanted variability into the survey’s results.
We took steps in the development of each questionnaire, the data collection, and the data analysis to minimize nonsampling errors. These steps, such as pretesting and editing the questionnaires, have been discussed in previous sections of this appendix.

(Appendix IV table headings: Fiscal year 1992 actual budget authority (in millions of dollars); Financial (to be performed).)

MSPB - Merit Systems Protection Board
FRTIB - Federal Retirement Thrift Investment Board
IAF - Inter-American Foundation
IMS - Institute of Museum Services
FMCS - Federal Mediation and Conciliation Service
NTSB - National Transportation Safety Board
NGA - National Gallery of Art
OSHRC - Occupational Safety and Health Review Commission
ABMC - American Battle Monuments Commission
SJI - State Justice Institute
USIP - U.S. Institute of Peace

IMS does not have a general counsel. An IMS official said that, depending on the nature of the allegation, it may be handled by the National Endowment for the Humanities’ general counsel under the interagency agreement that IMS has with it.

Based on situations that they either had personally experienced or had seen or heard of occurring at MSPB during the past 2 years, 24 percent of the respondents to our employee questionnaire described the MSPB work environment as either somewhat discriminatory or very discriminatory. The discriminatory acts that respondents perceived to have taken place most often involved employees being hired, assigned to jobs, or formally recognized or rewarded primarily because of their race or sex. As figure V.1 shows, more women than men and more minority than nonminority employees described the work environment at MSPB as discriminatory. Women and minorities were 61 percent and 32 percent of MSPB’s workforce, respectively, at the time we initiated our survey.
As shown in table V.1, most employees responding to our survey believed that since MSPB announced its vision statement in November 1992, they have been treated fairly in decisions regarding job assignments, training, formal ratings, monetary awards and bonuses, promotion or career advancement, and nonmonetary awards and recognition. However, employees’ responses regarding these decisions varied with their gender, nonminority/minority status, and position as shown in tables V.2 through V.4. Figures V.2 through V.4 show how employees’ perceptions of MSPB’s success in fostering an environment of trust, respect, and fairness in the workplace varied with their gender, nonminority/minority status, and duty station. More women (47 percent) than men (26 percent), more minorities (42 percent) than nonminorities (37 percent), and more headquarters (42 percent) than nonheadquarters (34 percent) employees indicated they believed that MSPB had been unsuccessful in fostering such an environment. In addition to those named above, the following individuals in GAO’s General Government Division (GGD), Accounting and Information and Management Division (AIMD), and Office of the General Counsel (OGC) made important contributions to this report: Philip Kagan, Senior Evaluator (GGD), assisted with the design of the job; Stuart M. Kaufman, Senior Social Science Analyst (GGD), assisted with the design and development of both questionnaires and prepared the computerized analyses of the MSPB employee questionnaire results; Gregory H. Wilmoth, Senior Social Science Analyst (GGD), assisted with selecting the appropriate methodology to accomplish the job’s objectives; Jerome T. Sandau, Social Science Analyst (GGD), prepared the computerized analyses of the MSPB appellate process questionnaire results; Jackson W. Hufnagle, Assistant Director (AIMD), reviewed MSPB’s internal oversight activities; Clarence A. 
Whitt, Senior Accountant (AIMD), did the comparative analysis of MSPB’s internal oversight activities with 10 other federal entities; Alan N. Belkin, Assistant General Counsel (OGC), and Jessica A. Botsford, Senior Attorney (OGC), provided legal advice.
Pursuant to a congressional request, GAO examined the Merit Systems Protection Board's (MSPB) performance, management, and operations, focusing on: (1) whether the MSPB appeals process is effectively protecting federal employees against improper personnel practices; (2) the accountability mechanisms MSPB has in place to provide its employees with merit system protections; and (3) the actions MSPB has taken to foster an environment of trust, respect, and fairness. GAO found that: (1) the practitioner groups surveyed generally felt that MSPB has been fair in processing employee appeals of agency personnel actions and that MSPB is accomplishing its statutory mission; (2) during the 4-year period ending September 1994, 91 percent of the final MSPB decisions appealed to the U.S. Court of Appeals for the Federal Circuit were upheld; (3) MSPB regional offices met their 120-day case processing guideline 96 percent of the time during the same period, while MSPB headquarters met its 110-day guideline 61 percent of the time; (4) the differences in timeliness in processing appeals have been due to the complexity of headquarters' cases; (5) MSPB has established accountability mechanisms and an equal employment opportunity (EEO) policy to protect its employees against workplace discrimination, mismanagement, abuse, and improper personnel practices; (6) two-fifths of MSPB employees surveyed said they would be reluctant to become involved in the processing of EEO complaints; and (7) although MSPB has taken a variety of actions to foster an environment of trust, respect, and fairness, only 29 percent of survey respondents felt that MSPB has been successful.
According to 2009 population estimates from the U.S. Census Bureau, about 3.2 million U.S. residents identify themselves as solely of American Indian or Alaska Native origin. Furthermore, the Bureau of Indian Affairs (BIA) in DOI reported that for 2005—the most recent year for which data are available—the total number of enrolled members in federally recognized American Indian and Alaska Native tribes and villages was nearly 2 million. American Indians have high rates of poverty, unemployment, single-parent families, and substance abuse relative to the population as a whole. BIA reported that the unemployment rate for American Indians living on or near a reservation was 49 percent in 2005, the most recent year for which data are available, while the Bureau of Labor Statistics reported a national unemployment rate of 5 percent for the same year. A recent study by the Economic Policy Institute noted that from the first half of 2007 to the first half of 2010, the national American Indian unemployment rate—which includes individuals living in both urban areas and on tribal lands—increased 7.7 percentage points to 15.2 percent, while it increased 4.9 percentage points to 9.1 percent among Caucasians. Additionally, in 2009, 53 percent of American Indian children lived in single-parent families, compared with 34 percent of all children nationwide. HHS data estimates for 2004 through 2008 indicate that the percentage of American Indian or Alaska Native adults who needed treatment for alcohol or illicit drug use problems in the past year was higher than the national average (18 percent, compared with 10 percent). As of October 2010, there were 565 federally recognized tribes—340 in the continental United States and 225 in Alaska. Federally recognized Indian tribes are Native American groups eligible for the special programs and services provided by the United States to Indians because of their status as Indians.
Under the Indian Self-Determination and Education Assistance Act, as amended, federally recognized Indian tribes can enter into self-determination contracts or self-governance compacts with the federal government to take over administration of certain federal programs for Indians previously administered by the federal government on their behalf. Tribal lands vary dramatically in size, demographics, and location. The largest reservation, that of the Navajo Nation, is about 24,000 square miles in size and is inhabited by more than 176,000 American Indians. In comparison, some of the smallest tribal lands, held by California tribes, take up less than 1 square mile, and some tribal lands have fewer than 50 Indian residents. Some Indian reservations have a mixture of Indian and non-Indian residents. In addition, most tribal lands are rural or remote, although some are near metropolitan areas. Although tribal members were previously served through state AFDC programs, under PRWORA, they can be served through tribal or state TANF programs. Tribal and state TANF programs may use TANF funds in any manner reasonably calculated to accomplish the purposes of TANF. These purposes are to (1) provide assistance to needy families so that children can be cared for in their own homes or in the homes of relatives; (2) end needy parents’ dependence on government benefits by promoting job preparation, work, and marriage; (3) prevent and reduce the incidence of out-of-wedlock pregnancies; and (4) encourage the formation and maintenance of two-parent families. Like states, tribes generally have the flexibility to set their own TANF eligibility requirements, to determine what policies will govern mandatory sanctions for noncompliance with program rules, and to determine what types of work supports they will provide to recipients, such as child care, transportation, and job training. 
However, some of the federal requirements for state TANF programs differ from those for tribal TANF programs (see table 1). Before the 1996 welfare reforms, some tribes were operating their own Tribal Job Opportunities and Basic Skills (JOBS) programs that provided work and training activities. These tribes remained eligible to provide these services through the Native Employment Works (NEW) program when PRWORA repealed the JOBS program. As of January 2011, 31 tribes and tribal organizations operate both a tribal TANF program and a NEW Program. More recently, in 2006, DRA made several modifications to state TANF programs, none of which applied to tribes. DRA also reauthorized TANF through fiscal year 2010, and the Claims Resolution Act of 2010 extended it through fiscal year 2011. TANF has a maintenance-of-effort (MOE) provision that requires states to maintain a significant portion of their historic financial commitment to their welfare programs. States are not required to provide funding to tribal TANF programs, but many do so. As of July 2010, HHS data indicate that 54 of 64 tribal TANF programs received state funding. Contributions made by states to tribal TANF programs generally count toward a state’s MOE requirement. The Office of Family Assistance within the Administration for Children and Families (ACF) in HHS is the main federal agency responsible for overseeing tribal TANF programs. Both HHS’s headquarters office and 6 out of 10 regional offices have staff that work directly with tribes to help them implement and maintain their tribal TANF programs. Tribal TANF is a block grant, and we have previously found that building accountability into block grants is an important, but difficult, task requiring trade-offs between federal and state—or in this case, tribal—control over program finances, activities, and administration. HHS holds tribes accountable, in part, through the tribes’ reporting requirements, which are similar to those for state TANF programs. 
For example, tribes have to submit quarterly reports that include caseload data, data used to calculate work participation rates, and financial information. In addition, under the Single Audit Act, as amended, all states, local governments (including tribal governments), and nonprofit organizations expending $500,000 or more in federal awards during one fiscal year are required to obtain an audit that meets the act’s requirements. In addition to the TANF block grant that tribes receive from HHS, the Recovery Act’s TANF Emergency Contingency Fund provided up to $5 billion in fiscal years 2009 and 2010 to help states and tribes that had an increase in caseloads or in certain types of expenditures. HHS provides these funds to tribes as a reimbursement for expenses incurred no later than September 30, 2010. According to HHS, the funds a jurisdiction receives as reimbursement are available without fiscal year limitation and can be spent in any way permissible under TANF. According to HHS, 24 tribes have received Emergency Contingency Fund grants totaling approximately $14 million as of June 2011. The Indian Employment, Training and Related Services Demonstration Act of 1992 provides tribes with additional program and funding flexibilities. Specifically, it allows DOI to authorize federally recognized Indian tribes to combine funds they receive from various federal agencies and programs for employment, training, and related services, such as TANF, into one program, called a “477 plan.” These plans help tribes streamline funding from as many as 11 different federal sources by using a single budget and a single reporting system. According to a DOI official, eligible grant funds include formula-funded programs in HHS, DOI, and the Department of Labor. DOI oversees these plans at the federal level. As of July 2011, 15 tribes incorporate their tribal TANF programs into these plans. 
The number of tribal TANF programs has increased from 36 in 2002 to 64 in 2010 (see fig. 1), and several additional tribes are actively working toward administering their own programs. Most of the tribes that have begun administering their own program since 2002 had no previous experience managing a TANF program; however, according to information provided by HHS, at least two tribes that were previously served as part of a larger tribal TANF consortium have decided to administer their own programs. For a list of all tribes administering a TANF program, including those that were previously served as part of a larger tribal TANF consortium, see app. II. In addition to the 64 tribal TANF programs in operation in 2010, HHS officials stated that as of April 2011, 11 more tribes were taking steps to start their own programs. The number of tribes served by tribal TANF programs has also increased from 174 in 2002 to at least 272 in 2010, and more of the tribes administering their own programs are serving Native families outside of their own tribe. Tribes have the flexibility to determine whom their program will serve as well as their service area—the geographic area that their TANF program will cover. In 2002, we reported that 16 out of 36 tribes (44 percent) served only their own enrolled tribal members. According to our review of tribes’ TANF plans, 16 out of 64 tribes (25 percent) administering a program serve only members of their tribe, whereas 48 tribal TANF programs (75 percent) extend their services and benefits to families who are not enrolled members of their tribe. For example, according to their most recently approved TANF plans, the Oneida Tribe of Indians of Wisconsin serves both enrolled tribal members and other Indians who are members of federally recognized tribes residing on its reservation who are eligible for TANF, whereas the Hopi tribe serves only its own enrolled tribal members. 
We also found that according to HHS data, 11 of the 64 tribal TANF programs have expanded in order to serve more Native families in nearby or surrounding areas. For example, the California Tribal TANF Partnership began administering TANF in 2003 and has expanded its program at least three times since then to include more tribes. As of May 2011, this partnership was associated with 35 tribes and other organizations, and its TANF program service area spanned 14 different counties in California. Nationwide, the total number of families receiving tribal TANF cash assistance has increased since 2002, primarily because the number of programs has grown, but also because of varied caseload increases among existing programs. Figure 2 below shows the changes in the nationwide average monthly number of families receiving tribal TANF cash assistance since 2002. As shown, the total nationwide average monthly caseload increased almost every year between 2002 and 2009; in some years, increases were driven primarily by the addition of new programs. However, aggregate changes from year to year hide significant variation in caseload trends among programs. For example, between 2008 and 2009, the majority of tribal TANF programs experienced increases in their average monthly caseloads, but some saw their caseloads decline. And even where increases occurred, they varied widely. A couple of smaller programs—which serve a dozen or fewer families on average per month—saw their average monthly caseloads increase by as few as four families, which represents caseload increases of about 33 and 67 percent, respectively. In comparison, a couple of the larger programs saw their average monthly caseloads increase by more than a hundred families, representing caseload increases of about 16 percent and 21 percent, respectively. 
While tribal TANF programs range in size, the majority of these programs are relatively small, and according to preliminary fiscal year 2009 caseload data provided by HHS, 41 out of 63 tribes reporting caseload data had an average monthly caseload of less than 200 families. Tribes report that the flexibility they are given to tailor their tribal TANF programs allows them to address the specific needs of their TANF families. All 50 of the tribes responding to our survey reported that the flexibility to provide employment-focused and education-related services to families was a very major or major benefit to administering their own TANF program (see fig. 3). In addition, 49 out of 50 tribes reported that the ability to administer and deliver TANF services in a culturally sensitive manner and the ability to tailor the program to the needs of their community were very major or major benefits to administering TANF. One tribe responding to our survey reported that administering TANF encourages nation-building by strengthening the tribe’s social fabric and by helping to develop their tribal workforce. Similarly, a study conducted by Mathematica Policy Research, Inc. notes that tribal control of TANF affords tribes the opportunity to improve services for program participants and expand program coordination. In addition, we found that tribes continue to use the flexibility to set their own work participation requirements. According to tribes’ fiscal year 2009 work participation data provided by HHS, participation rate requirements for both newer and more-established programs combined ranged from 20 to 50 percent. In 2002, we reported that most of the tribes’ work participation rates generally ranged from 15 to 30 percent over the first few years of the tribal TANF program. We also found that some tribes have increased their rates over time. 
For example, more than half of the 36 tribes that have been administering a TANF program since 2002 have raised their work participation rate goals over time. One tribe gradually raised its work participation rate goal for all families more than 10 percentage points over the course of 8 years, from 35 percent to 48 percent. According to tribes’ TANF plans, minimum work requirements vary among tribes, and while one newly established program required 22 percent of all tribal TANF families to participate in 16 hours of work each week in fiscal year 2010, another more-established program required 35 percent of all families to participate in a minimum of 40 hours of work each week. According to HHS data, of those tribal TANF adults required to participate in work activities, a higher percentage were participating and meeting minimum requirements in 2009 than in 2002 (see fig. 4). Our review of tribes’ TANF plans shows that the majority of tribes administering TANF programs count work activities beyond the 12 identified in PRWORA toward meeting work participation requirements, and over time tribes have increased the number and types of activities they count as work activities. PRWORA provides tribes the flexibility to count a wide spectrum of activities as work activities, which helps them accommodate the training needs and cultural traditions of their recipients. Some tribes count cultural activities (including beading and participating in tribal ceremonies), NEW participation (including educational activities and training and job readiness activities), and commuting time toward meeting work participation rate requirements. For example, the California Tribal TANF Partnership allows tribal TANF recipients to participate in cultural activities, such as basket weaving, to help meet work participation requirements (for a related photo of the basket weaving activity, see app. III). 
Furthermore, in 2002, we reported that 1 out of 36 tribal TANF programs (3 percent) counted commuting time toward meeting work participation requirements. In 2010, according to our analysis of tribes’ TANF plans, we found that 35 out of 64 tribes (55 percent) counted commuting time toward meeting work participation requirements. Some of the activities tribes count as work activities, including receiving counseling, substance abuse treatment, and participating in life skills and parenting classes, are used by tribes to support the more family-oriented goals of the TANF program, such as preventing and reducing out-of-wedlock pregnancies, promoting marriage, and encouraging the formation and maintenance of two-parent families. For example, the Forest County Potawatomi tribe offers classes such as Positive Indian Parenting, Healthy Relationships, and Nurturing Fathers, and counts time spent in these classes toward work participation requirements (see fig. 5). These types of activities may benefit TANF families, even when they do not lead directly to paid employment. For more examples of work activities that tribes use to meet their work participation rate requirements, see photos from our site visits in appendix III. Both in our survey and during our site visits, tribes reported that the recent economic downturn has contributed to an increase in the size of their TANF caseloads, in part because it has exacerbated the scarcity of job opportunities within and near their service areas. In all, 30 of the 50 tribes that responded to our survey question reported that as of September 2010, their average monthly caseload was larger than when their tribal TANF program first began providing services; and of those, 12 reported that the increase was due to economic conditions or high unemployment. 
One tribe in particular said that its caseload was higher in September 2010 because there had been job layoffs and because companies in its area had been consolidating positions or not hiring. According to preliminary data provided by HHS, seven out of 55 tribes were serving at or above their program capacity in fiscal year 2009. An Oneida tribal TANF staff member told us that because there are fewer jobs available in the area, there could be over 100 applicants for an entry-level position at a fast food restaurant. Another tribe we met with, the Hopi tribe, said that, despite the lack of employment opportunities on the reservation, many tribal members have moved back to the reservation after the economic recession caused them to lose their jobs, which has further strained the tribe’s resources and contributed to an increase in its average monthly caseload. Changing economic conditions have in some instances led to reductions in state contributions to tribal TANF programs. Most states with tribal TANF programs have in the past provided tribes with state funding, but some are revisiting this commitment in light of tight fiscal conditions. In an effort to address a growing shortfall in its TANF budget, the state of Washington, for example, reduced funding for tribal TANF programs effective January 2011, cutting funding that tribes in the state have relied on to help administer their TANF programs. Similarly, in the state of Arizona, a tribe we spoke with said that the state has had to cut back on funding for TANF-related programs. For example, the tribe said that its parenting program used to be funded by five different grants from the state, but because the state faces budget deficits, it has cut both the number of grants and the amount of funding for the program. In response to the economic recession that began in 2007, the Recovery Act created the $5 billion TANF Emergency Contingency Fund for states and tribal TANF programs. 
Tribes can qualify for these funds based on increases in the number of families receiving cash assistance or in TANF expenditures for nonrecurrent, short-term benefits or subsidized employment. As of June 2011, 24 tribes had received Emergency Contingency Fund grants totaling approximately $14 million. According to our review of tribes’ HHS-approved Emergency Contingency Fund applications, 21 out of the 24 tribes demonstrated an increase in the number of families receiving basic assistance. Fifteen tribes showed increased expenditures for nonrecurrent, short-term benefits, and 9 tribes requested funds due to increased expenditures for subsidized employment. Once a tribe received the Emergency Contingency Fund grant, the funds could be spent on any TANF-related purpose for TANF-eligible families. For example, 22 of the tribes responding to our survey applied for and received Emergency Contingency Fund grants, and of these 22 tribes, 17 reported using these grants to expand existing tribal TANF services and programs. Furthermore, more than half of these 22 respondents reported using Emergency Contingency Fund grant funding to issue more cash grants or to fill TANF budget gaps caused by the recession. While two of the tribes we visited said that they used Emergency Contingency Fund grants to fund cash grants for families, another tribe said it used the funds for supportive services, such as providing approximately 700 children with school clothes. According to our survey and site visits, the recent economic downturn has also affected the types of TANF services some tribes are providing to participants. Of the 49 tribes that responded to our survey question, 39 (80 percent) reported that since the beginning of the economic recession in 2007, they have increased their provision of nonrecurrent, short-term benefits—emergency payments to families to cover housing, utilities, transportation, or other expenses. 
For example, the Forest County Potawatomi stated in their application to HHS for Recovery Act funds that as a result of current economic conditions, they have been providing more assistance to help families with car repairs and utilities. Other tribes have had to cut back on supportive services so that they could provide more TANF families with basic cash assistance. The Lac du Flambeau tribal TANF staff reported having to reduce spending on alcohol and other drug abuse programs as a result of economic conditions. The majority of tribal TANF programs that responded to our survey reported that they have faced administrative challenges related to initial program implementation, staff development and retention, and development of adequate data systems (see fig. 6). In addition, all 11 tribes we visited talked about other challenges related to overcoming the various barriers to self-sufficiency that their TANF participants face, such as a lack of transportation and limited employment opportunities. According to survey respondents, some of the top challenges were: Staff development and retention. Many tribes face challenges in finding, developing, and retaining their TANF staff. One tribe we visited said that it has been difficult for them to hire knowledgeable staff, such as a TANF program manager who is familiar with the program’s goals. According to HHS officials, another tribe lost their tribal TANF director 3 years ago and has struggled to find someone to permanently fill that position. Our survey results also indicate that 38 out of 49 tribes (78 percent) responding to the question have had difficulty in developing expertise in the staff they do employ. 
For example, one tribe said that while it was important for them to hire locally based staff for its TANF program, it was difficult to do so, both because there were very few qualified applicants and because training opportunities were lacking for new staff not familiar with the administration of TANF. Another tribe noted that they had to train and develop their own staff, as state TANF caseworkers often had Master’s degrees in Social Services, while most of their own caseworkers did not. Furthermore, once tribes have hired and developed their staff, it is increasingly difficult for them to retain that staff. One tribal TANF administrator we spoke with said she had had three to four different supervisors during the last 4 years. An HHS official also told us that, in his opinion, staff turnover can affect tribal TANF programs more dramatically than state TANF programs since tribes may lack the institutional knowledge and experience necessary to administer the program and provide training to staff. Development of adequate data systems. Many tribes have had difficulty developing data systems that can support their TANF programs. While tribes can use a percentage of their TANF grant to develop a new information system, doing so would decrease the amount of funding available for direct services to TANF families. With a reservation spanning about 24,000 square miles and three states—Arizona, New Mexico, and Utah—the Navajo Nation refers to its TANF program as the Program for Self-Reliance, in order to reflect its mission to empower families to become self-sufficient. A tribal TANF administrator said that having the flexibility to design a program that incorporates the Navajo teaching of Taa’ hwo ajit eego laid the groundwork to “break the cycle” of dependence and instill self-confidence. In addition to the challenges outlined above, tribes we visited identified several barriers to self-sufficiency faced by their TANF participants that present challenges to their programs. 
These barriers include limited public transportation, employment opportunities, child care options, and educational attainment, among others. All 11 tribes we visited mentioned the availability of transportation as a challenge, with tribal TANF officials noting that many of their program participants lack a valid driver’s license or have limited or no public transit options. Many tribes also said that their TANF participants have limited job opportunities. The Forest County Potawatomi tribe, for instance, told us that few jobs exist for TANF participants because of recent closures of logging mills and because seasonal jobs are only available during the summer months. This same tribe said that due to its rural setting and recent child care facility closures, TANF participants have limited options for child care, hampering their ability to work. A lack of education among participants also affects their ability to secure employment. One tribe told us that its TANF participants, some of whom have only earned their General Equivalency Diploma, have had a harder time competing for jobs during the economic recession. Furthermore, of the 11 tribes we visited, 6 mentioned substance abuse, domestic violence, or both as barriers to their TANF participants’ self-sufficiency. When tribes experience challenges administering their TANF programs, they often turn to other entities for assistance, such as HHS, other tribal and federal programs, and consultants, among others. In particular, those responding to our survey reported that they most commonly contact HHS regional office staff, other tribes, and private consultants (see fig. 7). In addition, 9 out of the 11 tribes we visited indicated that they also work with other federal programs to help address challenges. According to our survey, tribes most commonly contacted HHS regional office staff for assistance; all 48 tribes responding to this question (100 percent) reported doing so. 
According to HHS officials, tribes often reach out to their regional offices for guidance and technical assistance to address challenges that can occur during the initial implementation of their TANF programs. Regional offices provide most of HHS’s training and technical assistance to tribes, and the majority of their assistance focuses on the development and oversight of tribal TANF plans. Regional offices also inform tribes about policy and procedural updates and provide clarification if needed. For example, when the Recovery Act’s TANF Emergency Contingency Fund became available, HHS provided technical assistance and outreach through its regional offices. HHS regional offices also hold regional tribal TANF conferences, typically once per year. In addition, one senior HHS official said that regional staff can also conduct in-person site visits to provide direct one-on-one assistance to tribes, but they generally lack the resources to do any extensive travel. To enhance employment and training opportunities for program participants, tribal TANF programs reach out to other programs within their tribe (see fig. 8), and many also contact other tribes when they experience challenges administering their program. The Menominee Tribe has worked with its local tribal college to provide different education and training opportunities to its tribal TANF participants, such as degree and trades programs (for a related photo of the Menominee tribal college, see app. III). In our survey, 46 out of 47 tribes (98 percent) responding to the question indicated that they contact other tribes when they experience challenges administering their own TANF programs. 
For example, tribal TANF administrators from one program we spoke with participated in meetings with other tribal TANF programs in their state, which they found more valuable than HHS regional meetings for coordinating with other tribal TANF programs on particular issues and sharing information about such topics as data, tribal TANF plans, and HHS guidance. To enhance employment opportunities for tribal TANF participants and address some of their barriers to self-sufficiency, tribes also collaborate with other federal programs. Tribes seek out these partnerships, in part, because their TANF programs are typically serving areas with high unemployment rates. Some tribes we visited told us they collaborated with such programs to create opportunities for individuals to help meet their work participation requirements. For example, at least three tribes we visited—the Lac du Flambeau, Zuni, and Hopi tribes—placed participants at their Head Start offices to gain work experience (for a related photo of the Lac du Flambeau Head Start program, see app. III). The Hopi tribe has also sent some participants to an orientation for the Job Corps program, an education and training program that helps young people learn a career, earn a high school diploma or General Equivalency Diploma, and find and retain employment. To facilitate further coordination with federal programs and address challenges related to program implementation and staffing, tribal TANF programs can also participate in a “477 plan” administered by DOI. According to DOI, when TANF is integrated into a comprehensive “477 plan,” participants may receive additional support services, such as longer-term job preparedness, and “477 plan” case managers can receive additional training from DOI to better assist unemployed tribal members with finding jobs. DOI also noted that consolidating resources into a single plan helps to minimize overhead costs, maximize client participation, and integrate services. 
Of the 11 tribes responding to our survey that include their TANF program in such a plan, nearly all indicated that doing so improved service delivery coordination (10 out of 11), increased continuity of service provision (10 out of 11), and improved administrative and staff coordination (9 out of 11). For example, one tribe we visited said their participation in a “477 plan” allowed them to reduce paperwork and duplication among their various federal programs, including TANF, and to provide a one-stop service location as well. Tribal TANF programs also coordinate with other federal grant programs under HHS, such as the NEW program. Of the tribes we surveyed, 18 out of 22 (82 percent) that operate both NEW and TANF programs reported improved service delivery coordination as a benefit of operating both programs. In addition, 16 out of the 22 tribes indicated that they had benefited from increased continuity of service provision and improved administrative and staff coordination as a result of administering both programs. In our survey, 37 out of 46 tribes (80 percent) reported that they contact private consultants when they experience challenges administering their TANF programs. During our site visits, we learned that tribes reach out to consultants to address some of the top challenges reported in our survey, such as developing data systems and staff expertise. For example, to address challenges related to developing adequate data systems, the majority of tribes obtain their data systems and related training from consultants, according to HHS. Another tribe we visited consulted with the University of California, Davis to help facilitate the development of goals for their tribal TANF plan, which included clarifying the tribe’s definitions for performance and results. 
In addition, the Center for Human Services at the University of California, Davis annually presents a National Tribal TANF Institute to provide information, tools, and networking opportunities to support the development and operation of tribal TANF programs that meet the needs of Native people. According to HHS officials, the single audit is the primary oversight mechanism for tribal TANF programs, and single audit findings are used to target technical assistance to tribes. At least 19 tribes have had repeat single audit findings since 2002, most often in the areas of allowable costs/cost principles, reporting, eligibility, cash management, and equipment and property management. One official from an HHS regional office explained that a lack of infrastructure and the inability to retain qualified staff in tribal TANF programs are often the main causes of repeat audit findings such as these. HHS officials described how record keeping can be a challenge for tribal TANF programs as a result of inadequate computer systems. More specifically, one senior HHS official stated that one of the most common findings from tribes’ audits is weaknesses in procurement systems, where documents supporting procurement purchases are missing or incomplete, or inventory lists are missing. One HHS regional official added that tribes are especially susceptible to financial audit findings because of staff turnover—a tribe could be making progress in addressing their audit findings, but then a key staff member may leave, and the tribe is “back to square one.” To help tribes prevent financial audit findings resulting from new or inexperienced staff, one regional official stated that they invite new tribal TANF financial officers to come to their offices for basic training on the TANF program and fiscal issues. 
Another HHS official described how they have also used single audit report findings to target their training and technical assistance by holding sessions on common audit findings and resolutions at some of their annual tribal TANF conferences. HHS has also included guidance on single audits in some of its policy manuals available on its tribal TANF Web site, such as an audit supplement guide that outlines common tribal TANF audit findings and program activities that can ensure compliance with government regulations. Additionally, HHS has the authority to impose financial penalties if it decides they are warranted. However, we found that HHS’s tracking of single audit reports was fragmented, with multiple systems tracking different sets of reports with tribal TANF findings. An HHS Office of Inspector General (OIG) official explained that audits are tracked in an agencywide single audit database that HHS’s OIG oversees, and some program offices, including ACF, have their own database for tracking audits they are responsible for resolving. However, audits with tribal TANF findings may not always be tracked in ACF’s database, because depending on the nature of the finding, another HHS program office or even another federal agency may be responsible for resolving it. For example, HHS officials explained that audit reports with crosscutting findings affecting multiple programs usually do not show up in ACF’s database, as they are assigned to and tracked by HHS’s Office of Finance, Division of Systems Policy, Program Integrity and Audit Resolution, which is responsible for handling or resolving these specific types of findings. However, a summary of all audit findings is sent to the HHS tribal TANF program office for its review. One senior HHS tribal TANF official confirmed that due to workload priorities, they are behind in reviewing these summaries of audit findings for both state and tribal TANF programs.
One HHS OIG official explained that these summary reports contain all audit findings for tribal TANF programs, including those that the tribal TANF office is not responsible for resolving. If these summary reports are not reviewed in a timely manner, tribal TANF officials may not be aware of all recurring audit findings related to tribal TANF programs. For example, one tribe was found to have not met compliance requirements for allowable costs or cost principles for each of the five consecutive years it submitted single audit reports. According to information provided by the OIG, ACF was responsible for resolving some but not all of the findings, and thus may not have known that these findings had occurred every year if they did not review the summary reports in a timely manner. Our Standards for Internal Control in the Federal Government provide that internal control monitoring should ensure that findings of audits and other reviews are promptly resolved. Due to the delays in reviewing the summaries and the fragmented systems for reporting and tracking single audit findings, HHS tribal TANF officials may not consistently be aware of all the single audit findings related to tribal TANF programs, or be in a position to promptly identify and address recurring problems and mitigate risk. In addition to HHS officials’ use of the single audit as the primary oversight mechanism for tribal TANF programs, quarterly data reports used to calculate work participation rates and financial reports, as well as other program reporting requirements, also help HHS oversee program performance and ensure program integrity (see table 2). Through quarterly data reports, HHS reviews tribes’ data used to calculate work participation rates and follows up with tribes to make updates and changes to the data as necessary (see fig. 9). 
One tribe we visited mentioned that it is helpful to have HHS review their data to make sure that both HHS and the tribe itself are calculating the work participation rates correctly and arriving at the same numbers. However, HHS does not consistently update and review tribal TANF quarterly work participation data submitted by tribes in a timely manner. We found that in some cases, it has taken HHS several years to review, update, and share the results of its work participation rate data review with tribes, even though these rates help tribes measure the degree to which TANF families are engaged in work activities that can lead to self-sufficiency. According to our survey, 22 out of 49 tribes (45 percent) indicated that failing to receive data reports from HHS in a timely manner has been a very major or major challenge to administering their tribal TANF program. One tribe in particular stated that they received an official response from HHS regarding their fiscal year 2009 participation rates two years later, in fiscal year 2011. The tribal official noted in our survey that “tribes have deadlines to meet and we have to wait years for a response.” Another tribe responding to our survey stated that while they submit their data reports to HHS each quarter, they are waiting up to three years to receive their reports back from HHS. Because HHS does not review and share work participation rate data with tribes in a timely manner, tribes may not know of any errors in their data reporting until years later, which could affect not only the data reporting for that year but also the reporting for subsequent years. An HHS contractor primarily responsible for working with tribes on their TANF work participation data said that, in his opinion, tribal TANF data present different challenges for HHS than state TANF data.
For example, updating information on tribes’ work participation rates requires keeping track of 64 different TANF plans, where the work participation rates or work hours often change every year, which, according to the HHS contractor, is not as common with state TANF programs. In addition, the number of plans will continue to grow as more tribes have expressed interest in starting their own programs. The same HHS contractor noted that while it would be useful for tribes to see their work participation calculations or to access information for a specific month or year, access to that type of information would require additional programming. Furthermore, he said it would take a more sophisticated program than what HHS is currently using—such as a program flexible enough to define the different tribal TANF variables—to help make updating the data to share with tribes a little easier. HHS uses different methods to provide guidance to tribes on their TANF programs (see fig. 10). Tribal TANF programs are generally satisfied with the assistance they receive from HHS, but some cited deficiencies. When asked to rate different types of assistance received from HHS headquarters and regional offices, the majority of the tribal TANF respondents in our survey indicated that they found guidance and policy documents, technical assistance, training and conferences provided to them by HHS to be somewhat to very useful (see fig. 11). However, many tribes noted that the assistance that HHS headquarters and regional offices provided to them for data reporting and data system development was only slightly or not useful, with some tribes also indicating that they had not received these types of assistance from HHS at all (see fig. 11). For example, 13 out of 40 tribes (33 percent) indicated that data system development assistance from HHS headquarters was only slightly or not useful, while 14 (35 percent) indicated that they had not received any assistance in this area. 
Tribes also reported that HHS regional and headquarters offices’ timeliness varied in responding to requests for assistance. Specifically, tribal TANF survey respondents indicated greater satisfaction with the speed of HHS regional offices compared to headquarters (see fig. 12). For example, 33 out of 48 tribes (69 percent) responding to our survey said that they were very satisfied with the speed of HHS regional office staff, while only 18 of 46 tribes (38 percent) were very satisfied with the speed at which HHS headquarters office staff respond to their requests. However, some of the survey respondents and tribes we interviewed described situations where HHS regional and headquarters assistance was not timely. For example, some tribes we interviewed described how they had repeatedly sent emails to their regional office asking for information, but the region was unresponsive. Some tribes we interviewed at regional conferences and who responded to our survey also indicated that delayed responses from HHS were particularly frustrating when they were trying to figure out new policies or when they had a limited time in which they could act—such as implementing the new financial form or submitting their Emergency Contingency Fund applications. One senior HHS official explained that even those questions that seem simple on the surface may have greater implications, so all questions must be reviewed and vetted to provide an accurate response, which takes time. However, one tribe stated that HHS made a general announcement about the new financial reporting form in 2007, and then they never heard anything else about the development or implementation of the form until 2 years later, when they were required to start using it. This tribe also noted that HHS held a training meeting with tribes to discuss its requirements after tribes were already required to begin reporting program data using the new financial form. 
As a result, they did not have an opportunity to include in the reports what they learned in the training. In our survey, another tribe described how they do not receive notifications about program changes from HHS in a timely manner, stating that “we are expected to implement . . . federal requirements immediately, without immediate guidance or training if needed.” Tribes also indicated in our survey that tribal TANF guidance provided by HHS regional and headquarters offices via phone, email, and training conferences was not always clear or consistent. In our survey, 18 out of 50 tribal TANF respondents (36 percent) indicated that HHS policy on subsidized employment was not clear. HHS officials stated that the timeframes for implementing the Recovery Act did not allow for the issuance of proposed and final rules. HHS posted questions and answers on subsidized employment on their Web site and sought to provide guidance when they could. However, some tribes explained that in general, they receive mixed messages from HHS’s regional offices and headquarters, and sometimes even from different staff members within the same regional office. Tribes told us that there seems to be some confusion at HHS over how and what information is communicated to tribes, with HHS staff sending tribes incorrect or inconsistent information on tribal TANF policies. For example, one tribe responding to our survey noted that their regional office contact does not always provide direct answers to tribal leaders, which can lead to misinterpretation, while another tribe we visited noted that different HHS regional offices had different interpretations of what types of activities count as cultural activities. HHS officials stated that because tribes are very diverse, it is difficult to have a “one size fits all” approach to developing some of the policies that tribes want guidance on, such as cultural activities allowed to meet work participation requirements.
Federal officials would prefer to give tribes broad flexibility to determine for themselves what constitutes an appropriate activity. However, some tribes expressed frustration with this approach, citing how the cultural activities they choose to include in their tribal TANF plans are still subject to review by HHS and are not always approved. In addition, tribes have received different kinds of guidance in different formats, and not all tribes were satisfied with the way in which HHS provided it. One tribe described how all of the guidance they received on subsidized employment was shared informally via phone calls and emails—there was no official policy memo from HHS that detailed this guidance. This tribe also acknowledged that it can be difficult to provide policy information to all of the tribal TANF programs at one time, and suggested that HHS leverage its Web site to provide relevant guidance and ensure that all tribes have access to the same information. For example, documents related to past regional tribal conferences, Web casts, and information on tribal TANF technical assistance services are posted on a different HHS Web site, and are not linked to the tribal TANF Web page. As a result, tribes may not know that this information is related to tribal TANF and available to them online. Further, some tribes cannot always attend HHS’s annual regional training conferences, and as a result they miss out on training opportunities and access to key information or guidance. One HHS regional office added that regional offices are not universally consulted or pulled in by HHS headquarters to strategize on technical assistance efforts, or to come up with collective objectives and goals. As a result, the types and amount of technical assistance provided to tribes by each of the different regions vary.
The majority of the HHS regional offices we interviewed said that they would like to be able to visit tribes in person to provide more one-on-one training and guidance when tribes need or want it, but recognized that there are limited resources for travel. HHS headquarters officials described how limited travel funds affected their ability to visit tribal TANF programs in person as well. Additionally, tribes indicated in our survey that they would like to receive more assistance from HHS—33 out of 44 respondents (75 percent) wanted additional assistance from HHS regional offices, while 30 out of 38 respondents (79 percent) wanted more assistance from HHS headquarters. Regional offices do not always receive clear and consistent guidance from HHS headquarters on new policies, either. HHS officials told us that tribal TANF policies are primarily created in their headquarters office, and then it is up to the regional offices to provide much of the training and technical assistance to tribes related to these policies. One regional official stated that they do not have written policies or guidance on what they should do if tribes are having difficulties administering their tribal TANF program, but that this is the same for state TANF programs, too. Another regional official said they had asked the HHS headquarters office for guidance on the new financial reporting form for a year before they received it. As one tribe indicated, this resulted in the regional office being unable to answer questions from tribes about the new form. Further, because HHS and DOI did not always agree on how to coordinate oversight of tribal TANF programs incorporated in “477 plans,” tribes with “477 plans” were sometimes confused over which agency’s rules and regulations they are required to follow.
For example, one tribe responding to our survey described how HHS and DOI still needed to provide them a definitive answer as to whether or not the Emergency Contingency Fund grant which was transferred to their “477 plan” program could be expended until the end of fiscal year 2011. One HHS official described how in the past, DOI did not provide HHS with written regulations or terms and conditions for “477 plans” in general, and this made it difficult for them to know how to implement tribal TANF as part of the “477 plan.” In response, a DOI official explained that DOI purposely did not develop any regulations because adding more rules would diminish the flexibility of the plans, which is contrary to their principal goals. However, both DOI and HHS officials told us that they have been coordinating more to share information with each other and develop policies together, such as recent joint consultations with tribes with “477 plans” on using one funding instrument. While the tribal TANF program as a whole is relatively small in comparison to the TANF program for states, Congress designed tribal TANF in recognition that tribes, like states, would be better equipped to understand and meet the needs of their own communities. However, since the creation of tribal TANF, HHS’s administration of the program has not kept pace with the growth of tribal TANF or with tribes’ changing needs. Improved access to information on how to implement parts of their TANF program or policy changes that could affect their programs can facilitate tribes’ achieving program goals. Further, more prompt, consistent collection and review of all tribal TANF-related single audit report findings and work participation rate, caseload, and financial data could help HHS to more effectively monitor tribal TANF programs and determine how it could better target its technical assistance and guidance to address areas where tribes may be having difficulty.
In addition, more timely HHS data analysis could improve both the accuracy of tribes’ data reporting and the ability of HHS and tribal administrators to determine if tribal TANF programs are effectively maintaining program integrity and meeting their goals. Given the fiscal pressures facing the federal government and the continued demands placed on assistance programs, it is critical that programs designed to serve those most in need are in a position to provide benefits and services as effectively and efficiently as possible while maintaining program integrity. Unless HHS makes improvements in the consistency and availability of single audit report findings, tribal TANF policy guidance, and program data, tribal TANF program administrators will not have the complete information they need to improve the effectiveness and integrity of their programs. To improve guidance and oversight of tribal TANF programs, we recommend that the Secretary of Health and Human Services take the following three actions:

Review and revise, as appropriate, HHS’s process for monitoring, tracking, and promptly resolving tribal TANF single audit findings so that it can more systematically target training and technical assistance to better address recurring problems and mitigate risk.

Improve processes for maintaining and monitoring tribal TANF data—such as work participation rate, caseload, and financial data—that can be shared with tribes in a timely manner.

Create procedures to provide more timely, accessible, and consistent guidance on tribal TANF policies that is clearly communicated to tribal TANF programs, and ensure that all tribal TANF policy developments and procedures are readily and easily accessible on HHS’s Web site.
For example, HHS could consider more effective ways to provide training to tribes on how new guidance or policy decisions will affect the administration of their programs, and consistently update its Web site to provide information on related tribal TANF technical assistance and training. We provided a draft of this report to the Secretary of the Interior and the Secretary of Health and Human Services for review and comment. HHS provided us with written comments on a draft of our report, which are reprinted in appendix IV. Both DOI and HHS also provided us with technical comments that we incorporated, as appropriate. HHS agreed that effective monitoring and continuous improvement of its guidance and technical assistance to tribes as well as to states and other grantees is important, and stated its appreciation for our findings on areas where monitoring, guidance, and technical assistance to tribal TANF programs could be improved. HHS also stated that it would be mindful of our overall recommendations and specific examples of ways to improve its efforts, and it is already outlining actions it plans to take to address our recommendations. Specifically, with regard to our first recommendation, HHS commented it would review and seek to identify opportunities for improvement at each step of its process for monitoring, tracking, and resolving tribal TANF audit findings, including the identification of recurring problems and risks, and the identification of technical assistance needs identified through the audit resolution process. During the course of our audit work, HHS officials could not find some single audit reports, but they were recently able to provide them, so we removed our finding related to this from the report. HHS also stated that it will take follow-up steps to ensure that all audits with tribal TANF findings will be promptly addressed, and has committed additional staff to working on audit issues.
In response to our second recommendation, HHS recognized the need for more timely sharing of data with tribal TANF programs, and cited efforts it is undertaking to address this, including the hiring of an additional tribal TANF data specialist and its continuing work on improving reporting and publishing of preliminary and final caseload and work participation data for recent years. With regard to our third recommendation, HHS stated it would strengthen its efforts to be attentive to opportunities for improvement in training and technical assistance, but it also commented on how we presented findings on its guidance to tribal TANF programs. First, HHS noted that while our report title and text highlight the need to improve guidance, the data provided in the report generally indicate a high level of satisfaction with the guidance and technical assistance currently being provided. We state in our report that tribal TANF programs responding to our survey were generally satisfied with the assistance they received from HHS, but some respondents did cite specific weaknesses in areas such as data system development and reporting. Additionally, in multiple open-ended survey responses and interviews with us, tribal TANF staff cited instances where the timeliness, clarity, and consistency of guidance could be improved, and our title reflects the need for HHS to examine these areas further. In regard to the specific survey findings on data system development and reporting, HHS clarified that the need for additional data system development assistance reflects a need to increase capacity across a broad range of HHS programs, but it is training new employees to assist with data reporting. In addition, HHS noted that the regional and headquarters offices work together, and that a tribe may be unaware that the headquarters office contributed to assistance received through a regional office.
HHS also stated that a question raising complex issues would typically be reviewed by both the regional and headquarters offices, and would likely take longer to resolve. We point out in our report that tribes often cited frustration with not receiving consistent and timely information from both regional and headquarters offices, especially on policy changes that allowed them only a limited time in which to act. Thus, it is important that if both offices are indeed collaborating to provide assistance to tribal TANF programs, their information be consistent and timely for all tribes in all regions. It can be challenging to work with multiple tribes that each have their own unique tribal TANF programs, but if it is taking the regional and headquarters HHS offices longer to resolve a particular question, it would be helpful if they communicated this to the tribes, especially if it is related to a policy change with a specific timeframe or deadline. Finally, HHS described how ACF has committed to undertake additional research initiatives to better understand the needs of tribal members, operations of tribal TANF programs, and effective practices. These studies could be helpful in providing HHS with more information on better ways to support the tribes. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from its issue date. At that time, we will send copies of this report to appropriate congressional committees, the Secretary of Health and Human Services, the Secretary of the Interior, and other interested parties. The report will also be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7215 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report.
Key contributors to this report are listed in appendix V. To obtain information on how tribal Temporary Assistance for Needy Families (TANF) programs have changed since 2002 (when we last reviewed the program), the challenges tribes face in administering their own programs and what tribes have done to address them, and federal agencies’ guidance and oversight of tribal TANF programs, we analyzed federal TANF data, documents, and tribal TANF single audit data collection reports for selected years; surveyed all tribal TANF administrators; conducted site visits at 11 tribal TANF programs in four states; and interviewed federal officials. We conducted our work from June 2010 to September 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Because the U.S. Department of Health and Human Services (HHS) is responsible for collecting tribal TANF data and reporting on tribal TANF programs nationally, we reviewed relevant TANF data compiled by that agency. Specifically, we reviewed both published and unpublished data for fiscal years 2002 to 2009 on (1) the work participation status of all tribal TANF adults, (2) work activity data for those TANF adult recipients with activities, (3) tribes that met and did not meet work participation rates, (4) tribes’ caseloads, and (5) tribes’ expenditure data. The expenditure data analysis also includes tribal TANF expenditure data provided by the U.S. Department of the Interior (DOI) for those 15 tribes that include TANF in a “477 plan” for fiscal years 2002 to 2009.
We also reviewed fiscal year 2009 and 2010 expenditure data from HHS for the American Recovery and Reinvestment Act of 2009 (Recovery Act) Emergency Contingency Fund and tribes’ applications for these funds. We interviewed HHS officials to gather information on the processes they use to ensure the completeness and accuracy of the tribal TANF work participation, work activity, caseload, and expenditure data, but we did not independently verify these data with tribes. However, we did follow up with HHS during the course of our analysis whenever we found any inconsistencies or errors with the data in order to ensure that the data were complete, reasonable, and sufficiently reliable for the purposes of this report. In some cases, we received revised information from the agency. We also reviewed DOI documentation related to expenditure data reporting. We found these data to be sufficiently reliable for our purposes. In addition, we reviewed selected documents submitted by tribes to HHS, which the agency does not publish. For example, we reviewed all 24 tribes’ HHS-approved applications for the Emergency Contingency Fund as of June 2011, mentioned above, and all 64 tribal TANF plans approved by HHS as of October 2010. In addition, we reviewed published and unpublished documents from HHS and DOI, such as all seven of the Welfare Peer Technical Assistance Network needs assessments for tribal TANF and Native Employment Works (NEW) programs and a sample of four “477 plan” assessments selected and provided by DOI officials. HHS does not regularly perform on-site reviews of tribes’ TANF data, but auditors periodically review tribal TANF programs to comply with the Single Audit Act of 1984, as amended. To determine if there were any significant tribal TANF or “477 plan” single audit compliance findings, we reviewed all 398 Office of Management and Budget Circular No. 
A-133 single audit data collection reports publicly available as of May 2011 that included tribal TANF or “477 plan” programs for fiscal years 2002 through 2010. Then we reviewed the specific types of compliance findings in all 114 available single audit data collection reports for tribal TANF programs in existence for 2 years or longer with significant compliance findings for the majority of years that their program was in existence for fiscal years 2002 through 2010. To better understand tribal TANF programs, we conducted a Web-based survey of all tribal TANF administrators for all 64 tribal organizations that administer their own TANF program. The survey included questions about the benefits and challenges of administering a tribal TANF program, changes to TANF service delivery related to the economic recession, and HHS assistance to tribes after the Recovery Act. The survey was conducted from October to December 2010, with 50 out of the 64 tribal TANF administrators (78 percent) responding. We obtained contact information for surveyed tribal TANF administrators from HHS. Beginning on October 25, 2010, we sent e-mail notifications to these officials, and we sent two follow-up e-mails over a period of about 2 weeks to encourage tribes to respond to our survey. We also made follow-up phone calls to encourage nonrespondents to complete our questionnaire. Because this was not a sample survey, there are no sampling errors. However, the practical difficulties of conducting any survey may introduce nonsampling errors, such as variation in how respondents interpret questions and their willingness to offer accurate responses. We took steps to minimize nonsampling errors, including pretesting draft instruments and using a Web-based administration system. Specifically, during survey development, we pretested draft instruments with three tribal TANF administrators from three states (Alaska, California, and Washington) in September and October 2010.
We selected the pretest tribes to provide variation in selected program characteristics and geographic location. In the pretest, we were generally interested in the clarity, precision, and objectivity of the questions, as well as the flow and layout of the survey. For example, we wanted to ensure that definitions used in the survey were clear and known to the respondents, categories provided in closed-ended questions were complete and exclusive, and the ordering of survey sections and the questions within each section was appropriate. We revised the final survey based on pretest results. Another step we took to minimize nonsampling errors was using a Web-based survey. Allowing respondents to enter their responses directly into an electronic instrument created a record for each respondent in a data file and eliminated the need for and the errors associated with a manual data entry process. To further minimize errors, programs used to analyze the survey data and make estimations were independently verified to ensure the accuracy of this work. While we did not validate specific information that tribal TANF administrators reported through our survey, we reviewed their responses, and we conducted follow-up, as necessary, to determine that their responses were complete, reasonable, and sufficiently reliable for the purposes of this report. For example, we reviewed responses and identified those that required further clarification and subsequently followed up with those tribes to ensure the information they provided was reasonable and reliable. In our review of the data, we also identified and logically fixed skip pattern errors for questions that respondents should have skipped but did not.
To gather additional information on how tribal TANF programs have changed since 2002, the challenges tribes face in administering their own program and what tribes have done to address them, and federal agencies’ guidance and oversight of tribal TANF programs, we conducted site visits to 11 selected tribes administering TANF programs in Wisconsin, New Mexico, Arizona, and California to interview tribal TANF administrators and their staff about their programs (see table 3). We visited these tribes from November 2010 to January 2011. We selected these tribes because they varied in geographic location and selected tribal TANF program characteristics, including the size of the tribal service population, the number of years operating their tribal TANF program, program structure (e.g., tribes with “477 plans”), and type and amount of TANF and TANF-related program funding received (e.g., NEW grants, the Recovery Act’s Emergency Contingency Fund, and state funding). We also selected tribes that were located in both urban and rural areas to ensure that we captured any related differences in TANF program implementation as well as the types of challenges tribes may face. During the site visits, we interviewed tribal TANF officials and staff as well as TANF participants. Through these interviews, we collected information on tribes’ TANF services and work activities, the benefits and challenges of administering a TANF program, the impacts of the economic recession, and tribes’ working relationship with federal agencies as well as with other tribes administering a TANF program. We cannot generalize our findings beyond the tribes we visited. To learn more about federal agencies’ oversight and guidance of tribal TANF, we conducted interviews with DOI officials and HHS officials in headquarters and all regional offices serving areas where tribal TANF programs were located. 
These six regional offices are located in Chicago, IL; Dallas, TX; Kansas City, MO; Denver, CO; San Francisco, CA; and Seattle, WA. We also attended HHS regional tribal TANF conferences in California and Washington. In addition, we interviewed tribal TANF consultants and reviewed relevant information from past GAO, HHS, DOI, nonprofit, academic, and research institutions’ reports on tribal TANF, and reviewed relevant federal laws, regulations, and guidance related to tribal TANF.

Appendix II: Approved Tribal TANF Programs (Fiscal Years 2002-2010)

Alaska: Association of Village Council Presidents, Inc. (serves 56 Alaska Native villages); Cook Inlet Tribal Council, Inc. (serves all members of federally recognized tribes in the Municipality of Anchorage); Central Council Tlingit and Haida Indian Tribes of Alaska (serves 20 Indian and Alaska Native villages); Bristol Bay Native Association (serves 29 Alaska Native villages)

Arizona: Navajo Nation (also in New Mexico and Utah); San Carlos Apache Tribe

California: Robinson Rancheria/California Tribal TANF Partnership (serves 16 tribes); North Fork Rancheria of Mono Indians; Owens Valley Career Development Center (serves 8 tribes); Morongo Band of Mission Indians; Southern California Tribal Chairmen’s Association, Inc. (serves 18 tribes); Scotts Valley Band of Pomo Indians; Federated Indians of Graton Rancheria (serves 4 tribes); Karuk Tribe; Round Valley Indian Tribes; Shingle Springs Band of Miwok Indians

Montana: Chippewa Cree Tribe of the Rocky Boy’s Reservation

Nevada: Washoe Tribe of Nevada and California (serves 2 tribes)

Oklahoma: Muscogee (Creek) Nation

Washington: Spokane Tribe of Indians; South Puget Intertribal Planning Agency (serves 4 tribes)

Additional tribal TANF programs that started from FY 2003 through 2010 (28)

In addition to the contact named above, Kathy Larin (Assistant Director), Rachel Frisk, Kristy Kennedy, Meredith Moore, Brenda Muñoz, and Heddi Nieuwsma made significant contributions to this report.
Joanna Chan, Lorraine Ettaro, and Stuart Kaufman also made important contributions to this report. David Chrisinger and Mimi Nguyen provided writing and graphics assistance, and Alex Galuten provided legal assistance.

Indian Issues: Observations on Some Unique Factors that May Affect Economic Activity on Tribal Lands. GAO-11-543T. Washington, D.C.: April 7, 2011.

Human Services Programs: Opportunities to Reduce Inefficiencies. GAO-11-531T. Washington, D.C.: April 5, 2011.

Temporary Assistance for Needy Families: Implications of Caseload and Program Changes for Families and Program Monitoring. GAO-10-815T. Washington, D.C.: September 21, 2010.

Temporary Assistance for Needy Families: Implications of Recent Legislative and Economic Changes for State Programs and Work Participation Rates. GAO-10-525. Washington, D.C.: May 28, 2010.

Temporary Assistance for Needy Families: Implications of Changes in Participation Rates. GAO-10-495T. Washington, D.C.: March 11, 2010.

Temporary Assistance for Needy Families: Fewer Eligible Families Have Received Cash Assistance Since the 1990s, and the Recession’s Impact on Caseloads Varies by State. GAO-10-164. Washington, D.C.: February 23, 2010.

Welfare Reform: Better Information Needed to Understand Trends in States’ Uses of the TANF Block Grant. GAO-06-414. Washington, D.C.: March 3, 2006.

Welfare Reform: More Information Needed to Assess Promising Strategies to Increase Parents’ Incomes. GAO-06-108. Washington, D.C.: December 2, 2005.

Welfare Reform: HHS Should Exercise Oversight to Help Ensure TANF Work Participation Is Measured Consistently across States. GAO-05-821. Washington, D.C.: August 19, 2005.

Indian Child Welfare Act: Existing Information on Implementation Issues Could Be Used to Target Guidance and Assistance to States. GAO-05-290. Washington, D.C.: April 4, 2005.

Welfare Reform: Rural TANF Programs Have Developed Many Strategies to Address Rural Challenges. GAO-04-921. Washington, D.C.: September 10, 2004.
Indian Economic Development: Relationship to EDA Grants and Self-determination Contracting Is Mixed. GAO-04-847. Washington, D.C.: September 8, 2004.

TANF and Child Care Programs: HHS Lacks Adequate Information to Assess Risk and Assist States in Managing Improper Payments. GAO-04-723. Washington, D.C.: June 18, 2004.

Welfare Reform: Tribal TANF Allows Flexibility to Tailor Programs, but Conditions on Reservations Make it Difficult to Move Recipients into Jobs. GAO-02-768. Washington, D.C.: July 5, 2002.

Welfare Reform: Tribes Are Using TANF Flexibility To Establish Their Own Programs. GAO-02-695T. Washington, D.C.: May 10, 2002.
The Personal Responsibility and Work Opportunity Reconciliation Act of 1996 (PRWORA) gives American Indian tribes the option to administer their own Temporary Assistance for Needy Families (TANF) block grant programs. GAO first reported on the use of this flexibility by tribes in 2002 (GAO-02-768), and given the upcoming expected reauthorization of TANF, GAO was asked to examine (1) how tribal TANF programs have changed since 2002, especially in light of changing economic conditions; (2) the challenges tribes face in administering their own TANF programs and what tribes have done to address them; and (3) the extent to which the U.S. Department of Health and Human Services (HHS) has provided guidance and oversight to promote the integrity and effectiveness of tribal TANF programs. GAO analyzed federal TANF data; interviewed federal officials; surveyed all tribal TANF administrators; and conducted site visits at 11 tribal TANF programs in four states. Since GAO first reported on tribal TANF programs in 2002, the number of programs has increased--from 36 in 2002 to 64 in 2010. In addition, more tribes use program flexibilities to both tailor services to meet the needs of their TANF families and cope with changing economic conditions. GAO also found that some tribes have increased their work participation rate goals over time. For example, more than half of the 36 tribes that have been administering a TANF program since 2002 have raised these goals over time. Many tribes also allow a wide range of activities families can use to meet work participation rates, such as cultural activities or commuting time. Tribes also reported in GAO's survey that changing economic conditions have adversely affected their caseloads, funding, and services provided. 
For example, some tribes reported that since the beginning of the economic recession in 2007, they have larger average monthly caseloads, use other federal funding to fill budget gaps, and cut back supportive services to provide more cash grants. According to GAO's survey results, tribal TANF programs face challenges with initial program implementation, staff development and retention, and the development of adequate data systems. Moreover, all 11 tribes GAO visited talked about the various barriers to self-sufficiency facing their TANF participants, such as a lack of transportation and limited employment opportunities. To address these challenges, many tribes reach out to HHS regional office staff, other tribal and federal programs, and private consultants. For example, to address challenges related to developing adequate data systems, GAO learned that the majority of tribes use consultants to develop their systems and provide training. In addition, to enhance employment opportunities, some tribes have placed participants at their Head Start offices, while another tribe has partnered with its modular housing plant. HHS provides oversight and guidance for tribal TANF programs, but does not always do so in a timely or consistent manner. HHS officials told GAO that they use tribal TANF single audit report findings to target training and technical assistance to tribes. However, the systems that HHS uses to track these reports are fragmented, and as a result, tribal TANF officials may not consistently be aware of all the single audit findings related to tribal TANF programs, or be in a position to promptly identify and address recurring problems and mitigate risk. Other oversight tools, such as quarterly data reports used to calculate work participation rates, are not consistently updated by HHS in a timely manner, which, according to GAO's survey, is a challenge to tribes' administration of their TANF programs. 
HHS headquarters and regional offices provide guidance such as basic policy manuals, training at yearly conferences, and one-on-one assistance over the phone. However, some tribes expressed difficulty in finding and receiving clear, consistent, and timely guidance from HHS, which hinders their ability to successfully manage tribal TANF programs and finances. GAO recommends that HHS review its process for tracking related single audit reports, improve processes for maintaining tribal TANF data that can be shared in a timely manner, and provide timely, accessible, and consistent guidance that is clearly communicated to its tribal TANF programs. HHS commented it will be mindful of these recommendations as it examines ways to improve its efforts.
DOD has designated four locations to process DCPS payroll transactions. Three of the locations—Denver, Colorado; Pensacola, Florida; and Charleston, South Carolina—were in operation at the time of our review. The fourth location, in Omaha, Nebraska, is scheduled to begin processing DCPS transactions in August 1995. DCPS was paying civilian employees from four DCPS databases in December 1993 and was scheduled to increase the number of databases to nine by August 1995. DOD designated the Financial Systems Activity at Pensacola, Florida, as the Central Design Activity responsible for maintaining and updating the payroll system. The Defense Civilian Personnel Data System (DCPDS)—DOD’s standard personnel system—provides DCPS with most essential personnel information needed to pay Navy civilian employees. DCPDS has an automated interface with the payroll system that is designed to automatically transfer personnel information such as employee name, social security number, job grade or step, and salary. The personnel data are entered into the personnel system by individual Navy Human Resource Offices throughout the country. These Human Resource Offices are the responsibility of the Navy’s Assistant Secretary for Manpower and Reserve Affairs. Generally, Navy civilian employee time and attendance data for actual hours worked are entered into DCPS separately by the timekeepers at the employee’s work location. To evaluate the propriety and accuracy of Navy civilian payroll payments, we performed computer analyses of pay records to (1) determine if payments were made only to authorized personnel and (2) identify any payments made in excess of authorized amounts. To determine if payments were being made to authorized personnel, we obtained and compared electronic copies of Navy civilian payroll and personnel records for the pay period ending December 25, 1993. At that time, DCPS was paying about 188,000 Navy civilian personnel.
In addition, we reviewed the Uniform Financial Management System in Arlington, Virginia, and the Uniform Automated Data Processing System in Honolulu, Hawaii, which paid about 28,000 and 9,000 Navy civilians, respectively, at that time. In total, we reviewed the propriety and accuracy of payroll payments made to about 80 percent of the 281,000 civilians employed by the Navy in 1993. To ensure that we had all and only the payroll data for that pay period, we matched the total payroll amounts for each payroll office with their corresponding payroll certification report. To identify potential improper payments or overpayments, we conducted various computer matches and searches to identify Navy civilians who received multiple DCPS payments; were paid without an active personnel record; were paid at a higher rate than authorized; had high annual leave balances and did not use annual leave, sick leave, or compensatory time in 1993; and were paid by both DCPS and other civilian pay systems. To determine if any payments in excess of authorized amounts existed within the universe of the potential overpayments identified through our tests discussed above, we provided DFAS with our test results and requested that DFAS contact Navy Personnel and jointly determine whether any payments were made in excess of authorized amounts. Because DFAS and Navy Personnel had not yet responded after 4 months, we contacted about 60 Navy Human Resource Offices throughout the country and requested a copy of the official personnel record showing the authorized pay rate for each potentially overpaid Navy civilian. We compared this pay rate to the rate each potentially overpaid Navy civilian was paid to determine which Navy civilians were actually overpaid.
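The core of the matching tests described above can be sketched as follows. This is a minimal illustration under simplifying assumptions; the record layout and field names are hypothetical, not those of DCPS or DCPDS:

```python
from collections import Counter

def flag_payroll_anomalies(payroll, personnel):
    """Flag three of the conditions the computer matches looked for:
    multiple payments to one SSN, payment without an active personnel
    record, and payment above the authorized rate.

    payroll and personnel are lists of dicts keyed by "ssn" with a
    "pay_rate" field; personnel records also carry an "active" flag.
    Returns a list of (ssn, finding) tuples for follow-up.
    """
    # Index the authoritative personnel records by SSN (active only).
    authorized = {p["ssn"]: p for p in personnel if p.get("active")}
    # Count payments per SSN to detect duplicates within the period.
    payment_counts = Counter(rec["ssn"] for rec in payroll)

    findings = []
    for rec in payroll:
        ssn = rec["ssn"]
        if payment_counts[ssn] > 1:
            findings.append((ssn, "multiple payments"))
        person = authorized.get(ssn)
        if person is None:
            findings.append((ssn, "no active personnel record"))
        elif rec["pay_rate"] > person["pay_rate"]:
            findings.append((ssn, "paid above authorized rate"))
    return findings
```

Each flagged record would then be researched against the official personnel file, as the audit did, before any overpayment is confirmed.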
To assess the vulnerability of DFAS’ and the Navy’s civilian payroll internal controls to loss of funds from fraud and abuse, we observed payroll processing, reviewed applicable DCPS documentation (including reports, policies, and regulations), and interviewed cognizant DFAS and Navy Personnel officials. We performed our work at the three active DCPS locations in Charleston, South Carolina; Pensacola, Florida; and Denver, Colorado. We also performed work at two payroll processing locations using other payroll systems at the time of our review—the Uniform Financial Management System in Arlington, Virginia, and the Uniform Automated Data Processing System, in Honolulu, Hawaii. In addition, we performed audit work at the DCPS Central Design Activity in Pensacola, Florida; the Navy Civilian Personnel Data System Center in San Antonio, Texas; and Navy Human Resource Offices in Charleston, South Carolina; Pensacola and Jacksonville, Florida; and San Diego, California. Our work was performed between August 1993 and February 1995 in accordance with generally accepted government auditing standards. We obtained oral comments on a draft of this report from cognizant DFAS and Navy Personnel officials. Their views have been incorporated where appropriate and are further discussed in the agency comments section of this report. Our matching tests of 225,000 payroll and associated personnel records to determine the propriety and accuracy of Navy civilian payroll payments disclosed overpayments to 134 Navy civilians, or less than one-tenth of 1 percent of the accounts tested. This is in contrast to the Army where we found improper payroll payments totaling millions of dollars, including payments to “ghost” soldiers and deserters. As shown in table 1, we confirmed overpayments of $62,500 were made to Navy civilians. However, we also found that total overpayments were actually higher than those we confirmed because some erroneous payments continued for nearly a year. 
For example, one Navy civilian was paid by DCPS at a rate of $52,217 for 1993, instead of the authorized rate of $47,209. Thus, although the overpayment for the pay period we reviewed was about $190, the total amount overpaid on an annual basis was about $5,000. Because DCPS only keeps payroll records on-line for a 6-month period, and because of the significant amount of resources required to research each overpayment on microfiche, we did not determine the full extent of overpayments associated with the 134 cases we identified. However, the total overpayment amounts are undoubtedly far greater than the amounts shown in table 1. These overpayments were caused, at least in part, because (1) DFAS and Navy Personnel did not reconcile discrepancies between personnel and payroll records and (2) DFAS staff did not compare payments from the various payroll databases to detect unauthorized payments to a single civilian employee. Nonetheless, our examination of payroll records showed that many of the overpayments were identified by DFAS within 6 months of their occurrence. When DFAS detected overpayments, it processed retroactive transactions to change the pay records and to initiate DFAS’ recovery or resolution process. We noted such retroactive adjustments, totaling $50,374, for 45 of the 134 overpaid Navy civilians identified in table 1. As noted previously, we did not determine the extent to which the conditions permitting the specific overpayments we identified resulted in overpayments in other pay periods, nor did we ascertain how DFAS learned of the overpayments for which it initiated retroactive transactions. Instead, in November 1994, we met with cognizant DFAS and Navy Personnel officials and provided them with a comprehensive list of all the overpayments we identified. We requested that they jointly follow up to determine the full extent of overpayments and that DFAS recover these amounts. 
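The annualization in the example above follows directly from the difference between the paid and authorized annual rates, spread over the pay periods in a year. A minimal sketch, assuming a biweekly pay cycle (26 periods per year, which is consistent with the figures cited):

```python
# Figures from the report's example; the 26-period biweekly cycle is an assumption.
AUTHORIZED_ANNUAL = 47_209   # authorized annual rate
PAID_ANNUAL = 52_217         # rate actually paid by DCPS
PAY_PERIODS_PER_YEAR = 26    # assumed biweekly pay cycle

annual_overpayment = PAID_ANNUAL - AUTHORIZED_ANNUAL
# 52,217 - 47,209 = 5,008, i.e., "about $5,000" on an annual basis

per_period_overpayment = annual_overpayment / PAY_PERIODS_PER_YEAR
# roughly $193 per pay period, consistent with the "about $190" cited
```

This is why a single-period test understates the exposure: an error that persists across periods compounds into an annual overpayment more than 25 times larger than the per-period amount.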
As of March 1995, DFAS and Navy had not completed their determination of the full extent of overpayments and, as a result, had not yet completed necessary recovery actions. Comparisons between the payroll and personnel systems and reconciliations of discrepancies were not routinely done. Specifically, Navy and DFAS compared Navy civilian payroll and personnel files only four times between May 1992 and August 1994. More importantly, discrepancies identified from these comparisons were not resolved because Navy and DFAS had not established procedures for systematic follow-up and correction of identified discrepancies. Had more frequent payroll and personnel comparisons taken place, and any discrepancies been systematically researched and their resolution documented, DFAS could have promptly detected and corrected the overpayments we identified. For example, as shown in table 1, we found that DFAS paid 84 civilian employees at a higher pay rate than authorized in their personnel records. These overpayments totaled $5,251 for the one pay period we tested. In addition, table 1 shows that DFAS paid about $7,700 to another 14 individuals who did not have active personnel records. DFAS and Navy Personnel acknowledged that they infrequently reconciled payroll and personnel data. DFAS officials told us that payroll and personnel data reconciliations do occur as part of the conversion process of payroll accounts to DCPS. However, these officials acknowledged that not all discrepancies identified during these reconciliations are researched and resolved prior to the conversion of the payroll account to DCPS. As a result, erroneous information, such as incorrect pay rates, may be passed from the closing payroll offices to the cognizant receiving DCPS payroll center. The DCPDS/DCPS Payroll Handbook calls for conducting payroll/personnel reconciliations about every 4 months to ensure the accuracy and completeness of the payroll and personnel records.
However, DFAS and Navy Personnel officials stated that they had not consistently reconciled differences between payroll and personnel records and pointed out that they did not have procedures to systematically resolve and document the disposition of the discrepancies found during their reconciliations of payroll and personnel records. With the continued rapid consolidation of payroll accounts into the DCPS system, which is discussed later in this report, it is critical that payroll and personnel reconciliations be routinely conducted and that all discrepancies be systematically followed up and resolved. In commenting on this report, DFAS officials told us that they believe the recent addition of edit checks to the electronic interface between DOD’s standard personnel system and DCPS decreased the need for data reconciliation. While not detailing the extent of these changes, DFAS officials told us that DCPS was enhanced to automatically reject and return proposed personnel actions affecting pay if they did not pass recently initiated DCPS edit checks and that this enhancement permitted faster identification of erroneous data. We agree that this improvement in the interface between the personnel and payroll systems could help prevent some of the kinds of overpayments we identified. However, it is unlikely that such edit checks would prevent overpayments arising from Navy civilians receiving multiple DCPS payments or Navy civilians being paid by both DCPS and other payroll systems. In addition, not all personnel information flows through the interface. For example, we noted instances where notices of personnel actions were manually entered into DCPS. Moreover, the reconciliations would be useful for determining the effectiveness of these recently added edit checks. Consequently, we believe that there is a continuing need for data reconciliations between the payroll and personnel systems. 
As shown in table 1, our audit disclosed that 25 civilian employees were overpaid at least $27,000 because DFAS erroneously paid them from two separate DCPS payroll databases. We found an additional 11 overpayments totaling nearly $22,300 that were caused by payments being processed independently from both DCPS and another payroll system for the same individual. DCPS did not have internal control procedures to determine if multiple payments were made to a single social security number and to ensure that its four databases did not generate undetected erroneous multiple payments to a single individual. DCPS’ vulnerability to erroneous payments from multiple databases is likely to increase because DFAS plans to expand the number of payroll databases from four at the time of our review to nine by the end of fiscal year 1995. In addition to the need for stronger controls to prevent overpayments, DCPS was also vulnerable to improper payments as a result of weaknesses in controls relied on to regulate access to data, document transaction processing, and perform file maintenance. Specifically, DFAS gave most payroll staff unnecessary access to sensitive DCPS data and did not provide for a complete audit trail documenting who made changes to payroll records. In addition, DFAS did not have controls in place to prevent payroll accounts from former employees remaining on the system from being fraudulently reactivated and paid. These internal control weaknesses could result in improper or fraudulent payments. Such internal controls are particularly critical in light of the scope of the ongoing DCPS consolidation effort. DFAS officials have described this as the most aggressive effort ever undertaken in this area, involving the consolidation of about 700,000 accounts from over 350 payroll offices worldwide. 
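The cross-database check that DCPS lacked amounts to asking whether any single social security number was paid from more than one database in the same period. A minimal sketch of such a control, with illustrative data structures (not DCPS's actual design):

```python
from collections import defaultdict

def cross_database_duplicates(databases):
    """Identify SSNs paid from more than one payroll database in a period.

    databases: mapping of database name -> iterable of SSNs paid that
    period (illustrative structure). Returns a dict mapping each
    duplicated SSN to the set of databases that paid it.
    """
    paid_from = defaultdict(set)
    for db_name, ssns in databases.items():
        for ssn in ssns:
            paid_from[ssn].add(db_name)
    # Keep only SSNs that appear in two or more databases.
    return {ssn: dbs for ssn, dbs in paid_from.items() if len(dbs) > 1}
```

Because such a check scales naturally as databases are added, it would remain effective as DFAS expands from four payroll databases to nine.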
They further stated that this effort involved the consolidation of 19 different automated payroll systems and several manual systems operating overseas—all of which they acknowledged were in various states of disrepair. In addition, strong internal controls, including segregating key duties among responsible personnel, are necessary to provide reasonable assurance that assets, such as payroll funds, are safeguarded against loss. Computer access controls, such as those used by DCPS, are intended to permit authorized users to access the system to perform their assigned duties and preclude unauthorized persons from gaining access. However, we found that DFAS unnecessarily granted supervisory access codes to staff who did not have supervisory responsibilities. Supervisory level access, the highest access level DFAS granted to its payroll processing staff, allows individuals to create employee records; enter employee time and attendance data; and change salary amounts, names, and pay destinations. While such access would not enable DFAS payroll processing staff to directly access personnel data in DCPDS, it would enable DFAS staff to add to or modify personnel data—for example, adding employees or modifying pay rates—after transmission to DCPS. To illustrate, a single payroll staff member with this access level would be capable of creating and paying a fictitious employee or fraudulently diverting payroll funds to another destination. About 86 percent of the supervisory level access codes at the three DCPS payroll processing locations were granted to nonsupervisors. For example, at the Denver DCPS processing location, 138 staff were granted supervisory level access, including 2 temporary employees, while only 13 had supervisory responsibilities. After our inquiry, Denver officials removed about 20 percent of the supervisory level access codes because the individuals either had left the organization or otherwise should not have had access to the payroll system.
However, Denver still had 97 supervisory level access codes granted to nonsupervisors. By granting supervisory level access to payroll processing staff who did not need that level of access, DFAS inappropriately gave the majority of its staff access to both personnel information and time and attendance data. The DCPS Security Guidelines Manual states that the system’s design should provide for a separation of duties between payroll clerks in the payroll office. Specifically, a single payroll clerk should not be capable of both creating or changing employee records and entering time and attendance data for the same group of payroll accounts. Further, GAO’s Internal Control Standards state that key duties and responsibilities in authorizing and processing payroll should be separated among individuals. According to DFAS officials, supervisory level access is necessary to perform a wide variety of tasks associated with maintaining payroll operations while converting Navy civilian payroll accounts from their previous payroll systems to DCPS, including the tasks of entering both new pay accounts and time and attendance data. DFAS officials told us that they accepted the increased risk resulting from granting supervisory level access. However, DFAS did not specifically assess whether—and for how long—nonsupervisory payroll technicians may need supervisory level access during the period of DCPS consolidation. We believe that the increased risk associated with the large scope of the ongoing DCPS conversion process—which DFAS officials informed us is not scheduled for completion until March 1997—necessitates strong access controls. DFAS officials acknowledged that they needed to identify the appropriate number of staff who should have supervisory access at this time. DCPS’ audit trail contains incomplete information for identifying who was responsible for changing certain types of DCPS data.
Lacking such audit trail capability leaves DCPS vulnerable to undetected fraudulent payments. Specifically, DCPS routinely recorded only the identity of the payroll clerk last accessing the payroll account, regardless of whether or not this person made any changes. However, to ensure effective control over changes in personnel data affecting pay, such as name, address, pay destination, and salary amounts, it is critical that DCPS have a complete audit trail identifying the payroll clerk responsible for each change, not merely the payroll clerk last accessing the system. In addition, retroactive transactions to correct or update previous payroll payments did not carry any payroll clerk identification. Audit trails identifying which payroll clerk initiated a change in DCPS data are necessary to document the responsibility for the sequence of events followed in processing a transaction. According to Joint Financial Management Improvement Program requirements, computer systems must provide audit trails to trace transactions from source documents, through successive levels of summarization, to the financial statements and from the financial statements to the source. Guidelines for Security of Computer Applications, Federal Information Processing Standards 73, states that computer system users should be uniquely identified so that they can be held responsible for their activities—it is usually not enough to verify that a user is one of a group of authorized users. It is difficult to detect security breaches unless there is a record of system events which can be analyzed, including information on who accessed the system, what was accessed, and what actions were performed. DCPS is currently incapable of providing a complete audit trail with this level of detail. 
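The kind of audit trail the standards call for can be sketched as an append-only log in which every change, not merely the last access, carries the identity of the clerk who made it. This is a minimal illustration; the class and field names are hypothetical, not DCPS's design:

```python
import datetime

class AuditTrail:
    """Append-only record of changes to payroll data.

    Each change is logged with the responsible clerk's ID, the account and
    field affected, and the old and new values; entries are never
    overwritten, so every transaction can be traced back to a user.
    """

    def __init__(self):
        self._log = []

    def record_change(self, clerk_id, account, field, old_value, new_value):
        """Append one immutable entry for a single change."""
        self._log.append({
            "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "clerk": clerk_id,
            "account": account,
            "field": field,
            "old": old_value,
            "new": new_value,
        })

    def changes_by(self, clerk_id):
        """Return every change a given clerk made, for review or investigation."""
        return [entry for entry in self._log if entry["clerk"] == clerk_id]
```

With such a log, an erroneous pay-rate change could be traced to the specific clerk who entered it, rather than only to the last person who opened the account.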
For example, when DFAS officials in Denver were informed by a civilian that he was overpaid $1,000 in January 1993, DFAS was able to determine that an erroneous change had been made to this employee’s account, but DFAS could not determine which payroll clerk initiated the change in DCPS. The need for a well-documented audit trail is particularly important because, as discussed previously, most personnel have supervisory level access allowing them to access and change all records on a DCPS database. DFAS officials acknowledged the necessity and importance of audit trails. However, they informed us that they have not yet determined a specific course of action on how best to establish a comprehensive audit trail in a cost-efficient manner. The Navy’s civilian payroll was also at risk of fraud and abuse because many payroll accounts of former employees, who should no longer receive pay checks—called inactive payroll accounts—remained on the system. As of December 1993, DCPS had about 40,000 inactive payroll accounts on the system and no controls to prevent these accounts from being reactivated for fraudulent payments. Inactive payroll accounts may be reactivated by anyone with supervisory level access, which as discussed previously is the majority of the payroll clerks, by changing one code in DCPS. Because of the large number of staff with supervisory level access to the payroll system and the incomplete audit trail discussed previously, the risk that these accounts can be fraudulently reactivated is increased. DFAS officials stated that the inactive payroll accounts were maintained on the system until they prepared the W-2 tax information and made all necessary corrections to an employee’s payroll account. Once this process was complete, the inactive payroll files were to be purged from the system in July of the year the W-2s were created. Thus, an inactive payroll record could remain on DCPS for up to 19 months. 
Compounding this vulnerability, in July 1993, the DCPS Central Design Activity did not purge the inactive payroll accounts from DCPS, which DFAS officials said accounted for the high number of inactive payroll records found during our testing at the end of December 1993. We agree that DFAS needs to maintain information on inactive accounts to prepare W-2s and make necessary corrections to payroll accounts. However, given the current unstable control environment associated with the ongoing unprecedented DCPS consolidation, we believe that information on inactive accounts should not remain on the active database. This risk can be significantly reduced if payroll accounts of former employees are removed from the active payroll system and placed in a separate database with appropriately restricted access. Our testing identified insignificant overpayments in relation to the number and dollar amounts of payroll payments made to Navy civilians. However, we did identify internal control vulnerabilities which, if exploited, could permit additional improper civilian payroll payments to occur and not be readily detected. Strengthening DFAS and Navy procedures to restrict access to payroll and personnel data and modifying the DCPS system to provide a reliable audit trail would both help prevent fraudulent payments and detect overpayments when they occur. With the ongoing rapid consolidation of DOD civilian payroll accounts into the DCPS system that is not scheduled to be completed until early 1997, it is critical that top management devote attention and priority to correcting existing control vulnerabilities as soon as possible. In addition, effectively researching and documenting the correction of discrepancies identified through a payroll and personnel record comparison will require the concerted cooperative effort of both cognizant Navy and DFAS officials, who are responsible for personnel and payroll record accuracy, respectively.
We recommend that the Assistant Secretary of the Navy for Manpower and Reserve Affairs and the Director of the Defense Finance and Accounting Service direct appropriate officials to:

- Complete follow-up on the 134 overpaid employees we identified and referred to DFAS and Navy Personnel officials to determine the full extent of overpayment, collect amounts due, and identify and correct systemic causes of the overpayments.
- Conduct payroll/personnel reconciliations every 4 months, as called for by the DCPDS/DCPS Payroll Handbook, and establish a requirement for timely systematic follow-up, including research, correction, and documentation of all discrepancies.

We recommend that the Director of DFAS:

- Establish and implement detailed automated procedures, documented in the Defense Civilian Pay System Users Manual, to detect and correct any unauthorized multiple payments to a single social security number.
- Assess, on a case-by-case basis, the extent to which nonsupervisory payroll technicians need supervisory level access, and, if so, grant such access for as limited a period as possible.
- Require the DCPS Central Design Activity to develop an audit trail in DCPS that marks all transactions with a user identification that cannot be overwritten.
- Remove current inactive payroll records from the active payroll system and place these records in a separate database, with restricted access.
- Establish and implement detailed written procedures to remove all future inactive payroll accounts from the active payroll system, and place these records in a separate database, with restricted access.

DFAS and Navy Personnel officials generally agreed with our recommendations. However, DFAS officials expressed concern that we did not sufficiently recognize the extenuating circumstances brought about by the ongoing rapid consolidation of DCPS processing locations.
We believe that the changes DCPS is undergoing warrant adequate controls to ensure that risks associated with such changes are sufficiently mitigated. Given the increased risk associated with the changing environment in which DCPS currently operates, we continue to believe that the findings and recommendations in our report are appropriate. We are sending copies of this report to the Secretary of the Navy; the Chief Financial Officer of the Department of Defense; the Assistant Secretary of the Navy for Financial Management; the Director of the Office of Management and Budget; and to the Chairmen and Ranking Minority Members of the House and Senate Armed Services Committees, the Senate Committee on Governmental Affairs, the House Committee on Government Reform and Oversight, and the House and Senate Committees on Appropriations. This report was prepared under my direction and I may be reached at (202) 512-9095 if you have any questions concerning this report. Major contributors to this report are listed in appendix I. Diane Handley, Senior Evaluator
|
GAO reviewed the Navy's civilian payroll operations, focusing on the: (1) propriety and accuracy of payments made to civilian personnel; and (2) vulnerability of the internal control system to fraud and abuse. GAO found that: (1) 134 of 225,000 Navy civilians were overpaid a total of at least $62,500 in 1 year; (2) these overpayments were due to the Defense Finance and Accounting Service's (DFAS) failure to determine if individual civilian employees were paid from multiple databases for the same time period and infrequent reconciliations between civilian payroll and personnel systems; (3) the Navy's civilian payroll operations are susceptible to additional improper payments as a result of many personnel having unrestricted access to payroll data, DFAS's inability to identify who makes database changes, and DFAS's maintenance of inactive payroll accounts on the active payroll database; and (4) the rapid consolidation of civilian payroll accounts into the Defense Civilian Payroll System could exacerbate control weaknesses if vulnerabilities are not adequately addressed.
|
According to IRS data, from tax years 2002 to 2011, the number of large partnerships more than tripled, from 2,832 to 10,099. Over the same period, total assets of large partnerships more than tripled, to $7.49 trillion. However, these numbers suffer from the double-counting complexities illustrated in figure 1. For comparison, our interim report on large partnerships, which defined large partnerships as those with 100 or more direct partners and $100 million or more in assets, found that over the same time period the number of large partnerships more than tripled, from 720 in tax year 2002 to 2,226 in tax year 2011. Similarly, total assets tripled, to $2.3 trillion in tax year 2011. Without an accepted definition of a large partnership, there is not necessarily a right or wrong answer as to whether direct and indirect partners should be included. Counting only direct partners does not capture the entire size and complexity of large partnership structures. Accounting for indirect partners does, but it also raises the issue of double counting discussed above. Given the size and complexity of large partnerships, IRS does not know the extent of double counting among this population. Large partnerships, especially those in higher asset brackets, are primarily involved in the finance and insurance sector. For example, in 2011, 73 percent of large partnerships reported being involved in the finance and insurance sector, and the majority of large partnerships that reported $1 billion or more in assets were in this sector. IRS data also showed that almost 50 percent of large partnerships with 100,000 or more direct and indirect partners reported being in the finance and insurance sector. According to IRS officials and data, many of these entities are investment funds, such as hedge funds and private equity funds, which are pools of assets shared by investors that are counted legally as partners of the large partnership.
Being investment vehicles, these funds tend to invest in other partnerships, as well as other types of business entities. One IRS official said that these investments can affect the partner size of other partnerships based on where the funds choose to invest (e.g., buying an interest in other partnerships). For example, if an investment fund with a million partners chose to invest in multiple small operating partnerships, such as oil and gas companies organized as partnerships, all of those partnerships would count as having more than a million partners as well. One IRS official said the number of partnerships with more than a million partners increased from 17 in tax year 2011 to 1,809 in tax year 2012. The official attributed most of the increase to a small number of investment funds that expanded their interests in other partnerships. If in the future those investment funds choose to divest their interests in other partnerships, the number of large partnerships would decrease significantly. Although the reasons for the changes are not clear, from tax years 2008 to 2010, the number of large partnerships with 500,000 or more direct and indirect partners rose from 70 in 2008 to 1,088 in 2009, then fell back to 70 in 2010. IRS data on large partnerships also show their complexity, as measured by the number of partners and extent of tiering, or levels, below the large partnership. Almost two-thirds of large partnerships in 2011 had more than 1,000 direct and indirect partners, although hundreds of large partnerships had more than 100,000. See figure 2 for more detail. In 2011, about two-thirds of large partnerships had at least 100 or more pass-through entities in the partnership structure. Because almost all large partnerships tend to be part of multitiered networks, their partners could be spread across various tiers below those partners that have a direct interest in the partnership. For example, in 2011, 78 percent of the large partnerships had six or more tiers.
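The double-counting issue described above can be made concrete with a small sketch. Partnership structures form a tiered graph in which a partner may itself be a partnership; counting direct and indirect partners means traversing the tiers, and summing those counts across partnerships double-counts any investor reachable through more than one of them. The structure below is invented for illustration, not IRS data or methodology.

```python
# Illustrative sketch of counting direct and indirect partners in a tiered
# structure, and of the double counting that results when one investment
# fund holds interests in several partnerships. Entity names are invented.
def all_partners(partnership, structure):
    """Return the set of direct and indirect partners of `partnership`.

    `structure` maps each partnership to its list of direct partners;
    entities absent from the map are ultimate (non-partnership) partners.
    """
    found = set()
    stack = [partnership]
    while stack:
        entity = stack.pop()
        for partner in structure.get(entity, []):
            if partner not in found:
                found.add(partner)
                if partner in structure:   # the partner is itself a partnership
                    stack.append(partner)
    return found

# A fund with 1,000 investors buys into two small operating partnerships:
structure = {
    "Fund LP": [f"investor{i}" for i in range(1_000)],
    "OilCo LP": ["Fund LP", "operator1"],
    "GasCo LP": ["Fund LP", "operator2"],
}
```

Each operating partnership now counts 1,002 direct and indirect partners, yet the two sets overlap almost entirely: summing the per-partnership counts (2,004) nearly doubles the 1,003 unique entities involved, which is the double counting the report describes.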
Determining the relationships and how income and losses are allocated within a large partnership structure through multiple pass-through entities and tiers is complicated. For example, in figure 3, the allocation from the audited partnership on the far left side of the figure crosses eight pass- through entities along the bold path before it reaches one of its ultimate owners on the right. This path also may not be the only path from the audited partnership to the ultimate owner. While figure 3 appears complex, it has only 50 partners and 10 tiers. Large partnership structures could be much more complex. In 2011, as noted above, 17 had more than a million partners. According to one IRS official, there are several large partnerships with more than 50 tiers. IRS audits few large, complex partnerships. According to IRS data, in fiscal year 2012, IRS closed 84 field audits of the 10,143 large partnership returns filed in calendar year 2011—or a 0.8 percent audit rate. This is the same audit rate we found for fiscal year 2012 in our interim report, which defined large partnerships as having 100 or more direct partners and $100 million or more in assets. The audit rate for large partnerships remains well below that of C corporations with $100 million or more in assets, which was 27.1 percent in fiscal year 2012. See table 1. Table 1 also shows that most large partnership field audits closed from fiscal years 2007 through 2013 did not find tax noncompliance. In 2013, for example, 64.2 percent of the large partnership audits resulted in no change to the reported income or losses. In comparison, IRS audits of C corporations with $100 million or more in assets had much lower no change rates. For example, audits of large corporations had a no change rate of 21.4 percent in 2013. 
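The 0.8 percent figure above follows directly from the counts reported; the arithmetic, reproduced as a sketch:

```python
# Reproducing the audit-rate arithmetic quoted in the text.
audits_closed = 84          # large partnership field audits closed, FY 2012
returns_filed = 10_143      # large partnership returns filed, CY 2011

audit_rate = audits_closed / returns_filed
print(f"large partnership audit rate: {audit_rate:.1%}")   # prints 0.8%
```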
When the field audits of large partnership returns did result in changes, the changes to net income that the audits recommended were minimal in comparison to audits of large corporations, as shown in table 2. This could be because positive changes on some audits were cancelled out by negative changes on other audits. In 3 of the 7 years, the total adjustments from the field audits were negative. That is, they favored the large partnerships being audited. This did not occur for audits of large corporations. See table 2. In terms of audit costs, the number of days and hours spent on the audits of large partnerships in fiscal year 2013 has increased since fiscal year 2007, but varied from year to year in the interim, as shown in table 3. In contrast, the audit days and hours spent on audits of large corporations are decreasing, even as those audits obtain results noticeably better than those of large partnership audits. IRS does not track its audit results for large partnerships and therefore does not know what is causing the results in tables 1, 2, and 3. Consequently, it is not clear whether the results are due to IRS selecting large partnerships that were tax compliant or to IRS not being able to find noncompliance that did exist. The high no change rates and minimal adjustment amounts for IRS audits of large partnerships may be due to a number of challenges that can cause IRS to spend audit time on administrative tasks, or on waiting for action by a large partnership or an IRS stakeholder, rather than on actual audit work. Under the Tax Equity and Fiscal Responsibility Act of 1982 (TEFRA), the period for auditing partnerships does not expire before 3 years after the original due date of the return or the date of return filing, whichever is later. IRS on average takes approximately 18 months after a large partnership return is received until the audit is started, leaving on average another 18 months to conduct an audit, as illustrated in figure 4.
Once a large partnership audit has been initiated, it falls under the TEFRA audit procedures. Congress enacted the TEFRA audit procedures in response to concerns about IRS's ability to audit partnership returns. According to the congressional Joint Committee on Taxation (JCT), the complexity and fragmentation of partnership audits prior to TEFRA, especially for large partnerships with partners in many audit jurisdictions, resulted in the statute of limitations expiring for some partners while other partners were required to pay additional taxes as a result of the audits. TEFRA addressed these issues by altering the statute of limitations and requiring each partner of a partnership to report certain items, like income, consistently with how the partnership reports them. However, according to IRS officials and to IRS auditors in focus groups we held, using the TEFRA procedures to audit large, complex partnership structures presents a number of administrative complexities for IRS. These complexities may reduce the time IRS spends on actual audit work, adversely affecting IRS audit results for large partnerships. For example, one of the primary challenges in doing large partnership audits under TEFRA that IRS focus group participants reported was identifying the Tax Matters Partner (TMP). The TMP is the partnership representative who is to work with IRS to facilitate a partnership audit. The responsibilities of the TMP include (1) supplying IRS with information about each partner, (2) keeping the partners of the partnership informed and getting their input on the audit, and (3) executing a statute of limitations extension, if needed. Without being able to identify a qualified TMP in a timely manner, IRS may experience delays during large partnership audits. IRS focus group participants cited numerous examples of difficulties in identifying the TMP. One difficulty is that the TMP can be an entity, not a person.
If an entity is designated as the TMP, IRS has to track down an actual person to act as a representative for the TMP. Focus group participants said that some large partnerships do not designate a TMP, or designate an entity as TMP, in order to delay the start of the audit, which limits the audit time remaining under the statute of limitations. Entities will often be elusive about designating the TMP, using this tactic as a first line of defense against an audit. The burden for ensuring that the TMP meets the requirements of TEFRA largely falls on IRS. Time spent identifying a qualified TMP, according to IRS focus group participants, could take weeks or months. As shown in figure 4, IRS has a window of about 1.5 years to complete large partnership audits. A reduction of a few months from that window means that the time IRS has for actual audit work is markedly reduced. Another challenge TEFRA poses is determining the extent to which IRS passes through audit adjustments to the taxable partners in a large partnership structure. Because large partnerships are nontaxable entities, TEFRA requires that audit adjustments be passed through to the taxable partners, unless the partnership agrees to pay the related tax at the partnership level. To pass through the audit adjustments to the taxable partners, IRS has to first link, or connect, the partners' returns to the partnership return being audited. However, IRS officials said linking a large number of partners' returns can be a significant drain on IRS's resources. If a large partnership has hundreds or thousands of partners at multiple tiers, the additional tax owed by each partner as a result of a large partnership audit may not be substantial enough to be worth passing through once those partners' returns are linked.
If the audit adjustment is lower than a certain level, IRS will not pass it through to the taxable partners, and the time and resources spent linking the partners' returns and preparing a plan to pass through the audit adjustment to certain taxable partners' returns become effectively meaningless. Aside from the TEFRA challenges, another challenge involves the complexities arising from large partnership structures, which hinder IRS's ability to identify noncompliance with complex tax laws. For example, IRS officials reported having difficulty in identifying the business purpose for the large partnerships or in determining the source of income or losses within their structures (i.e., knowing which entity in a tiered structure is generating the income or losses). Without this information, it is difficult for IRS to determine if a tax shelter exists, if an abusive tax transaction is being used, and if income and losses are being properly characterized. As one focus group participant put it: "I think noncompliance of large partnerships is high because a lot of what we have seen in terms of complexity and tiers of partnership structures… I don't see what the driver is to create large partnership structures other than for tax purposes to make it difficult to identify income sources and tax shelters." To help IRS auditors better understand the complexity of the TEFRA audit procedures and the large partnership structures, various IRS stakeholders and specialists are to provide support during the audit. However, IRS focus group participants stated that they do not have the needed level of timely support. These stakeholders include TEFRA coordinators to help with the TEFRA audit procedures, IRS counsel to help navigate the TEFRA audit procedures and provide input on substantive tax issues, and specialists who have expertise in a variety of areas.
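The pass-through mechanics described above—linking returns down the tiers and dropping amounts too small to pursue—can be sketched as a recursive allocation. Everything here is hypothetical: the ownership shares, the de minimis threshold, and the function names are invented, and TEFRA does not prescribe these mechanics in this form.

```python
# Hypothetical sketch: allocate an audit adjustment down a tiered structure
# to the taxable partners, dropping per-partner amounts below a de minimis
# threshold (mirroring the report's point that small per-partner amounts
# may not be worth the cost of linking and passing through).
def pass_through(entity, adjustment, tiers, threshold):
    """Return {taxable partner: allocated adjustment} for amounts >= threshold.

    `tiers` maps each pass-through entity to {partner: ownership share};
    entities absent from the map are taxable (non-pass-through) partners.
    """
    if entity not in tiers:
        return {entity: adjustment} if adjustment >= threshold else {}
    allocations = {}
    for partner, share in tiers[entity].items():
        for owner, amount in pass_through(
                partner, adjustment * share, tiers, threshold).items():
            allocations[owner] = allocations.get(owner, 0) + amount
    return allocations

# A $1 million adjustment at the audited partnership, two tiers deep:
tiers = {
    "Audited LP": {"Upper LP": 0.5, "Alice": 0.5},
    "Upper LP":   {"Bob": 0.75, "Carol": 0.25},
}
result = pass_through("Audited LP", 1_000_000, tiers, threshold=200_000)
# Alice receives 500,000 and Bob 375,000; Carol's 125,000 share falls
# below the threshold and is dropped.
```

Even in this toy example, every partner's return must be examined to compute its share, which is the linking burden IRS officials describe.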
The support provided by IRS stakeholders is important because many IRS focus group participants said that their knowledge of partnership tax law was limited and that they may work on a partnership audit only once every few years. The challenges identified by IRS are not new but may have worsened over time as the number and size of large partnerships have grown. For example, in 1990, the Department of the Treasury (Treasury) and IRS reported that applying TEFRA to large partnership audits resulted in an inefficient use of limited IRS resources. They cited a number of reasons for the inefficient use of resources, such as having to collect and review information on a large number of partners and the difficulty of passing through audit adjustments to those partners. IRS by itself cannot fully address the tax law and resource challenges in auditing large partnership returns. For example, IRS cannot make the structures or laws less complex and cannot change the TEFRA audit procedures in statute. In addition, IRS has recently experienced budget reductions, constraining the resources potentially available for large partnership audits. Despite these limitations, IRS has initiated efforts that may help address the challenges of auditing large partnership returns. First, IRS can sometimes use a closing agreement to resolve an audit under the TEFRA audit procedures, if both IRS and the partnership agree to its terms. This agreement allows the tax owed from the net audit adjustment, at the highest marginal tax rate, to be collected at the partnership level, meaning IRS does not have to pass through the audit adjustments to the taxable partners. IRS does not track the number of closing agreements, but IRS officials said that IRS enters into relatively few. IRS officials are encouraging audit teams to pursue closing agreements for large partnership audits.
However, closing agreements come with challenges because the partnership must be willing to agree and the IRS review process can be extensive. Aside from closing agreements, the IRS efforts focus on steps IRS takes at the beginning of an audit—such as understanding the complexity of large partnerships and selecting returns for audits. However, IRS has not yet determined the effectiveness of these efforts. The Chairman of the House of Representatives Committee on Ways and Means and the Administration have also put forth proposals to address some of the challenges associated with the TEFRA audit procedures. While the proposals differ somewhat and apply to partnerships with different numbers of partners, both would allow IRS to collect tax at the partnership level instead of having to pass audit adjustments through to the taxable partners. In our ongoing work on large partnerships, we are assessing options for improving the large partnership audit process and, if warranted, will offer reforms for Congress to consider and recommendations to IRS. Chairman Levin, Ranking Member McCain, and Members of the Subcommittee, this completes my prepared statement. I would be pleased to respond to any questions that you may have at this time. We provided a draft of this testimony to IRS for comment. IRS provided technical comments, which were incorporated as appropriate. If you or your staff have any questions about this testimony, please contact me at (202) 512-9110 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff who made key contributions to this testimony included Tom Short, Assistant Director, Vida Awumey, Sara Daleski, Deirdre Duffy, Robert Robinson, Cynthia Saunders, Erik Shive, Albert Sim, A.J. Stephens, and Jason Vassilicos. This is a work of the U.S. government and is not subject to copyright protection in the United States.
The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
|
Businesses organized as partnerships have increased in number in recent years while the number of C corporations (i.e., those subject to the corporate income tax) has decreased. The partnership population includes large partnerships (those GAO defined as having $100 million or more in assets and 100 or more direct and indirect partners). Their structure varies. Some large partnerships have direct partners that are themselves partnerships and may bring many of their own partners into the structure. By tiering partnerships in this manner, very complex structures can be created with hundreds of thousands of direct and indirect partners. Tiered large partnerships are challenging for the Internal Revenue Service (IRS) to audit because of the difficulty of tracing income from its source through the tiers to the ultimate partners. GAO was asked to study the challenges large partnerships pose for IRS. GAO describes the number of large partnerships and their assets, IRS's large partnership audit results and the challenges IRS faces in auditing these entities, and options for addressing these challenges. GAO analyzed IRS data on partnerships, reviewed IRS documentation, interviewed IRS officials, met with IRS auditors in six focus groups, and interviewed private sector lawyers knowledgeable about partnerships. Internal Revenue Service (IRS) data show, from tax years 2002 to 2011, the number of large partnerships more than tripled. According to IRS officials, many large partnerships are hedge funds or other investment funds where the investors are legally considered partners. Many others are large because they are tiered and include investment funds as indirect partners somewhere in a tiered structure. According to IRS data, there were more than 10,000 large partnerships in 2011. A majority had more than 1,000 direct and indirect partners although hundreds had more than 100,000. A majority also had six or more tiers.
IRS audits few large partnerships—0.8 percent in fiscal year 2012 compared to 27.1 percent for large corporations. Of the audits that were done, about two-thirds resulted in no change to the partnership's reported net income. The remaining one-third resulted in an average audit adjustment to net income of $1.9 million. These minimal audit results may be due to challenges hindering IRS's ability to effectively audit large partnerships. Challenges included administrative tasks required by the Tax Equity and Fiscal Responsibility Act of 1982 (TEFRA) and the complexity of large partnership structures due to tiering and the large number of partners. For example, IRS auditors said that it can sometimes take months to identify the person who represents the partnership in the audit, as required by TEFRA, reducing the time available to conduct the audit. Complex large partnerships also make it difficult to pass through audit adjustments across tiers to the taxable partners. IRS cannot resolve some of the challenges because they are rooted in tax law, such as those required by TEFRA. Congress and the Administration have proposed statutory changes to the audit procedures for partnerships, such as requiring partnerships to pay taxes on net audit adjustments rather than passing them through to the taxable partners. In addition, IRS has implemented some changes to its large partnership audit process, such as understanding the complexity of large partnerships and selecting returns for audits. GAO makes no recommendations but will issue a report later in 2014 assessing IRS's large partnership audit challenges. IRS provided technical comments, which were incorporated.
|
The purpose of the Corps' civil works project study process is to inform federal decision makers whether a water resources project warrants further federal investment. The study process is conducted in two phases: reconnaissance and feasibility. In the reconnaissance phase, the Corps conducts an initial evaluation of potential solutions to a water resources problem. If the Corps determines that a project potentially warrants federal investment, it proceeds to a more detailed feasibility study. The feasibility phase generally begins with the signing of a feasibility cost-share agreement between the Corps and the local project sponsor. Feasibility studies are generally prepared by the Corps' 38 district offices, with review and oversight provided by the cognizant Corps division office and by headquarters. During the feasibility phase, the Corps formulates and evaluates alternative plans for achieving the project's objectives and reviews the proposed project to assess whether the benefits of constructing it outweigh its costs. At the beginning of this phase, a feasibility scoping meeting is held to bring the Corps, the local sponsor, and other agencies together to reach agreement on the problems and solutions to be investigated during the feasibility study and the scope of the analysis required. The next step includes an alternative formulation briefing to identify and resolve any legal or policy concerns, to obtain headquarters approval of the tentatively selected plan, and to release the draft report to the public. Finally, the draft feasibility report—which presents the study results and findings, including those developed in the reconnaissance phase—is released to the public. At the conclusion of the feasibility phase, the Corps selects a recommended plan for proceeding with the project. The feasibility report also includes analysis and documentation to meet the requirements of the National Environmental Policy Act (NEPA).
Under NEPA, federal agencies are to assess the effects of major federal actions, such as Corps construction projects, that significantly affect the environment and prepare a detailed statement on the environmental impacts of those actions. NEPA has two principal purposes: (1) to ensure that an agency carefully considers detailed information concerning significant environmental impacts and (2) to ensure that this information will be made available to the public. NEPA requires an agency to prepare a detailed statement on the environmental impacts of any "major federal action" significantly affecting the environment. NEPA implementing regulations generally require an agency to prepare either an environmental assessment or an environmental impact statement (EIS). NEPA implementing regulations also specify requirements and procedures—such as providing the public with an opportunity to comment on the draft EIS for at least 45 days. Corps project studies have historically been subject to various levels of internal and external review under a number of authorities as well as the Chief of Engineers' responsibility to ensure the quality of Corps studies. For example, in 1902, Congress created the Board of Engineers for Rivers and Harbors, which was the result of efforts to address inconsistent treatment of proposed Corps projects. The board was made up of Corps staff. Until 1992, when Congress terminated the board, it reviewed thousands of Corps studies for civil works projects and made unfavorable recommendations on more than half. At the time the board was abolished, there was concern that too much duplicative review was occurring between the board and other internal Corps review processes. Subsequently, in the Flood Control Act of 1944, Congress established a mechanism for external review of Corps projects by giving the head of the Department of the Interior and the governors of affected states an opportunity to comment on proposed Corps projects before authorization.
Furthermore, starting in 1970, under NEPA, environmental impact statements for Corps projects were required to be sent to the heads of other federal agencies and governors of affected states for comment. In the Flood Control Act of 1970, Congress created the position of Assistant Secretary of the Army for Civil Works, who coordinates the review of Corps studies with the Office of Management and Budget (OMB) before they are submitted to Congress. Recent congressional interest in establishing an independent external peer review process for Corps project studies began in the late 1990s, following a series of damaging reports and events, including allegations that the Corps had manipulated information to justify projects. Investigations conducted by NAS and the Army's Inspector General identified various problems with the Corps internal review process, including the manipulation of economic analysis and potential institutional bias toward large construction projects. Around this time, WRDA 2000 required the Corps to contract with NAS to study and make recommendations concerning the use of peer review for feasibility reports, including recommending potential criteria to determine how to apply peer review. In 2002, NAS released its study concluding that the Corps' more complex water resources project planning studies should be subject to external, independent review. The study also found that not all Corps project studies necessarily require such review, recommending instead that external peer review be reserved for studies that are expensive, will affect a large area, are highly controversial, or involve high levels of risk. The study estimated that about five Corps projects per year would likely be subject to this level of review. According to the NAS study, criteria for selecting the appropriate level of review should balance the risks and consequences of inadequate review against the resources required for more complex and stringent levels of review.
In addition, the study identified several criteria that should be considered in determining the appropriate level of review for Corps studies, primarily that as project magnitude and risks increase, an increasing degree of independence and scope of review are warranted (see fig. 1). In addition to recommendations related to the appropriate level of review for project studies, the 2002 NAS study made several other recommendations about the Corps' peer review process. It recommended that peer review results be presented to the Chief of Engineers before a final decision on a project study is made, that the Chief of Engineers respond in writing to each key point of the peer review report, and that peer review be initiated early enough in the Corps' study process so that review results can be meaningfully incorporated into project design. After NAS published its 2002 study, OMB in December 2004 issued its Final Information Quality Bulletin for Peer Review, citing the Information Quality Act as well as its general authorities to oversee the quality of agency information, analyses, and regulatory actions. This OMB bulletin established governmentwide guidance on enhancing peer review practices and covers what information is subject to peer review, the selection of appropriate peer reviewers, opportunities for public participation, and related issues. The Corps' Engineering Circular 1105-2-408 (EC 408) was issued in May 2005 and established procedures for ensuring the credibility and quality of Corps documents by supplementing the Corps' previous review process, including adding external peer review in special cases where risk and magnitude warrant this level of review. The Corps faced further criticism after the failure of Corps levees and floodwalls in New Orleans in the wake of Hurricane Katrina in August 2005.
In 2006, the Corps announced “Twelve Actions for Change,” which included a set of actions intended to transform the Corps’ priorities, processes, and planning and apply lessons learned from Hurricanes Katrina and Rita. Among these actions was to employ independent review for projects with significant consequences, especially the potential for loss of life if the project were to fail. In November 2007, Congress passed WRDA 2007 and included section 2034, which establishes a 7-year trial period for peer reviews of certain studies of civil works projects; this trial generally applies to project studies initiated by the Corps from November 2005 through November 2014. The Corps was to provide an initial report to Congress on its implementation of the peer review trial under section 2034 by November 2010 and is to provide a final report by November 2013. In February 2011, the Corps submitted its initial report to Congress summarizing its experiences implementing the peer review process in response to the requirement in section 2034 of WRDA 2007. Section 2034 defines a project study as a feasibility or reevaluation study—including the EIS for that study—or any other study associated with a modification of a water resources project that includes an EIS. Under section 2034, project studies that meet at least one of the following criteria are required to undergo peer review:
- The project has an estimated total cost of more than $45 million.
- The governor of an affected state requests an independent peer review.
- The Chief of Engineers determines that the project study is controversial (i.e., significant public dispute exists as to the project’s size, nature, or effects or its economic or environmental costs or benefits).
In addition, if the head of a federal or state agency charged with reviewing a project study determines the project is likely to have a significant adverse impact on environmental, cultural, or other resources, he or she may request that the Corps consider a peer review by an independent panel of experts. WRDA 2007 also provides some instances where exceptions may be made to peer review. For example, the Corps may exclude from peer review certain projects that have a total estimated cost of more than $45 million but do not include an EIS, have not been determined by the Corps to be controversial, and fall below specified thresholds of adverse impacts. The Corps may also exclude other project studies meeting certain exclusion criteria. For example, the Corps may exclude studies that involve only the rehabilitation or replacement of existing hydropower turbines, lock structures, or flood control gates within the same footprint and for the same purpose as an existing water resources project; an activity for which the Corps and industry have ample experience, so the activity may be considered routine; and minimal life safety risk. Section 2034 also has requirements for the Corps concerning the peer review panel and its independence, as well as for timing the peer review and publishing peer review reports, as described in more detail below. Under section 2034, the Corps is required to contract with NAS, a similar independent scientific and technical advisory organization, or an “eligible organization” to establish a panel of experts that will review a project study. Section 2034 defines an eligible organization as one that has the following five characteristics: it is a 501(c)(3) tax-exempt organization; it is independent; it is free from conflicts of interest; it does not carry out or advocate for or against federal water resources projects; and it has experience in establishing and administering peer review panels.
Section 2034 states that when establishing peer review panels, contractors must apply NAS’s policy for selecting committee members to ensure that they also have no conflicts of interest. The NAS Policy on Committee Composition and Balance and Conflicts of Interest outlines several criteria for selecting peer review panel members, including the following:
- All panel members must be highly qualified in terms of knowledge, training, and experience.
- The knowledge, experience, and perspectives of the panel members must be thoughtfully and carefully assessed and balanced in terms of the subtleties and complexities of the particular scientific, technical, and other issues to be addressed.
- Potential sources of bias must be assessed to determine that the panel’s report will not be compromised by issues of bias or lack of objectivity.
- Panel members must not have financial interests that could significantly impair their objectivity or create an unfair competitive advantage for any person or organization.
- Panel members must not obtain and use, or intend to use, confidential information not reasonably available to the public for their own direct and substantial economic benefit.
- Panel members must not serve as a member on a peer review panel that is to review the panel member’s own work.
- Panel members must not have become committed to a fixed position related to the review for which they have a significant directly related interest or duty.
- Persons currently employed by the agency sponsoring the study cannot be panel members, except in extremely limited special circumstances.
Additionally, section 2034 requires that both the experts selected for the peer review panels and the organizations managing the peer review selections be independent. Section 2034 does not define the term independent, but both the 2002 NAS peer review study and OMB’s Final Information Quality Bulletin for Peer Review regard independent to mean external to the Corps.
Specifically, the NAS study states that a fully independent review can be accomplished only by reviewers who are free of conflicts of interest and are appointed by a group external to the Corps. Similarly, the OMB bulletin states that independent reviewers are generally not employed by the agency or office producing the document. Section 2034 requires that the peer review be conducted during the period beginning with the signing of the feasibility cost-share agreement between the Corps and the local sponsor and ending 60 days after the last day of the public comment period for the draft project study. Additionally, section 2034 lists three points during the feasibility study process at each of which the Chief of Engineers must consider whether to initiate peer review:
- when the without-project conditions—current and forecasted conditions if the project were not constructed—are identified,
- when the array of alternatives to be considered is identified, and
- when the preferred alternative is identified.
Figure 2 shows the key steps in the feasibility study process, including those specified in section 2034. The Corps can conduct peer review at any time during the steps shown highlighted in gray in the figure, but according to Corps officials it generally conducts peer review after the draft feasibility report has been completed. The Washington-level review shown as the final step in the figure concludes with a signed Chief’s report for project studies that will be submitted to Congress for authorization. Section 2034 also requires the Corps to prepare and make publicly available a written response to all completed peer review reports before it finalizes project studies. The Corps must provide Congress with a copy of both the completed peer review report and the Corps’ written response when the signed Chief’s report or other final decision document for the project study is transmitted to Congress.
The Corps’ Engineering Circular 1165-2-209 (EC 209) was issued in January 2010 and establishes its civil works review policy, which outlines the processes for implementing product review requirements for Corps civil works projects. EC 209 was developed to incorporate the specific requirements for independent peer review contained in section 2034 and OMB’s 2004 peer review guidance, as well as other Corps policy considerations. EC 209 requires that districts, in coordination with the relevant Corps planning center of expertise, prepare review plans for project studies. These review plans are to describe the appropriate levels of potential review that the specific project study will be subject to, such as the district’s quality control procedures, agency technical review, peer review, and policy and legal review. If a project study review plan indicates that a peer review will not be conducted, then the district is required to develop a risk-based recommendation for why the peer review is not required. This recommendation should document, among other things, that the project study is of such limited scope or impact that it would not benefit from a peer review. Since enactment of WRDA 2007, 49 Corps civil works project studies have undergone peer review as of January 2012, but it is unclear how many of these reviews were performed in response to the requirements in section 2034. This is because the Corps does not make specific determinations or track whether a peer review is being conducted in response to the requirements of section 2034. Of the 49 project studies that underwent peer review, the majority were for ecosystem restoration projects, flood risk management projects, or deep draft navigation projects. (App. II lists the 49 project studies that underwent peer review since WRDA 2007 was passed, including information on project and study type, as well as the district, division, and planning center of expertise associated with each study.)
Moreover, it is not possible to determine how many project studies were required to undergo peer review in response to WRDA’s section 2034 requirements because the Corps does not centrally track project studies and could not provide us a list of all project studies that fell within the scope of section 2034. Our review of relevant Corps documents for the 49 project studies that underwent peer review, such as review plans and completed review reports, found that none of these documents specifies the authority under which peer reviews were conducted. Corps headquarters officials told us that the Corps does not make specific determinations as to whether a peer review is being conducted under section 2034 but instead focuses on ensuring that the peer review is being carried out in compliance with EC 209, which, in their view, complies with section 2034. These officials also told us that, to ensure the quality of Corps project studies, the agency may choose to conduct peer reviews under its other authorities, even if those peer reviews are not required by section 2034. In February 2011, the Corps submitted its initial report to Congress in response to the requirement in section 2034 of WRDA 2007 summarizing its experiences implementing the peer review process. In the report, the Corps noted that the 29 peer reviews that had been completed as of February 2011 followed the procedures described in agency and OMB guidance. The Corps report stated that, in its view, section 2034 provisions reinforce and add further definition to the Corps’ process. Nevertheless, because the Corps did not distinguish which studies had been selected for peer review in accordance with section 2034, we believe that it did not provide Congress with the type of information required by section 2034 that would help congressional decision makers evaluate the trial program. 
The 49 peer reviews conducted by the Corps since November 2007 resulted in direct costs of about $9 million in contract costs and contract administration fees. In addition, Corps staff resources were also used to manage peer reviews, but these costs are not fully quantifiable. Furthermore, the addition of peer review to the Corps study process has resulted in indirect costs by altering project study schedules because of the additional time required to complete the peer review. In some cases where a peer review was not planned for during the early stages of the study process, significant delays in the project studies have resulted from the addition of the peer review. The 49 project studies for which the Corps completed peer reviews since November 2007 cost about $9 million in contract costs and contract administration fees to establish and manage the expert panels for these reviews. In addition, Corps staff resources were also used to manage the peer reviews, but these costs are not fully quantifiable. The Corps used the services of three contractors to manage the peer review process: the nonprofit Battelle Memorial Institute, which managed 46 of the reviews; the nonprofit Noblis, which managed two; and NAS, which managed one. The cost per panel varied considerably. For example, the contracts managed by Battelle cost from about $76,000 to $484,000 for studies that underwent peer review, but the panel managed by NAS cost over $500,000. (See app. II for information on the contract costs for each of the 49 peer reviews.) In addition to the $9 million in contractor costs, the Corps incurred about $109,000 for the administration of the contracts for the 49 peer reviews. Specifically, the Corps used two different entities—the Institute for Water Resources and the Army Research Office—to administer these contracts, and both of these entities charged administration fees. These fees ranged from no fee to 3 percent of the contract cost.
Corps staff resources were also used to manage the peer review process—including developing the scope of work for reviews, coordinating establishment of contracts, reviewing contract proposals, and responding to panel comments that the Corps received during a peer review process. Corps district, division, headquarters, and planning-center-of-expertise staff also spent time managing the peer review process. The total of these costs, however, is not fully quantifiable across all Corps districts because not all districts track district staff time spent on peer reviews. For those districts that did track or could estimate district staff time spent on peer review-related activities, we found the following examples of the staff resources that may have been dedicated to managing and responding to peer review activities:
- The Green Bay Dredged Material Management Plan peer review cost about $101,000 in staff time, according to data provided by district officials. But district officials involved in this peer review said that the cost in terms of staff time may have been higher than typical because this peer review was the first conducted for a study in that district.
- The Chatfield Water Reallocation Study peer review cost about $20,000 in staff time as of December 2011, but the Corps response has not been completed for this peer review, and additional staff time could be involved.
- For the Boston Harbor study, district officials estimated that costs totaled about $77,000 for district staff time, agency technical review team labor, and contractor fees for assisting the district with responding to peer review panel comments.
- For the American River study, district officials estimated that costs came to about $40,000 for district staff time.
Similar to district staff time, other staff time involved in managing the peer review process, including headquarters, division, and some planning-center-of-expertise staff time, is also not always tracked and therefore not fully quantifiable; some of these positions are funded with general funds rather than project-specific funds, according to Corps officials we spoke with. We did find two examples, however, where planning-center-of-expertise staff time devoted to peer review-related activities was tracked. In these two instances, the cost of planning-center-of-expertise staff time devoted to peer review activities amounted to about $12,000 for the peer review of the Boston Harbor study and about $32,000 for the American River study. The addition of peer review to the Corps study process has also had an indirect cost because it has affected project study schedules. Planning-center-of-expertise and district officials estimate that obtaining the contract and executing the peer review generally take about a year. The breakdown for the peer review process, according to some of these officials, is about 3 months to initiate the contract; 3 to 6 months for the review to be completed; and an additional 3 months to close out the review, which involves responding to and receiving clarification on panel comments. Some of these processes occur concurrently with other aspects of the project study, but some parts of the peer review process, such as responding to panel comments, may add time to the study schedule. According to Corps officials, the addition of peer review to the project schedule adds steps to the review process and takes time away from other projects. In addition, according to several Corps officials, some project studies have been delayed because the district did not allocate funding for the peer review and therefore had to wait until additional funding was available. In some cases, this delay added significant time to the schedule.
In contrast, according to some Corps division and planning-center-of-expertise officials, when the project manager had built in time for the peer review and had identified funding for it early, the peer review process had much less of an impact on the overall project study schedule. Local sponsors are also concerned about the impact that this additional time is adding to project studies, according to Corps officials and local sponsors we spoke with. Their concern arises largely because local sponsors share the cost of the Corps study and depend on its timely completion. District officials told us that because of the cost-sharing requirements and the current economic environment, local sponsors are pressing the Corps harder to finish studies quickly and keep costs down. Two local sponsors told us that delays negatively affect local sponsors because they can lose business if a project is not completed in a timely manner. Similarly, sponsors are accountable to their own local governments or state legislatures, and additional delays or time required for peer review can create challenges in getting continued support for a project. For example, in the case of the Green Bay Dredged Material Management Plan, Corps officials told us that peer review increased the cost of the project and caused a 5- to 6-month delay at a time when the local sponsor was attempting to acquire grant money contingent on completion of the dredged material management plan. The Corps’ process for determining whether a project study is subject to peer review is more expansive than section 2034 requirements because it uses broader criteria; this has resulted in peer reviews of studies that are outside the scope of section 2034. In addition, the process the Corps uses does not include the flexibility provided in section 2034 to exclude certain project studies from peer review.
Moreover, some studies are undergoing peer review that do not warrant it, according to some Corps officials we spoke with. The Corps’ process for determining whether a project study is subject to peer review uses criteria that are broader than the requirements of section 2034. As table 1 shows, the Corps relies on its guidance outlined in EC 209 when selecting project studies for peer review, and this guidance extends beyond section 2034 requirements. Consequently, the Corps has selected some studies for review based in part on criteria included in EC 209 that are not required by section 2034, and others that are outside the scope of section 2034. For example, according to our analysis of review plans for 44 peer-reviewed project studies, over one-third identified criteria that related to both section 2034 and other authorities. In addition, the Corps’ process for determining whether a peer review is required has resulted in 30 project studies undergoing peer review that were outside the time parameters identified in section 2034. Based on our analysis of the characteristics of these studies, we found that the Corps’ process was applied to all studies and reports regardless of when they were initiated, whereas section 2034 applies to project studies initiated from November 2005 through November 2014. Specifically, section 2034 applies to (1) project studies initiated from November 2005 through November 2007 and for which the array of alternatives had not been identified, and (2) project studies initiated from November 2007 through November 2014. As a result, over half (30 of 49) of the peer reviews conducted since the enactment of WRDA 2007 were for project studies that did not fall under the scope of section 2034 because the studies were initiated before November 2005.
Another reason the Corps’ process for selecting studies for peer review is more expansive than the scope of section 2034 is that Corps’ guidance does not clearly define “project study.” The guidance refers to a wide range of project studies, decision documents, and work products that may be subject to peer review, whereas section 2034 defines a project study subject to peer review as a feasibility or reevaluation study, including the EIS, or any other study associated with the modification of a water resources project that includes an EIS. According to our analysis of the 49 studies that underwent peer review, some of these studies did not fit this definition. Specifically, 34 of the 49 studies that underwent peer review were feasibility or reevaluation studies, which are project studies as defined by section 2034; 8 were other kinds of reports that included an EIS and therefore may have been subject to section 2034 requirements; and 7 were neither feasibility nor reevaluation studies and did not include an EIS and therefore did not fit the definition of a project study subject to peer review under section 2034. For more details on each of the studies that underwent peer review, see appendix II. The Corps’ process for determining whether peer review is required for project studies does not include the flexibility provided in section 2034 to exclude certain project studies from otherwise mandatory peer review. EC 209 states that most studies should undergo peer review, and the Corps’ process requires that for any decision document to forgo a peer review, an exclusion must be requested and approved by headquarters. In addition, guidance provided to Corps staff on how to implement EC 209 discourages requests for exclusions, noting that time should not be wasted shopping around for exclusion requests.
Furthermore, agency guidance and Corps headquarters officials, including the Director of Civil Works, highlight the value and importance of peer review in achieving the agency’s mission, noting that an extra set of eyes is beneficial. In addition, Corps headquarters officials told us, and agency guidance highlights language from the WRDA 2007 conference report, that “[s]ection 2034 permits the Chief of Engineers to exclude a very limited number of project studies from independent peer review.” We believe, however, that the Corps has misconstrued this statement and overstated its significance. This statement is part of the explanation of the exclusion paragraph (a)(5) and does not apply to the provision as a whole; therefore, this statement pertains to how many studies for which peer review is mandatory would be eligible for exclusion. Moreover, another relevant statement in a House committee report on WRDA 2007 suggests that 26 studies over 7 years, or about 4 studies per year, would be expected to be subject to peer review. Additionally, the 2002 NAS study—which is prominently mentioned throughout the subsequent legislative history of WRDA 2007—states that not all Corps water resources project planning studies will require external, independent review, but the Corps should institute external review for studies that are expensive, that will affect a large area, that are highly controversial, or that involve high levels of risk. In reviewing the exclusion requests that it receives, Corps headquarters determines whether the studies meet any of the mandatory requirements in EC 209 for undergoing peer review. Specifically, the Corps reviews whether the project has a cost estimate of greater than $45 million, represents a threat to health and safety, is controversial, and has had a request for peer review from a governor or the head of a federal or state agency. If studies do not meet any of these criteria, the Corps generally approves the study for exclusion from peer review.
From our review of 50 studies that had requested exclusion from peer review between 2009 and 2011, we found that the Corps had granted exclusions for 37 studies; all but one were excluded because they did not meet any of the criteria in EC 209 for studies that must undergo peer review, and the remaining study was excluded because it did not fit the definition of a project study in section 2034. Under section 2034, however, the Corps also has the flexibility to exclude studies from peer review that exceed the $45 million threshold if they (1) do not have an EIS; (2) are not controversial; (3) are expected to have negligible adverse impacts on scarce or unique cultural, historic, or tribal resources; (4) have no substantial impacts on fish and wildlife species or their habitats; and (5) have no more than negligible impacts on threatened or endangered species or their critical habitat. Similarly, under section 2034, the Corps may exclude studies from peer review that involve (1) only the rehabilitation or replacement of existing hydropower turbines, lock structures, or flood control gates within the same footprint and for the same purpose as an existing water resources project; (2) an activity for which there is ample experience within the Corps and industry to treat the activity as being routine; and (3) minimal risk to human life and safety. Nevertheless, according to our analysis of exclusion request documents and headquarters’ responses to these requests, as of November 2011, the Corps had not granted an exclusion based on any of the flexibilities included in section 2034. Several Corps officials expressed concerns about the Corps exclusion process. Specifically, some officials told us that they were concerned about the cost and time involved and said that the exclusion of projects that do not meet any of the mandatory criteria should be delegated to the division offices. In their opinion, such delegation would help streamline the process.
Moreover, some of the studies that underwent or are currently undergoing peer review did not warrant it, according to some Corps officials we spoke with. Specifically, we found the following examples of studies that may not have warranted a peer review:
- Two dredged material management plans underwent peer review. For example, the Green Bay Dredged Material Management Plan underwent peer review but is not a project study as defined by section 2034. Officials we spoke with said that such plans should not generally require peer review because any significant impacts would be addressed under NEPA and because the Corps has sufficient expertise in the area of dredging.
- The Chacon Creek study in southern Texas underwent peer review but should not have, according to some Corps officials we spoke with. This study was for a project that would remove houses from a floodplain, but officials said it should not require peer review because there are no structural components and it did not exceed the $45 million threshold. Corps headquarters denied the request for exclusion and stated that flood studies warrant peer review because of the nature of the hazard and the need to assess the extent and treatment of risk. Headquarters officials highlighted the importance of assessing and addressing such risks in light of Hurricane Katrina and said that flood studies such as Chacon Creek require peer review because of the importance of assessing and decreasing risks associated with flooding.
- The Yuba River General Reevaluation study is undergoing peer review but does not warrant it, according to some Corps officials. District officials told us that the study does not warrant peer review because the construction work involved has already been completed, and the purpose of the study is to determine the amount the local sponsor should be reimbursed by the Corps.
Officials from several districts, divisions, and planning centers of expertise we spoke with told us that peer review should be focused on larger and more complex or controversial projects and should not be the default approach. Two Corps officials described the Corps’ peer review policy as a one-size-fits-all approach, and one of these officials stated that it is inflexible and risk averse. The Corps has a process to review general information on contractors’ conflicts of interest and independence during its contractor selection process, but it does not have a process for reviewing project-specific information provided by contractors to determine if conflicts of interest and independence exist at the project level. The Corps’ contractors, however, have a process for reviewing the appropriate information related to the conflicts of interest and independence of the experts selected for peer review panels at the project level. For its initial peer reviews, the Corps relied on Battelle to establish and manage the peer review panels. From 2007 to 2009, Battelle managed 15 independent peer reviews for the Corps. The Corps had identified Battelle as a potential contractor for managing its peer review panels as early as August 2007, when WRDA 2007 was being considered. To ensure that Battelle could meet the section 2034 independence requirements, according to Corps officials, the Director of Civil Works and the Chief of Planning and Policy held discussions with Battelle, and officials from the Corps’ Institute for Water Resources met with Battelle to discuss Battelle’s existing review process and the independent peer review requirements of WRDA 2007. Battelle informed the Corps that it met all WRDA 2007 requirements for an eligible organization, and Battelle identified its existing contract with the Army Research Office as a vehicle for employing Battelle to establish and manage peer review panels under section 2034. 
During that time, NAS also conducted one independent peer review for the Corps on the Louisiana Coastal Protection and Restoration Program, which charged the Corps with developing a full range of flood control, coastal restoration, and hurricane protection measures for South Louisiana. According to Corps planning-center-of-expertise officials, because of the extensive scope and breadth of the project, NAS was chosen instead of Battelle to conduct that peer review. But Corps headquarters and planning-center-of-expertise officials told us that, over the course of NAS’ review, they realized that NAS would not be the appropriate organization for reviewing individual project studies because its process was too time-consuming and expensive. A member of that NAS peer review panel also told us that while he would recommend NAS review for larger projects, in his opinion NAS might not be the appropriate organization for reviewing smaller Corps projects. In 2009, the Corps sought additional contractors to establish and manage peer review panels and began its contractor selection process by putting out a request for proposals. This solicitation included as contract requirements the section 2034 criteria that the organizations establishing and managing peer review panels be independent and free from conflicts of interest. The Corps received six proposals, including one from Battelle, and each of these proposals was then evaluated by a three-person review panel. The panel chairperson told us that the section 2034 criteria that eligible organizations be independent and free from conflicts of interest were considered as minimum qualifications for screening and selection. As a result of this process, the Corps awarded a contract to Battelle—this contract was in addition to the existing contract Battelle already had with the Corps through the Army Research Office—and one to Noblis.
The Corps determines which of the two contractors it will use to manage individual peer reviews on the basis of the contractors' responses to specific project study scope of work requests, described below. Although the Corps' contractor screening and selection process identifies general contractor independence and areas of conflicts of interest, the Corps does not have a process for reviewing the selected contractors' project-specific independence and freedom from conflicts of interest. For each project study undergoing a peer review, the Corps sends both contractors a "scope of work" document, which describes the project study and lists the required contractor qualifications. These qualifications include independence and freedom from conflicts of interest related to the specific project study being reviewed. In response, the contractors send the Corps their proposals for conducting the peer review, which generally include statements that they are independent and free from project-specific conflicts. Nevertheless, we identified a number of weaknesses in the Corps' approach for reviewing and corroborating this information, including the following: The Corps' planning centers of expertise are expected to review the contractors' overall proposals, but the Corps does not require the centers to ensure that contractors' statements of independence within the proposals are reviewed and corroborated for each individual project. Although planning-center-of-expertise officials told us that they review the overall proposals, some of these officials also stated that they did not believe that the statements required review because Corps headquarters had already prescreened the contractors during the initial contractor screening and selection process.
Furthermore, the Corps has not provided any guidance to the planning centers of expertise or other Corps offices that specifies how those officials should review the contractors' project-specific statements at the proposal stage and ensure that they are accurate and that the contractors are in fact independent and free from conflicts of interest. Absent such guidance, the Corps cannot ensure that its contractors are independent and free from conflicts of interest at the project level. The Corps neither conducts internal conflicts-of-interest checks nor asks contractors for documentation about potential conflicts of interest that would allow it to determine whether a conflict exists; rather, the Corps allows the contractors to make that determination on their own. As a result, if the contractors do not provide this information to the Corps, the agency has no other process for obtaining it. Unlike the Corps' review of contractor independence, the Corps' contractors do solicit and review information on panel members' independence and conflicts of interest at the project level. The contractors gather information about prospective panel members using screening questions developed from the scope of work for each peer review. These questions cover issues described in the NAS policy on committee composition and conflicts of interest, such as financial and employment interests and public statements and positions. The peer review reports from both Battelle and Noblis state that they follow both the OMB guidance on peer review and the NAS policy when selecting panel members. According to contractors and Corps officials, district and planning-center-of-expertise officials review the contractors' screening questions, as well as the resumes of selected experts, and can provide the contractors with additional information about potential conflicts of interest, such as previous work a particular expert may have done for the Corps.
Corps officials told us that the contractors follow up on such information where appropriate, but the contractors and a Corps official we spoke to said that it is the contractors who ultimately select the panel members and ensure their independence. The Corps has adopted and incorporated most of the peer review recommendations it has received. Adoption of these recommendations has resulted in some technical improvements to project study reports but generally has not changed the Corps’ decisions in selecting preferred project designs. According to some Corps officials we spoke with, this is the result of the review occurring too late in the process to effect a change in decision making. Of the 49 project studies that have undergone peer review, the Corps has provided a final written response for 17. The Corps has adopted 231 of 274 recommendations, partially adopted 31, and rejected 12 for these 17 peer reviews. Several Corps district officials told us that they make every effort not to reject peer review recommendations and that headquarters has directed them to adopt recommendations whenever possible. In fact, some district officials told us that they felt pressure from headquarters to adopt peer review recommendations even when the recommendations would not affect the study outcome and would be burdensome to implement. The Corps’ adoption of peer review recommendations has improved the technical quality of its project study reports, according to Corps officials and panel members we spoke with. Corps officials complimented the quality and technical competence of panel members and stated that the panels’ recommendations have been helpful in clarifying and strengthening the arguments presented in the studies. Most of the recommendations either requested that the Corps add to or clarify the study report or stated that the study report did not sufficiently address certain issues. 
The Corps addressed these issues in almost all instances (193 of 201 recommendations) within its written responses to completed peer review reports. A smaller number of recommendations addressed the underlying assumptions and inputs to the project studies' economic, engineering, and environmental analyses. The Corps revised portions of its analyses on the basis of these kinds of recommendations. In none of these cases did the Corps indicate that the revised analyses would change the study decisions. In one case, according to Corps documents, the revised analysis served to strengthen arguments in favor of its recommended plan. In response to a recommendation concerning an environmental analysis from the peer review of the Mid-Chesapeake Bay Islands Ecosystem Restoration project study, the Corps conducted additional analyses to justify its calculations of environmental benefits. The Corps reported that the additional analyses led to the determination that the selected plan was appropriate but that by considering the ecosystem impacts of the project in a more detailed fashion, justification of the recommended plan was strengthened. Specifically, 34 of the 910 total peer review recommendations indicated a problem with the economic analysis, 24 indicated a problem with the engineering analysis, and 19 indicated a problem with the environmental analysis. Nevertheless, despite these technical improvements, some Corps officials have questioned the benefit of peer review, given the significant amount of time that district staff have to spend managing the process and responding to recommendations. The process for responding to recommendations begins with district officials drafting a written response, which they provide to the panel. The Corps' response to the peer review recommendations includes a detailed description of the steps that the Corps has taken or will take to incorporate the recommendations into the project study.
The contractor then convenes a teleconference at which district officials discuss the draft response with panel members. After this discussion, the panel members provide written feedback—"backcheck responses"—to the Corps stating whether they agree with the district's response. The district then finalizes its response to the recommendations and forwards the response to its division office. After its review, the division forwards the response to headquarters, where the response is finalized. The final written response is generally published at the same time as the final decision document for the project study. The time between completion of the peer review report and public availability of the Corps' written response therefore varies greatly depending on the individual project. In one case it was 3 months, while in other cases peer review reports have been completed for more than 3 years without a final response from the Corps having been made public. Corps officials we spoke with told us that peer review recommendations have generally had no impact on the Corps' decision-making process. These Corps officials were not aware of any project studies for which the study outcome changed as a result of peer review. Corps headquarters officials told us that one reason peer review has had little impact on decision making is that the Corps' internal review process is identifying the same issues as peer review. Another reason cited by Corps officials is that peer review occurs at the end of the study process. Peer review generally occurs concurrently with the public comment period for the draft study report, which comes after the preferred design has been selected. As a result, some recommendations about alternatives may not have been implemented because the decision on the preferred design had already been made.
Selecting a different preferred design at that stage would require the Corps to revisit an already completed selection analysis and decision. For example, in the peer review report on the Cedar River-Cedar Rapids, Iowa, Flood Risk Management project study, the review panel recommended that the Corps further investigate one of the non-selected design alternatives, because panel members felt that the alternative might achieve project objectives better than the preferred design. The Corps, however, had already selected its design and decided to proceed. The Corps did not adopt this recommendation, stating that it believed its analysis of alternatives was sound and that there was no reasonable expectation that a more detailed analysis of the alternative would result in finding that it had greater net economic benefits than the preferred design. In contrast, when the Corps has conducted a peer review earlier in the process, opportunities have arisen for positive impacts on a study decision. For example, the American River Common Features project study peer review was conducted early in the study process. According to Corps division and planning-center-of-expertise officials, they conducted peer review early to obtain external input on defining the problem and to inform decision making, given the complexity of the project. As a result, the peer review began before the alternative formulation briefing, when the without-project conditions were being identified. By employing this approach, the Corps received feedback from the review panel before selecting the preferred design. The panel's recommendations included three suggested changes to the Corps' analyses and model calibrations, which the Corps had time to incorporate before conducting the alternative analysis and selection.
According to the contractor that managed the peer review, the panel members involved in the American River Common Features peer review also found the timing of the review to be beneficial and suggested that the Corps conduct peer review earlier for other project studies. The timing of peer review was also addressed in the 2002 NAS study on peer review. NAS recommended that the Corps initiate peer reviews early enough in the study process so that the review results could be meaningfully incorporated into the study or project design and stated that conducting peer review before selecting a recommended plan is essential if the Corps is to benefit from the review. Corps officials nevertheless told us that they have generally chosen to conduct peer reviews later in the process to minimize effects on project study schedules. Corps headquarters officials noted that, for many studies, peer review occurred late in the process because the studies were under way at the time the Corps began requiring peer review. These officials also noted that it would be challenging to assemble a peer review panel to conduct a review early in the study process and retain the same panel to complete this review at the end of the study. Furthermore, Corps headquarters officials noted that a further challenge is implementing the requirements of WRDA section 2033 along with those of section 2034. Section 2033 generally requires the Corps to complete feasibility studies within 2 years. According to Corps officials, there is tension between these requirements, and it may be challenging to include peer review throughout the study process without altering project study schedules. Section 2034 established a 7-year trial period to examine the cost and impact of conducting peer review for controversial and costly projects. After the trial period, based on information provided by the Corps, Congress could reconsider whether to retain or revise section 2034 or allow it to lapse.
Because the Corps generally does not specify the authority under which peer review was conducted, however, it has not provided Congress with the information needed to evaluate the merits of the section 2034 requirements. In addition, the Corps' implementation of peer review has not focused on the larger, more complex, and controversial projects that were contemplated when section 2034 was enacted and as recommended by NAS a decade ago. As a result, project studies that may not warrant peer review are being selected to undergo it, which may needlessly increase project costs and lengthen project schedules. Further, essential to the integrity of the peer review process is the assurance that the Corps has effective processes not only to ensure overall contractor independence and freedom from conflicts of interest but also to ensure project-level independence and freedom from conflicts of interest. The Corps' current process, however, has a number of weaknesses with respect to ensuring that no conflicts of interest exist at the project level. Finally, with peer review generally occurring late in the Corps' project study process, peer review serves more to strengthen the Corps' presentation of its decisions than to influence its decision making. This effect runs counter to NAS's 2002 recommendation that, to realize the benefits of peer review, the results must be used as inputs to the decision-making process. By choosing to apply peer review late in the project study process, the Corps has effectively chosen not to use the results of peer review to enhance its decision-making process and ensure selection of the most effective project alternatives. We recommend that the Secretary of Defense direct the Chief of Engineers and Commanding General of the U.S.
Army Corps of Engineers to take the following three actions:

To facilitate congressional evaluation of the 7-year trial period outlined in section 2034, the Corps should identify for each past and future peer review the specific statutory authority under which the peer review was conducted and the criteria triggering peer review under the Corps' civil works review policy.

To better reflect section 2034, provide more effective stewardship of public resources, and ensure efficient and effective operations, the Corps should revise the criteria in its process for conducting peer review to focus on larger, more complex, and controversial projects; to encourage peer review to occur earlier in the study process; and to include exclusions to peer review that align with section 2034. The Corps should also develop a documented process to ensure that contractors are independent and free from conflicts of interest on a project-specific basis.

We provided a draft of this report to the Department of Defense for review and comment. In its written comments, reprinted in appendix III, the department generally concurred with our recommendations. Specifically, in response to our first recommendation, the department agreed that the Corps should, and stated that it will, identify for each past and future peer review the specific statutory authority under which the peer review was conducted and the criteria triggering peer review under the Corps' civil works review policy. In response to our second recommendation, the department partially concurred, stating that it agreed that peer review should be focused on studies that will significantly benefit from peer review and that initiating reviews early is advantageous.
Nevertheless, the department noted that early involvement must be balanced with having sufficient data and analysis available for review and also highlighted work under way at the agency to overhaul its planning processes, which includes efforts to better align product reviews for greater effectiveness. In response to our third recommendation, the department agreed that the Corps should develop a documented process to ensure that contractors are independent and free from conflicts of interest on a project-specific basis. Although the department generally concurred with our recommendations, it disagreed with our report’s finding that the Corps’ process does not use the flexibility provided in section 2034, and it disagreed that some studies have undergone review that did not warrant it. The department stated that the Corps has carefully deliberated in support of the agency decision to conduct peer review on the three studies noted in our report and also stated that the Corps stands by all of its decisions to date to grant or deny exclusions from peer review. Nevertheless, the department stated that as part of the Corps’ ongoing review of the civil works review policy, it will assess the effectiveness of its criteria and how the criteria are applied to determine which studies should be considered for exclusion. In addition, the department expressed concern about the level of weight given in the report to anecdotal remarks from field-level officials, who in the department’s opinion may not have had the benefit of the corporate vision supporting the Army Civil Works Program. We disagree with the department’s characterization of our methodology. As clearly described in the scope and methodology section of this report, we interviewed officials who had a corporate-level perspective, as well as those who had a project-level perspective. 
Specifically, to obtain a corporate-level view, we interviewed senior-level officials from Corps headquarters, the Institute for Water Resources, and the planning centers of expertise involved in managing the peer reviews. In addition, to get a project-level perspective and to assess the impact of peer review on division and district offices, we interviewed officials in all eight of the Corps' divisions and in 10 geographically dispersed Corps districts that had conducted studies that underwent peer review. We also interviewed the three contractors and selected peer review panel members and local sponsors of Corps civil works projects. We believe that the report provides a balanced perspective from both the headquarters and field levels. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the appropriate congressional committees, the Secretary of Defense, the Chief of Engineers and Commanding General of the U.S. Army Corps of Engineers, and other interested parties. This report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. Our objectives for this work were to examine (1) the number of Corps project studies that have undergone independent peer review in response to section 2034 of the Water Resources Development Act (WRDA) of 2007, (2) the cost of these peer reviews, (3) the extent to which the U.S.
Army Corps of Engineers’ (Corps) process for determining if a project study is subject to peer review is consistent with section 2034, (4) the process the Corps uses to ensure that the contractors it hires and the experts the contractors select to review project studies are independent and free from conflicts of interest, and (5) the extent to which recommendations from peer reviews have been incorporated into project studies. We focused on peer reviews for which reports had been completed since WRDA 2007 was enacted. To address all of these objectives, we reviewed relevant legal requirements, policy guidance, review plans, and peer review reports for project studies that were subject to a peer review and for which a peer review report had been completed since WRDA 2007 was enacted. In addition, we selected a nongeneralizable sample of six peer reviews to examine in greater depth to better understand the costs associated with conducting these reviews, as well as the overall impact of the process on the timeline of the project study and the study outcome. We chose these reviews as illustrative examples and selected one from each of the Corps’ planning centers of expertise and at least one for each of the three contractors the Corps has used to manage peer reviews since enactment of WRDA 2007. Although the information derived from analysis of these case studies cannot be generalized, these examples provide valuable insights into the peer review process. We conducted semistructured interviews with officials from Corps headquarters, the planning centers of expertise involved in managing the peer reviews, all of the Corps’ eight divisions, and from 10 geographically dispersed Corps districts that had conducted studies that underwent peer review. We also conducted semistructured interviews with the three contractors, as well as selected peer review panel members and local sponsors of Corps civil works projects. 
To determine the number of studies that have undergone peer review in response to section 2034 of WRDA 2007, we reviewed all completed peer review reports, as well as Corps reports and information on completed peer reviews. We reviewed information on completed peer reviews obtained from headquarters, the planning centers of expertise, divisions, and selected districts, as well as from the contractors that established the peer review panels and the entities the Corps used to administer these contracts: the Institute for Water Resources and the Army Research Office. To determine the cost of these reviews, we reviewed contract award documents and information on contract costs from the contractors. Generally, we relied on the contract award amounts reported in the contracts to determine the cost of the contracts awarded for establishing review panels. For four contract awards, the contract work included establishing a peer review panel as well as additional work. For these awards, we therefore relied on information provided by the contractor on the portion of the contract cost that was for the peer review. For the contract award for peer review of a local sponsor-led study, we relied on information from the local sponsor and the contractor on the cost of the award. In addition, for the six case study peer reviews, we analyzed information on costs associated with managing the review process, including cost data and estimates provided by districts with regard to district and other staff time involved in peer review. In cases where we reported cost data including staff time associated with completing peer review, we asked knowledgeable officials about the data system and the quality of the data and determined that they were sufficiently reliable for our purposes.
In cases where we reported estimates of these costs, we asked officials about how these estimates were developed and determined that they were sufficiently reliable for our purposes. To determine the extent to which the Corps’ process for determining if a study is subject to peer review is consistent with section 2034, we analyzed the legal requirements and relevant policy guidance for determining when to conduct peer review. We also reviewed documentation on decisions to conduct peer review included in review plans and documents requesting exclusion from peer review. In addition, we reviewed information on the characteristics of studies that underwent peer review, including date initiated, whether an environmental impact statement was included, and the type of study. We identified this information in review plans, study drafts, signed Chief’s reports, and other Corps study-related documents; Corps officials from relevant districts or divisions reviewed this information. To determine the process the Corps uses to ensure that the contractors it hires and the experts the contractors select are independent and free from conflicts of interest, we reviewed information on contractor selection obtained from Corps headquarters and the Institute for Water Resources. We also reviewed documentation from the contractors that outlined contractor and reviewer qualifications, as well as the National Academy of Sciences Policy on Committee Composition and Balance and Conflicts of Interest. To determine the extent to which peer review recommendations are incorporated into project studies, we reviewed information obtained from headquarters, the planning centers of expertise, divisions, and selected districts on how the Corps responds to peer review recommendations. 
We also reviewed all peer review recommendations contained in completed peer review reports, as well as all responses to peer review recommendations contained in the Corps' published responses to the completed peer review reports. We conducted this performance audit from April 2011 to March 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Forty-six of the 49 completed peer reviews listed in table 2 below were conducted by Battelle Memorial Institute; Noblis completed the Green Bay Dredged Material Management Plan and the Wood River Levee System General Reevaluation Report peer reviews, and the National Academy of Sciences conducted the Louisiana Coastal Protection and Restoration peer review. As table 2 shows, the studies that underwent peer review fell into the areas of ecosystem restoration (19 of 49), flood risk management (15 of 49), deep draft navigation (7 of 49), coastal storm damage reduction (5 of 49), inland navigation (2 of 49), and water management and reallocation (1 of 49). According to our analysis of Corps documents, 32 of the 49 studies included an environmental impact statement (EIS), 19 of 49 were initiated after November 2005, and 42 of 49 had an estimated total project cost greater than $45 million. In addition to the individual listed above, Vondalee R. Hunt, Assistant Director; Darnita Akers; Elizabeth Beardsley; and Janice Ceperich made significant contributions to this report. Ellen Wo Chu, Cindy Gilbert, Richard P. Johnson, Ben Shouse, and Kiki Theodoropoulos also made key contributions.
Section 2034 of the Water Resources Development Act of 2007 requires that certain U.S. Army Corps of Engineers (Corps) civil works project studies undergo independent external peer review to assess the adequacy and acceptability of the methods, models, and analyses used. In the act, Congress established a 7-year trial period for this requirement and also required the Corps to submit two reports on its experiences with the peer review process. GAO was asked to examine (1) the number of Corps project studies that have undergone independent peer review in response to section 2034, (2) the cost of these peer reviews, (3) the extent to which the Corps' process for determining if a project study is subject to peer review is consistent with section 2034, (4) the process the Corps uses to ensure that the contractors it hires and the experts the contractors select to review project studies are independent and free from conflicts of interest, and (5) the extent to which peer review recommendations have been incorporated into project studies. GAO reviewed relevant laws, agency guidance, and documents and interviewed Corps officials and contractors. Since enactment of the Water Resources Development Act of 2007, 49 project studies have undergone peer review, but it is unclear how many were performed in response to section 2034 requirements because the Army Corps of Engineers (Corps) does not make specific determinations or track whether a peer review is being conducted under section 2034. In February 2011, in response to section 2034, the Corps submitted its initial report to Congress summarizing its implementation of the peer review process. In its report, however, the Corps did not distinguish which studies had been selected for peer review in accordance with section 2034 and therefore did not provide Congress with information that would help decision makers evaluate the requirements of section 2034 at the end of the trial period.
The 49 peer reviews resulted in both direct and indirect costs. Specifically, these peer reviews resulted in direct costs of over $9 million in contract costs and fees. In addition, Corps staff resources were used to manage the reviews, although these costs are not fully quantifiable. Furthermore, the addition of peer review to the Corps' study process has resulted in indirect costs by altering project study schedules to allow for the time needed to complete peer reviews. In some cases where a peer review was not planned during the early stages of the study process, significant delays to project studies occurred while funds were sought to pay for the peer review. In contrast, according to some Corps officials, when project managers have built in time and identified funding for peer reviews early, the process has had less of an impact on project study schedules. The Corps' process for determining whether a project study is subject to peer review is more expansive than section 2034 requirements because it uses broader criteria, resulting in peer reviews of studies outside the scope of section 2034. In addition, the process the Corps uses does not include the flexibility provided in section 2034, which allows for the exclusion of certain project studies from peer review. Moreover, some studies are undergoing peer reviews that are not warranted, according to some Corps officials GAO spoke with. The Corps has a process to review general information on contractors' conflicts of interest and independence when selecting them to establish peer review panels, but it does not have a process for reviewing project-level information on conflicts of interest and independence. As a result, it cannot be assured that contractors do not have conflicts at the project level. In contrast, the Corps' contractors do have a process for reviewing information related to conflicts of interest and the independence of experts selected for each peer review panel.
The Corps has adopted and incorporated into its project study reports most of the peer review recommendations it has received. Doing so has resulted in some technical improvements to study reports but generally has not changed the Corps' decisions about project alternatives, in part because the peer review process occurs too late in the project study process to affect decision making, according to some Corps officials GAO spoke with. As a result, some recommendations about alternatives may not have been implemented because the decision on the preferred design had already been made. GAO recommends that the Department of Defense direct the Corps to, among other actions, better track peer review studies, revise the criteria for determining which studies undergo peer review and the timing of these reviews, and improve its process for ensuring contractor independence. The department generally concurred with these recommendations.
Numerous international bodies with different missions and members have played a role in implementing the G20 financial regulatory reform agenda. In general, many of these bodies operate on a consensus basis and have no legally binding authority. Thus, financial reform agreements reached by these bodies must be adopted voluntarily by their member jurisdictions, such as through legislative or regulatory changes (or both), to take effect. Figure 1 depicts some of the international bodies involved in the G20 financial regulatory reforms; these bodies form part of an international network often referred to as the “international financial architecture.” Within the international financial architecture, the G20 is a forum for international cooperation on global economic and financial issues. Its members include 19 countries and the European Union. The G20’s objectives are to coordinate policy among its members to achieve global economic stability and sustainable growth; promote financial regulations that reduce risks and prevent future financial crises; and modernize the international financial architecture. The G20 was established in 1999 as a forum for finance ministers and central bank governors in the aftermath of the financial crisis of 1997-1998. The G20 was elevated to the political leader level in 2008, when its member countries’ heads of state or government first met to respond to the global economic and financial crisis. G20 member jurisdictions account for approximately 90 percent of world gross domestic product and 80 percent of world trade, and are home to two-thirds of the world’s population. Since 2008, the G20 leaders have met at least annually. The presidency of the G20 rotates annually among its members, and the host government supplies the staff for the secretariat that runs the agenda and hosts meetings that year.
In addition to the G20 leaders meetings, the G20 finance ministers and central bankers, and the Sherpas, who are representatives of the leaders, meet on a regular basis. As the G20 operates on the basis of consensus, its commitments reflect the agreement of its members, including the United States. The G20 established FSB in 2009 as the successor to the Financial Stability Forum to coordinate at the international level the work of national financial authorities and international standard-setting bodies in order to develop and promote the implementation of effective regulatory, supervisory, and other financial sector policies. FSB member institutions include finance ministries, financial regulatory authorities, and central banks of the G20 members, as well as those of Hong Kong Special Administrative Region (SAR), the Netherlands, Singapore, Spain, and Switzerland. FSB members also include international bodies—such as IMF, the Organisation for Economic Co-operation and Development, and the World Bank—and international standard-setting bodies, such as the Basel Committee on Banking Supervision (Basel Committee), International Accounting Standards Board (IASB), International Organization of Securities Commissions (IOSCO), and International Association of Insurance Supervisors (IAIS). According to FSB, it seeks to support the multilateral agenda for strengthening financial systems and the stability of international financial markets. FSB’s mandate includes (1) assessing vulnerabilities affecting the global financial system and identifying and overseeing actions needed to address them; (2) promoting coordination and information exchange among authorities responsible for financial stability; (3) undertaking reviews of the policy development work of the international standard- setting bodies to ensure their work is timely, coordinated, focused on priorities, and addressing gaps; and (4) collaborating with IMF to conduct early warning exercises. 
In addition, FSB has instituted a framework for monitoring implementation of the G20 financial reforms and reports periodically to the G20 about standards development and implementation progress. Organizationally, a 70-person Plenary, composed of one to three representatives from each represented jurisdiction or organization, is the sole decision-making body of FSB. The Plenary approves reports, principles, standards, recommendations, and guidance developed by FSB, and approves work programs and the FSB budget. The FSB Plenary is led by the FSB Chairman. A Steering Committee coordinates work in between Plenary meetings. FSB’s work is carried out through the activities of standing committees, including the Standing Committee on Assessment of Vulnerabilities, the Standing Committee on Standards Implementation, and the Standing Committee on Supervisory and Regulatory Cooperation. The FSB Plenary is reviewing the structure of representation in FSB; the review is to be completed by the next leaders’ summit in November 2014. As outlined in the FSB charter, FSB Plenary seat assignments are meant to reflect the size of the national economy, financial market activity, and national financial stability arrangements in a member jurisdiction. The staff of FSB members carry out the majority of FSB’s work. The FSB Secretariat has approximately 30 staff members who also support FSB’s work. IMF is an organization of 188 member jurisdictions. Founded in 1944, IMF’s primary purpose is to safeguard the stability of the international monetary system—the system of exchange rates and international payments that enables countries (and their citizens) to buy goods and services from one another.
IMF’s main activities include (1) providing advice to members on adopting policies that can help them prevent or resolve a financial crisis, achieve macroeconomic stability, accelerate economic growth, and alleviate poverty; (2) making financing temporarily available to members to help them address balance of payments problems; and (3) offering technical assistance and training to countries, at their request, to help them build the expertise and institutions they need to implement sound economic policies. As part of its surveillance activities, IMF conducts surveillance of its members’ financial sectors at the bilateral and multilateral levels and research and analysis of macroeconomic and financial issues. A comprehensive and in-depth review of individual members’ and jurisdictions’ financial sectors is undertaken by the mandatory financial stability assessments (mandatory FSAs) or the Financial Sector Assessment Program (FSAP), while Article IV staff reports (and associated selected issue papers) cover financial sector issues at a higher frequency and often follow up on mandatory FSA or FSAP recommendations. IMF’s multilateral surveillance appears in the form of regular reports, such as the Global Financial Stability Report, World Economic Outlook, Fiscal Monitor, and Spillover reports. IMF has worked with FSB and international standard-setting bodies to develop standards and guidance, to the extent those activities are consistent with its mandate. In addition to these activities, the G20 has tasked IMF and FSB with the responsibility for conducting early warning exercises, which typically take place twice a year, to assess risks to global financial stability. The Basel Committee is the primary global standard-setter for the prudential regulation of banks and provides a forum for cooperation on banking supervisory matters. Established in 1975, it sets supervisory standards and guidelines to promote global financial stability.
The standards have no legal force but are developed and issued by Basel Committee members, with the expectation that individual national authorities will implement them. The Basel Committee members include central banks or bank supervisors for 27 jurisdictions. The Basel Committee expanded its membership in 2009, adding the financial authorities of Argentina, Hong Kong SAR, Indonesia, Saudi Arabia, Singapore, South Africa, and Turkey to its membership. With the G20’s support, the Basel Committee recently established a more active program to monitor members’ commitments to implement Basel Committee standards. The Basel Committee works with FSB, of which it is a member, and other international standard-setting bodies to address financial reform issues within its mandate. The Basel Committee is a sponsor organization of the Joint Forum, which also includes IOSCO and IAIS, and which coordinates work on issues of common concern. IOSCO sets global standards for the securities sector to protect investors, ensure efficient markets, and address systemic risks. Its members include more than 120 securities regulators. It also has affiliated members, including 80 other securities markets participants (such as stock exchanges). Established in 1983, IOSCO develops, implements, and promotes adherence to internationally recognized standards for securities regulation. It works with the G20 and FSB to develop standards and guidance to implement the global regulatory reforms that apply to securities markets and institutions. IOSCO is a founder of the Joint Forum, along with the Basel Committee and IAIS. IOSCO also works with the Committee on Payment and Settlement Systems (CPSS) on reform efforts related to financial market infrastructures, including central clearing counterparties. 
IAIS is the international standard-setting body responsible for developing and assisting in the implementation of principles, standards, and other supporting material for the supervision of the insurance sector. Established in 1994, IAIS’s objectives are to promote effective and globally consistent supervision of the insurance industry; to develop and maintain fair, safe, and stable insurance markets; and to contribute to global financial stability. Its members include insurance supervisors and regulators from more than 200 jurisdictions in approximately 140 countries, including the United States. Nongovernmental organizations and private sector entities also participate in IAIS activities as observers. As noted above, IAIS is a member of the Joint Forum. CPSS is a standard-setting body for payment, clearing, and securities settlement systems. Established in 1990, it also serves as a forum for central banks to monitor and analyze developments in domestic payment, clearing, and settlement systems. Its members include 25 central banks responsible for payment and settlement systems. CPSS is a member of FSB and cooperates with other groups, including IOSCO and the Basel Committee, to address issues of common concern. IADI is the global standard-setting body for deposit insurers. IADI’s activities include developing principles, standards, and guidance to enhance the effectiveness of deposit insurance systems; developing methodologies for assessing compliance with those principles, standards, and guidelines; and facilitating assessment processes. IADI also provides guidance for establishing new—and enhancing existing—deposit insurance systems, and encourages international contact among deposit insurers and other interested parties. IADI has 73 member organizations from 71 countries.
It recently worked with the Basel Committee to produce the Core Principles for Effective Deposit Insurance Systems, which was designated by FSB as one of the 12 key standards for sound financial systems. IASB is the standard-setting body of the IFRS Foundation, an independent, nonprofit, private sector organization working in the public interest. Established in 2001, IASB carries out the IFRS Foundation’s stated objective of developing a single set of high-quality, understandable, enforceable, and globally accepted accounting standards. IASB members are responsible for the development and publication of International Financial Reporting Standards (IFRS). IASB members are independent experts, and the board is required to reflect geographical diversity. The IFRS Foundation also has as principal objectives promoting the use and rigorous application of IFRS; taking account of the reporting needs of emerging economies and small and medium-sized entities; and promoting and facilitating the adoption of IFRS through the convergence of national accounting standards and IFRS. Various jurisdictions also have formed informal coalitions to address specific multilateral financial issues. For example, according to U.S. regulatory officials who are involved in the group, the OTC Derivatives Regulators Group (ODRG) is an informal group of regulators in jurisdictions that account for the most significant derivatives activity around the world: Australia, Brazil, the European Union, Hong Kong SAR, Japan, Ontario and Quebec (Canada), Singapore, Switzerland, and the United States. (Securities are regulated at the province level in Canada.) The ODRG focuses more strategically than the committees and activities under FSB and IOSCO on addressing critical cross-border issues in the OTC derivatives markets. ODRG reports periodically to the G20 finance ministers and central bank governors about its progress in identifying and addressing cross-border regulatory issues.
Over time, the G20 leaders have expanded the areas covered by their reform agenda. A table in this report summarizes the key sectors and functional areas covered by the G20 financial reform commitments. The G20 leaders generally have called on their national authorities—along with FSB; standard-setting bodies, such as the Basel Committee and IOSCO; and other bodies—to convert their broad financial reform commitments into more specific standards (including policies, principles, practices, or guidance). Although the standards are developed under the auspices of FSB or standard-setting bodies (or both), the work of many of these entities largely is carried out by staff of finance ministries, central banks, and financial sector regulators of the member jurisdictions. Because international standards are not legally binding, individual countries or jurisdictions must voluntarily adopt them, such as through legislative or regulatory changes, or both, for the standards to take effect. For example, a jurisdiction may need to pass legislation and adopt regulations to implement one standard but only a regulation to implement another standard. In that regard, the legal and practical abilities of the G20 leaders to commit to legal and regulatory changes can vary widely, depending on the structure of the regulatory system in their jurisdictions (i.e., whether there are independent regulatory agencies) and on the nature of the relationship between the executive and legislative branches in their jurisdictions. Figure 2 illustrates this multistep process. Although the G20 is serving as the main forum at the international level for reforming financial regulations, some academics have questioned the reliance on the G20 and other international bodies to reform international financial standards, citing various potential challenges with the current approach.
For example, some maintain that international financial standards existed before the 2007-2009 financial crisis and were intended to reduce systemic risk but failed to prevent or mitigate the recent crisis. However, U.S. regulators and others have pointed to gaps or weaknesses in the international framework or standards that existed before the crisis and that the G20 reforms are intended to address. Additionally, some academics have questioned the potential effectiveness of international financial agreements or standards, arguing that their informal and nonbinding nature allows members to face only limited consequences for noncompliance. In contrast, others note that the informal approach influences behavior, with many governments adopting international financial standards into domestic law, or maintain that a formal approach to enforcement would not necessarily be more effective and could raise domestic sovereignty issues. Finally, some academics have commented that FSB’s limited and skewed geographic membership—despite its expansion to include some emerging countries—still may affect perceptions about its legitimacy. However, FSB’s charter includes provisions for FSB to consult with nonmembers on strategic plans, principles, standards, and guidance, and to allow nonmembers to participate, on an ad hoc basis, in its working groups and committee meetings. As discussed earlier, FSB has a work plan to review the structure of its representation, which is to be completed by the November 2014 summit. Congress passed the Dodd-Frank Act in 2010 in response to the regulatory and oversight weaknesses identified after the 2007-2009 financial crisis. As summarized on the Senate Banking Committee’s website, the act seeks to (1) address risks to the stability of the U.S.
financial system, in part through the creation of the Financial Stability Oversight Council (FSOC), (2) end too-big-to-fail bailouts of large, complex financial institutions, (3) increase transparency and regulation for certain complex financial instruments, and (4) strengthen protections for consumers and investors. The act requires federal agencies to issue hundreds of regulations to implement the act’s requirements. Regulators have proposed many of the rules, but many of the Dodd-Frank Act rulemakings had yet to be finalized as of December 2013. As discussed later in this report, many of the Dodd-Frank Act’s provisions are similar to the international financial reform commitments agreed to by the G20 leaders. The United States has been active in the international financial regulatory reforms intended to address regulatory and other weaknesses revealed by the 2007-2009 financial crisis. Through its participation in the G20, the United States has helped set the G20 financial regulatory reform agenda. Moreover, through their participation on various international bodies, U.S. financial regulators and, where relevant, Treasury have helped develop standards to implement the G20 reform agenda. However, U.S. financial regulators have faced challenges in implementing the G20 financial reform agenda. The United States has played an important role in elevating the G20 summits to the level of head of state (or government) and in setting G20 agendas for reforming international financial regulation. According to Treasury officials, during the acute phase of the financial crisis in 2008, the United States proposed elevating the G20 forum from the traditional level of finance ministers and central banks to the level of heads of state or government. To that end, the G20 leaders, including the U.S. President, held a summit for the first time in Washington, D.C., in November 2008.
The main objective for elevating the G20 forum was to help the world’s major economies cope with the then-ongoing financial crisis and establish a framework to help prevent future financial crises. Among other things, the G20 leaders established principles for financial regulatory reform and developed a list of initial reform commitments. The U.S. President has attended the subsequent G20 leaders’ summits and has continued to play an active role in helping to support or expand the G20 financial reform agenda. The G20 members’ finance ministers and central bankers also have been meeting regularly to advance the reform agenda. In addition to the U.S. President’s direct participation in the G20 summits, the United States has helped to set the G20 financial reform agenda. For example, as host to the G20 summits in Washington (2008) and Pittsburgh (2009), U.S. officials were responsible for coordinating the preparation of the summit agendas and reform agreements. Agreements reached at the Pittsburgh summit included commitments by the G20 leaders to regulate the OTC derivatives markets and establish procedures to manage the failure of systemically important financial institutions. Moreover, U.S. officials have helped support and advance specific reform proposals for other summits. For example, in the lead-up to the London G20 summit in 2009, the United States publicly supported increasing capital requirements for banks, creating FSB, and expanding the scope of regulation to systemically important institutions and markets. In the lead-up to the Toronto summit in June 2010, the United States reiterated its support for more stringent capital and liquidity requirements for banks. At the Toronto summit, leaders pledged to endorse the forthcoming capital reforms (i.e., the Basel III capital standards) at their summit in Seoul in November 2010. The United States also has been coordinating with international bodies and regulators to put in place domestic financial reforms.
For instance, after the G20 summit in Seoul in November 2010, U.S. officials noted that the passage of the Dodd-Frank Act put the United States at the forefront of global financial reform. U.S. officials also highlighted that a number of the Dodd-Frank Act’s provisions aligned with the G20 reform commitments. These included provisions for (1) a resolution regime, (2) a framework of oversight and reporting for OTC derivatives markets, (3) regulation of all firms that pose the most risk to the financial system, and (4) a registration requirement for advisers to hedge funds. Further, the officials noted that the United States is working closely with the European Union and others to ensure that the G20’s agenda for regulatory reform is implemented. (See the White House fact sheet on the Seoul summit, available at http://www.whitehouse.gov/the-press-office/2010/11/12/g-20-fact-sheet-us-financial-reform-and-g-20-leaders-agenda.)

FSB: FSB coordinates implementation of the G20 financial reform agenda. The Federal Reserve, SEC, and Treasury serve on FSB’s Steering Committee and Plenary, which is FSB’s decision-making body. These agencies also chair or are members of three key standing committees and, with CFTC, FDIC, the Federal Reserve Bank of New York, and OCC, have participated in or chaired other FSB working groups.

Basel Committee: The Basel Committee develops prudential standards for banks. FDIC, the Federal Reserve, and OCC are members of the Basel Committee. OCC chairs one of two key subcommittees—the Supervision and Implementation Group. FDIC officials noted that the agency chairs a task force exploring options to improve the simplicity and comparability of the capital framework.

IOSCO: IOSCO sets global standards for the securities sector. SEC and CFTC have served in leadership roles in IOSCO and informal groups. For example, SEC and CFTC are both members of the IOSCO Board.
In addition, CFTC and SEC co-chaired an IOSCO OTC derivatives task force that established standards for mandatory clearing. CFTC also co-chairs a separate IOSCO committee on commodity derivatives.

CPSS: CPSS sets global standards for payment, clearing, and securities settlement systems. The Federal Reserve and the Federal Reserve Bank of New York are members of CPSS. They have participated in a number of CPSS and CPSS-IOSCO efforts, including the development of risk management standards for financial market infrastructures.

IAIS: IAIS sets global standards for the insurance industry. The U.S. Treasury’s Federal Insurance Office chairs an IAIS committee that leads the development of prudential standards. The Federal Insurance Office also has served in leadership roles and as a member of other IAIS committees, subcommittees, and working groups.

IADI: IADI sets global standards for deposit insurers. The FDIC is an active member of IADI, sits on its Executive Council, and chairs and participates in a number of IADI committees, subcommittees, and working groups.

Figure 3 shows which U.S. authorities are members of selected international bodies (as represented by the solid lines). Although CFTC, FDIC, and OCC are not members of FSB, they have participated in or chaired FSB working groups (as represented by the dashed lines). As members of FSB and international standard-setting bodies, U.S. financial regulators and Treasury have been actively involved in developing many of the international financial standards (including policies, principles, practices, or guidance) that implement the G20 financial reform commitments. Since 2008, FSB and international standard-setting bodies have developed an array of new or revised standards covering a broad range of issues, including banking, OTC derivatives, compensation practices, shadow banking, and SIFIs and resolution regimes.
(See appendix III for a more detailed list of reform areas and objectives, related standards, and the participation of U.S. agencies.) Examples of standards that U.S. authorities helped develop include the following:

Basel capital standards: FDIC, the Federal Reserve, the Federal Reserve Bank of New York, and OCC, as U.S. representatives to the Basel Committee, helped develop the Basel III capital standards, which set higher capital requirements for banks and introduced a new global liquidity standard. The Basel Committee released Basel III in December 2010, in part in response to the G20 leaders’ calls for higher standards for capital and enhanced supervision.

OTC derivatives reforms: CFTC, the Federal Reserve, or SEC helped develop standards issued by IOSCO or CPSS (or both) for financial market infrastructures, central clearing of OTC derivatives, and reporting of OTC derivatives. These standards respond to the G20 leaders’ commitment in 2009 to ensure that all standardized OTC derivative contracts would be traded on organized platforms, where appropriate, and cleared through central counterparties, and that all OTC derivatives contracts would be reported to trade repositories. CFTC, FDIC, the Federal Reserve, OCC, and SEC helped develop standards issued by the Basel Committee and IOSCO on margin requirements for non-centrally cleared OTC derivatives, as requested by the G20.

Enhanced supervision of SIFIs and resolution regimes: FDIC, the Federal Reserve Bank of New York, and Treasury helped develop standards issued by FSB for resolution regimes. The regimes would help enable authorities to resolve failing financial firms in an orderly manner and without exposing the taxpayer to the risk of loss.
According to FSB, these standards respond to a 2009 request by the G20 leaders to address too-big-to-fail problems associated with SIFIs—that is, when the threatened failure of a SIFI leaves public authorities with no option but to provide public funds to avoid widespread financial instability and economic damage. Moreover, the standards are part of a broader SIFI framework intended to reduce both the probability and impact of SIFIs failing. In our discussions with U.S. federal financial regulators, they identified time or resource constraints as key challenges in helping to develop international financial standards. One regulator said its staff members feel tension between allocating time to their routine regulatory duties and their international work. A U.S. regulator also said that since 2008, there has been a constantly increasing number of work streams, groups, and projects flowing from the G20, many with relatively short deadlines. Moreover, the number of overall projects tends to increase over time as old projects evolve into new areas and new projects are initiated. One of the regulators estimated that the number of projects had at least doubled since the start of the 2007-2009 financial crisis. Two of the regulators also told us they have faced resource constraints, such as not having the travel funds to attend meetings. The number of members from any one country is intended to be representative of the size of national economies, financial market activities, and national financial stability arrangements. However, two U.S. regulators told us FSB’s selection of members has created additional coordination work for U.S. federal authorities. According to a U.S. regulator, U.S. membership in international bodies does not always reflect the significance of the U.S. economy. For example, while U.S. firms have a dominant share of the OTC derivatives markets, CFTC—one of the primary U.S. 
regulators responsible for overseeing derivatives markets—has no representation on the FSB Plenary, the FSB decision-making body. As a result, U.S. regulators have had to devote time and effort coordinating input and responses from U.S. regulators not represented on FSB. Treasury officials told us that FSB must limit the number of representatives from any member jurisdiction to prevent its membership from becoming too unwieldy, but recognized that this limitation creates additional coordination work for U.S. financial regulators. Treasury staff told us that to address this limitation, Treasury established a liaison, who coordinates closely and regularly with all U.S. financial regulators to keep them informed of FSB’s activities and work products and to obtain their input. The United States and other jurisdictions report having made progress implementing the G20’s international financial reforms, but most reforms have not been implemented by all jurisdictions. Under its mandate, FSB is responsible for coordinating and promoting the monitoring of the implementation of the G20 reform commitments and reporting on the implementation progress to the G20. In collaboration with standard-setting bodies, FSB established a framework in 2011 to monitor and report on the implementation of the G20 financial reform commitments, including the related international financial standards. In addition, FSB and IMF have programs to assess members’ compliance with international financial standards and foster a level playing field. However, a broad range of legal, economic, and political factors can create implementation challenges for jurisdictions. The failure to implement the international reforms consistently could, among other things, hinder the ability of national authorities or international bodies to protect against developments affecting national and international financial stability.
FSB has selected priority reform areas that undergo more intensive monitoring and detailed reporting than other reform areas, and the list of priority areas is reviewed annually by FSB and revised as needed. FSB selects the areas based on the importance of their consistent and comprehensive implementation to global financial stability. For each priority area, an FSB working group or standard-setting body is responsible for monitoring implementation progress and periodically preparing a progress report. Currently, FSB’s priority areas are (1) the Basel framework; (2) OTC derivatives market reforms; (3) policy measures for SIFIs; (4) resolution regimes; (5) compensation practices; and (6) shadow banking. Jurisdictions vary in their stage of implementing the priority area reforms. At their 2010 summit in Seoul, the G20 leaders endorsed the Basel III capital standards and committed to adopt and implement the standards. According to the Basel Committee’s progress report issued in October 2013, 11 of its 27 members had implemented in full the Basel capital framework, which includes the Basel II, 2.5, and III standards (see table 2). Specifically, 12 jurisdictions reported that they had issued final Basel III capital rules that were legally in force. At that time, the United States and 14 other jurisdictions reported that they had issued final rules to implement the Basel III capital standards, but the rules had not yet taken effect. The remaining member jurisdiction reported that a regulation on Basel III was to be issued in 2013. As previously discussed, although the Basel Committee members have adopted or will adopt rules to implement the Basel capital standards, the adoption of the standards does not necessarily ensure that they will be applied consistently across banks and jurisdictions. In that regard, the Basel Committee established the Regulatory Consistency Assessment Program in 2012.
The committee monitors the transposition of the Basel III standards into domestic regulations semiannually, based on information provided by each of its member jurisdictions. The aim of such monitoring is to ensure that the internationally agreed timeline remains on track. The committee publishes its results in regular progress reports (discussed earlier). The Basel Committee also assesses the consistency of implementation of the Basel III standards. These assessments are done on a jurisdictional and thematic basis. Jurisdictional assessments review the extent to which domestic Basel III regulations in each member jurisdiction are aligned with the Basel III standards. The assessments examine the consistency and completeness of the adopted standards, including the significance of any deviations in the standards, and provide an overall assessment of compliance using a four-grade scale: compliant, largely compliant, materially noncompliant, and noncompliant. As of year-end 2013, the committee had completed seven jurisdictional assessments. In its assessments of Brazil, China, Switzerland, Singapore, and Japan, the committee found their rules generally to be compliant with the Basel standards. The committee conducted assessments of the European Union’s and the United States’ proposed Basel III rules but did not assign them a grade because of the draft nature of the rules. Thematic assessments review regulatory outcomes to ensure that the prudential ratios calculated by banks are consistent across banks and jurisdictions and predominantly reflect differences in risk rather than in practice. The committee initially focused its thematic assessments on analyzing how banks were weighting (or valuing) assets based on their risk level, because differences in the application of the Basel standards can lead to variations in the amount of capital banks have to hold.
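The arithmetic behind this concern can be sketched in a few lines. The sketch below is illustrative only: the asset categories, risk weights, and capital figure are invented for the example and are not the actual Basel III parameters of any jurisdiction.

```python
# Illustrative sketch: how risk weights drive a bank's capital ratio.
# All figures and weights below are hypothetical, not Basel III values.

def risk_weighted_assets(exposures, weights):
    """Sum each exposure multiplied by its assigned risk weight."""
    return sum(exposures[asset] * weights[asset] for asset in exposures)

def capital_ratio(capital, rwa):
    """Capital ratio = eligible capital divided by risk-weighted assets."""
    return capital / rwa

# The same balance sheet under two hypothetical weighting schemes.
exposures = {"sovereign_bonds": 400, "mortgages": 300, "corporate_loans": 300}
weights_a = {"sovereign_bonds": 0.0, "mortgages": 0.35, "corporate_loans": 1.0}
weights_b = {"sovereign_bonds": 0.0, "mortgages": 0.50, "corporate_loans": 1.0}

rwa_a = risk_weighted_assets(exposures, weights_a)  # ~405
rwa_b = risk_weighted_assets(exposures, weights_b)  # ~450

capital = 40
print(f"Ratio under scheme A: {capital_ratio(capital, rwa_a):.1%}")  # ~9.9%
print(f"Ratio under scheme B: {capital_ratio(capital, rwa_b):.1%}")  # ~8.9%
```

Holding the balance sheet and capital fixed, a change in how assets are weighted changes the reported ratio; that is the kind of variation across banks and jurisdictions that the thematic assessments are designed to detect.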
In that regard, the objective of the assessments generally has been to obtain a preliminary estimate of the potential for variation in risk-weighted assets across banks and highlight aspects of the Basel standards that contribute to this variation. The committee's two assessments on this issue found considerable variation across banks in the average risk-weighted assets for market risk in the trading book and credit risk in the banking book. Part of the variation was attributed to differences in supervisory practices or decisions. According to a Basel Committee Chairman, information from the studies is being used by national supervisors and banks to take action where needed, such as to improve consistency. The Basel Committee also plans to use the results as part of its ongoing policy work.

At the 2009 summit in Pittsburgh, the G20 leaders committed that all standardized OTC derivatives should be traded on organized trading platforms and centrally cleared; all OTC derivatives should be reported to a repository; and all noncentrally cleared OTC derivatives should be subject to higher capital requirements. At the 2011 summit in Cannes, the G20 leaders further agreed to require noncentrally cleared OTC derivatives to be subject to margin requirements—the posting of collateral to offset losses caused by the default of a derivatives counterparty. FSB and standard setting bodies, including the Basel Committee, CPSS, and IOSCO, have issued most of the standards needed to implement the G20 OTC derivatives reforms. According to FSB's sixth progress report on implementation of OTC derivatives market reforms issued in September 2013, over half of FSB's 19 member jurisdictions, including the United States, reported having proposed or enacted legislation to require OTC derivatives transactions to be centrally cleared, traded on organized trading platforms, and reported to trade repositories.
As shown in table 3, five jurisdictions, including the United States, reported having proposed or enacted legislation to implement the margin requirements. Table 3 also shows that the United States is the only jurisdiction with regulations in force and operating for the central clearing, organized platform trading, and trade reporting requirements (for at least part of the market), but many jurisdictions reported having adopted regulations for the trade reporting requirement. Only one jurisdiction reported having adopted regulations for the margin requirement. According to FSB, the schedule for further changes in legislative and regulatory frameworks is uneven across jurisdictions and commitment areas. In its progress report, FSB also noted that in light of the global nature of OTC derivatives markets, cross-border coordination was needed to avoid unnecessarily duplicative, inconsistent, or conflicting regulations. FSB plans to publish a progress report by April 2014 that provides, among other things, an updated assessment of reform implementation, including any remaining issues in the cross-border application of regulations. While cross-border coordination issues persist, we reported in late 2013 that CFTC and SEC took steps to coordinate with foreign authorities on several rulemakings related to Dodd-Frank swap reforms, which include clearing, exchange trading, and reporting requirements. CFTC issued many swap-related rules and noted that it coordinated with international bodies, such as the European Securities Markets Authority, European Central Bank, and regulators in the United Kingdom, Japan, Hong Kong SAR, Singapore, Sweden, and Canada. On the swap entities rule, CFTC and SEC staffs said that they participated in numerous conference calls and meetings with international regulators.
The 2007-2009 financial crisis revealed weaknesses in the existing regulatory framework for overseeing SIFIs, which FSB defines as institutions of such size, market importance, and interconnectedness that their distress or disorderly failure could destabilize the financial system and result in severe economic consequences. According to FSB, when the threatened failure of a SIFI leaves public authorities with no option but to provide public funds to avoid financial instability and economic damage, the SIFI can be considered too big—or too important—to fail. The knowledge that this can happen could encourage SIFIs to take excessive risks (referred to as moral hazard) and may represent a large implicit public subsidy of private enterprise. At the Pittsburgh Summit in 2009, G20 leaders called on FSB to propose measures to address the systemic and moral hazard risks associated with SIFIs. FSB developed a framework intended to reduce the probability and impact of SIFIs failing. The SIFI framework recommends new international standards for national resolution regimes (called "Key Attributes of Effective Resolution Regimes for Financial Institutions") and requirements for banks determined to be globally systemically important to have additional loss absorption capacity to reflect the greater risk they pose to the global financial system. At the 2010 summit in Seoul, the G20 leaders endorsed the FSB's SIFI framework. In its April 2013 peer review report on resolution regimes, FSB reported that some of its member jurisdictions developed new or revised existing resolution regimes. For example, FSB noted that the United States has implemented a new resolution regime—referred to as Orderly Liquidation Authority (OLA)—aligned with FSB's key attributes through its passage of the Dodd-Frank Act. OLA includes broad authorities to wind up failing financial companies that meet certain systemic criteria.
FSB also noted that Australia, Germany, Mexico, Netherlands, Spain, Switzerland, and the United Kingdom have amended their resolution regimes through legislative changes. At the same time, FSB noted that many of its member jurisdictions need to take further legislative measures to implement the key attributes fully in substance and scope. In its report entitled Report on Progress and Next Steps Towards Ending Too-Big-To-Fail (TBTF) issued in September 2013, FSB noted that it will coordinate with IMF, the World Bank, and international standard setting bodies to finalize a methodology to assess implementation of the key attributes at the national level for use by IMF and the World Bank in their Standards and Codes Initiative (discussed later). In addition to legislative changes, FSB found that sector-specific regimes for restructuring or winding down financial firms exhibited a broad range of practices in terms of scope and authorities. According to FSB, this is to be expected, because the key attributes do not prescribe the specific form of the resolution regime as long as the regime is consistent with the key attributes. All FSB members, including the United States, reported that they have specific powers to restructure or wind up banks (or both) that are distinct from ordinary corporate insolvency (see table 4). However, the extent to which the resolution regimes also cover insurers, investment or securities firms, and financial market infrastructure varies across jurisdictions. Additionally, in its April 2013 peer review report on resolution regimes, FSB reported that the resolution regimes of most of its members neither require nor prohibit cooperation with foreign resolution authorities. FSB regards legal frameworks for cross-border cooperation as a key attribute of resolution regimes.
According to FSB, eight jurisdictions have statutory provisions that explicitly empower or strongly encourage resolution authorities to cooperate with foreign authorities (Australia, Hong Kong SAR, Indonesia, Japan, Spain, Switzerland, United Kingdom, and United States), while several others indicated that it is their policy to cooperate where possible. In 2012, we reported that international coordination remains a critical component in resolving the failure of a large, complex financial company and that regulators have been taking steps to address this issue, including through their work with FSB. More recently, in a 2013 report, we examined the advantages and disadvantages of certain proposals to revise the U.S. Bankruptcy Code for financial company bankruptcies, including proposals to change the role of financial regulators in the bankruptcy process and the special treatment of qualified financial contracts, such as derivatives. We recommended that FSOC should consider the implications for U.S. financial stability of changing the role of regulators and the treatment of qualified financial contracts in financial company bankruptcies. Although our recommendation continues to have merit, FSOC has not yet implemented the recommendation. FSB made related recommendations in its peer review on resolution regimes (discussed later)—namely, designating a lead authority for resolving domestic entities of the same group and introducing powers to impose a temporary stay on the exercise of contractual acceleration or early termination rights in financial contracts, subject to suitable safeguards.

Identification of Largest and Most Complex SIFIs

Complementing the resolution regime reforms, FSB, the Basel Committee, IAIS, and others have taken steps to reduce the probability of the failure of SIFIs, in part by working to establish requirements aimed at increasing SIFIs' capacity to absorb losses.
In November 2013, FSB published its updated annual list of global systemically important banks (G-SIB), which generally comprise the largest and most complex internationally active banks. As shown in table 5, there were 29 G-SIBs headquartered in 11 countries: 8 in the United States; 4 each in France and the United Kingdom; 3 in Japan; 2 each in China, Spain, and Switzerland; and 1 each in Germany, Italy, Netherlands, and Sweden. G-SIBs are grouped into one of five buckets based on their systemic importance, which correspond to increasing levels of additional loss absorbency requirements. The requirements are to be updated shortly, implemented by jurisdictions, and phased in from January 2016, with full implementation by January 2019. According to the Basel Committee's August 2013 progress report, Canada and Switzerland have issued final regulations for G-SIBs and domestic systemically important banks (D-SIBs) and begun to enforce them. Ten of the Basel Committee members (South Africa and EU member states) have issued final D-SIB regulations that were not yet in force. The remaining member jurisdictions, including the United States, had not yet issued draft rules. At the time, U.S. regulators expected to issue a notice of proposed rulemaking for G-SIBs by year-end 2013. As of February 2014, the U.S. regulators had not issued a rule proposal to implement the Basel Committee's G-SIB risk-based capital surcharge framework. FSB and standard setting bodies also have been extending the SIFI framework to other institutions. First, in response to a request by the G20 leaders, FSB extended the G-SIFI framework to domestic systemically important banks. In October 2012, the Basel Committee issued its framework for dealing with such banks, which focuses on the impact that the distress or failure of banks will have on the domestic economy. Second, IAIS developed a methodology to identify global systemically important insurers.
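The bucket structure described above can be sketched as a simple lookup. The surcharge levels (1.0 to 3.5 percent of risk-weighted assets, held as additional common equity Tier 1 capital) follow the schedule published in the Basel Committee's G-SIB framework; the bank in the example is hypothetical.

```python
# Sketch of the G-SIB bucket-to-surcharge mapping from the Basel
# Committee's published schedule. The top bucket is kept empty by
# design, as a disincentive to banks becoming even more systemically
# important.
GSIB_SURCHARGE = {1: 0.010, 2: 0.015, 3: 0.020, 4: 0.025, 5: 0.035}


def additional_loss_absorbency(bucket, risk_weighted_assets):
    """Extra CET1 capital a G-SIB in the given bucket must hold."""
    return GSIB_SURCHARGE[bucket] * risk_weighted_assets


# A hypothetical bucket-3 G-SIB with $1 trillion in risk-weighted assets
# would need an additional 2.0 percent, or $20 billion, of CET1 capital.
extra_capital = additional_loss_absorbency(3, 1_000_000_000_000)
print(f"Additional loss absorbency: ${extra_capital:,.0f}")
```

The escalating schedule is the mechanism by which the framework ties a bank's systemic importance directly to the extra loss-absorbing capacity it must carry.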
In July 2013, FSB, in consultation with IAIS and national authorities, identified nine insurers (including three U.S. insurers) as global systemically important insurers, which will be subject to a set of policy measures consistent with the SIFI framework. Third, FSB, in consultation with IOSCO, has developed draft methodologies to identify nonbank, noninsurance G-SIFIs, which were issued for public consultation in January 2014.

According to FSB's second progress report issued in August 2013, all but two FSB jurisdictions (Argentina and Indonesia) have implemented FSB's compensation principles and standards in their national regulation or supervisory guidance. The principles and standards for significant financial institutions include having a board remuneration committee as an integral part of their governance structure, ensuring that total variable compensation does not limit institutions' ability to strengthen their capital base, and providing annual reports on compensation to the public on a timely basis. In light of the implementation status, FSB concluded that national implementation of the principles and standards can be considered largely complete and noted that the focus now is on effective supervision and oversight of firms. In addition, the report noted that while good progress continues to be made, more work needs to be done by national authorities and firms to ensure that implementation of the FSB principles and standards is effectively leading to more prudent risk-taking behavior. The report also noted that there still is some way to go before the improvements in compensation practices can be deemed effective and sustainable, particularly given the practical challenges to embedding risk management in firms' compensation practices. According to FSB, several authorities noted that firms still were expressing some concerns about a level playing field with respect to jurisdictions that may not have fully implemented the principles and standards.
At the same time, FSB noted that national authorities have yet to see any real evidence that the implementation of the principles and standards has impeded or diminished the ability of supervised institutions to recruit and retain talent. In 2012, FSB established the bilateral complaint handling process—a mechanism for national supervisors from FSB jurisdictions to bilaterally report, verify, and, if needed, address specific compensation-related complaints by financial institutions based on level playing field concerns. According to FSB, no firm had submitted a complaint as of August 2013. FSB plans to continue to monitor the implementation of the principles and standards.

The United States and other FSB members reported that they have implemented or are in the process of implementing most of the G20 financial reform commitments in the nonpriority areas. FSB generally monitors the implementation of the G20 reforms in the nonpriority areas less intensively—primarily through annual surveys of its members. Specifically, FSB's 2013 survey of its members covered the G20 financial reform commitments in the nonpriority areas. The data are self-reported by FSB member jurisdictions, and FSB generally does not evaluate the survey responses to verify the accuracy or assess the effectiveness of implementation. Although the priority and nonpriority areas overlap in some areas, the reform commitments in the nonpriority areas cover a broader range of sectors and functions, including shadow banking, hedge funds, securitization, credit rating agencies, financial markets, and supervision. As shown in table 6, implementation of the G20 financial reform commitments varies by nonpriority reform area. All FSB members reported that they implemented or have been implementing 15 of the commitments.
For example, all members reported making progress in implementing commitments to strengthen oversight of shadow banking, register hedge funds, regulate credit rating agencies, and enhance supervision, accounting standards, and financial consumer protection. In contrast, one or more FSB members reported no action to implement 11 nonpriority commitments, which include strengthening supervisory requirements for investment in structured products, enhancing disclosure of securitized products, strengthening national deposit insurance arrangements, and enhancing market transparency in commodity markets. Table 6 also shows that the United States reported that it has taken action to implement all but one of the nonpriority G20 reform commitments—strengthening of supervisory requirements or best practices for investment in structured products—and has completely implemented 19 of the 27 nonpriority reform commitments. The jurisdiction furthest along reported completely implementing 23 of the 27 nonpriority commitments. Additionally, 21 of the 24 FSB member jurisdictions reported having completely implemented 16 or more of the nonpriority commitments. Although the FSB survey data provide a broad picture of the implementation status of the G20 reform commitments in the nonpriority areas, the survey and its data have limitations. Importantly, the data are self-reported by FSB members. According to an FSB official, the FSB Secretariat has followed up bilaterally in a small number of cases to collect additional information and clarify certain responses, but FSB generally does not evaluate the survey responses to verify the accuracy or assess the effectiveness of implementation. As a result, the survey findings do not allow straightforward comparisons between jurisdictions or across reform areas. Also, some commitments are broadly defined and, to an extent, open to interpretation.
For example, one reform commitment for hedge funds is enhancing counterparty risk management, and FSB reported that 19 members effectively implemented the reform. One of these members reported addressing the issue partly through an annual hedge fund survey; in contrast, another member reported adopting legislation and regulation to implement the reform. To interpret each G20 reform commitment, FSB added a new field in its 2013 survey that identifies the international standard associated with a particular reform commitment.

While the Basel Committee conducts reviews of its members (discussed earlier), IMF and FSB also have programs to monitor the implementation of international financial standards and review the effectiveness of supervision. These programs include IMF and the World Bank's Financial Sector Assessment Program (FSAP) and Reports on the Observance of Standards and Codes (ROSC) assessments, as well as FSB thematic and country peer reviews. FSAP provides the framework for comprehensive and in-depth assessments of a country's financial sector. FSAP assessments gauge the stability of the financial sector and assess its potential contribution to growth. Historically, participation in FSAP has been voluntary, but in 2010 IMF made financial stability assessments under FSAP a mandatory part of the surveillance for members with systemically important financial sectors. As of November 2013, IMF has identified 29 jurisdictions, including the United States, as having such a sector, in part based on the size and interconnectedness of each country's financial sector. Similarly, following the 2007-2009 financial crisis, the G20 countries committed to undergo an FSAP every 5 years. For the first time, the United States underwent an FSAP between 2009 and 2010, when the Dodd-Frank Act was being deliberated and before it was enacted. The FSAP report discussed, among other things, the U.S.
experience with and recovery from the recent financial crisis, factors that contributed to the crisis, and legislative actions being undertaken by the United States to reform its financial system. The report included a number of recommendations broadly intended to institutionalize and strengthen systemic risk oversight; redesign the regulatory architecture; strengthen micro-prudential regulation and supervision; strengthen oversight of market infrastructure; enhance crisis management, resolution, and systemic liquidity arrangements; and address too-big-to-fail issues and the future of the housing government-sponsored enterprises. The report recognized that the Dodd-Frank Act was largely consistent with the FSAP recommendations but noted effective implementation would be key.

IMF and the World Bank also have a program to assess member compliance with international financial sector standards, the results of which are summarized in a ROSC. IMF and the World Bank have recognized international standards in policy areas identified as key for sound financial systems and deserving of priority implementation in consideration of a country's circumstances. The standards include those developed by the Basel Committee, CPSS, IOSCO, IAIS, IADI, and IASB. These assessments are voluntary, even in jurisdictions for which an FSAP stability assessment is mandatory. ROSCs can be done on a stand-alone basis or as part of an FSAP. For example, the FSAP review of the United States included ROSCs covering international banking, securities, insurance, clearing, and settlement standards. The assessment of the U.S. supervisory system against international codes identified many positive aspects but also some important shortcomings (see International Monetary Fund, "United States: Publication of Financial Sector Assessment Program Documentation—Reports on Observance and Codes," IMF Country Report No. 10/250 (July 2010)).
Under the FSB charter, member jurisdictions have committed to undergo periodic peer reviews. FSB's peer review program includes two types of reviews: thematic and country. Thematic reviews focus on the implementation and effectiveness of FSB-endorsed international standards deemed important for global financial stability. The objectives of the reviews are to encourage consistent cross-country and cross-sector implementation, evaluate (where possible) the extent to which standards have had their intended results, and identify gaps and weaknesses in reviewed areas. As previously discussed, FSB has conducted thematic peer reviews in the priority reform areas, such as compensation and resolution regimes. It also has conducted peer reviews in nonpriority reform areas, including the following: The ongoing peer review on credit ratings was assessing FSB member progress in implementing FSB's Principles for Reducing Reliance on Credit Rating Agency Ratings. The interim report identified several areas where accelerated progress is needed, including the need for FSB members to provide incentives to financial institutions to develop their own independent credit assessment processes. It also identified challenges that need to be addressed, which include reducing undue reliance on credit ratings in international standards, identifying suitable alternative standards of creditworthiness, and addressing constraints in the development of internal risk assessment systems. The peer review on risk governance found that national authorities, since the crisis, have taken several measures to improve regulatory and supervisory oversight of risk governance at financial institutions.
These measures include developing or strengthening existing regulation or guidance, raising supervisory expectations for the risk management function, engaging more frequently with the board and management, and assessing the accuracy and usefulness of the information provided to the board to enable effective discharge of their responsibilities. It also made four recommendations targeting areas in which more substantial work was needed, including strengthening regulatory and supervisory guidance, reviewing principles for risk governance, and exploring ways to formally assess risk culture at financial institutions.

FSB's country reviews focus on the implementation of international standards and their effectiveness nationally. The reviews examine steps taken or planned by national authorities to address FSAP and ROSC recommendations of IMF and the World Bank. (FSB peer reviews take place 2 to 3 years following an FSAP.) Unlike an FSAP, an FSB country review does not comprehensively analyze a jurisdiction's financial system structure or policies, or its compliance with international financial standards. According to an FSB official, both country and thematic peer reviews have the inherent limitation of being primarily "desktop-based" reviews, which constrains the review team's ability to engage in on-site interactions to assess implementation progress, challenges, and impact. FSB's peer review handbook notes that country peer reviews will include a brief on-site visit in the reviewed jurisdiction to meet with the authorities and, subject to the agreement of the reviewed jurisdiction, the on-site visit also may include meetings with relevant market participants. Such a visit should support the peer review's objective and be consistent with equal treatment of members under the peer review process, and its expected benefits should outweigh the resource costs. As part of this commitment, the United States volunteered to undergo a country peer review in 2013.
The review found that U.S. authorities had made good progress in following up on FSAP recommendations, particularly in regard to systemic risk oversight arrangements and the supervision and oversight of financial market infrastructures. At the same time, the review included recommendations targeted at systemic risk oversight arrangements, supervision and oversight of financial market infrastructures, and insurance supervision. For example, FSB recommended that FSOC develop a more systematic, analytical, and transparent macroprudential framework for coordinating efforts and incorporating the bottom-up views of member agencies to address systemic risk. In addition, FSB recommended that FSOC develop a more in-depth and holistic analysis of the systemic risks to financial stability. Similarly, in 2012, we reported that FSOC’s establishment of a Systemic Risk Committee to facilitate coordination among its member staffs can help FSOC analyze known risks, but the approach does not take full advantage of FSOC member agency resources to identify new threats to the financial system. We also reported that FSOC identifies a number of potential emerging threats to financial stability in its annual reports, but does not use a systematic, forward-looking approach to identify such threats. To address these weaknesses, we recommended, among other things, that FSOC develop (1) a monitoring approach that includes systematic sharing of key financial risk indicators across FSOC members and member agencies to assist in identifying potential threats for further monitoring or analysis; and (2) a more systematic, forward-looking approach for reporting on potential emerging threats to financial stability in annual reports. In its comment letter, Treasury stated that officials will carefully consider the report’s findings and recommendations. As of March 2014, Treasury had taken some steps to implement the recommendations, but the recommendations had not been fully implemented. 
While FSB noted that federal and state authorities in the United States began addressing FSAP recommendations on the insurance sector—for example, establishing the Federal Insurance Office—it also noted that significant additional work is needed. According to FSB, the structure and characteristics of insurance supervision in the United States—the multiplicity of state regulators, the absence of federal regulatory powers to promote greater regulatory uniformity, and the limited rights to preempt state law—constrain the federal government's ability to ensure regulatory uniformity in this sector. FSB recommended that the United States promote greater regulatory uniformity in the insurance sector, including by conferring additional powers and resources at the federal level where necessary.

As recognized by regulators, industry associations, and academics, a broad range of legal, economic, and political factors can create implementation challenges for jurisdictions. For example, differences in economic development of countries and differences in philosophy or ideology between jurisdictions can make it difficult for the international standards to be implemented consistently across jurisdictions. Representatives from one industry association told us that when standard setting bodies set narrow or detailed principles, such principles can become difficult to implement consistently because of jurisdictional differences. In addition, legislatures and industry groups may support more or less stringent requirements than called for by the standards. Finally, domestic regulators may apply or interpret the standards differently from other domestic regulators. As discussed previously, the Basel Committee's thematic reviews found variation in the application of the capital standards across jurisdictions and partly attributed the variations to differences in supervisory practices.
As recognized by the G20 leaders, international bodies, industry associations, and others, the failure to implement the international financial standards consistently across jurisdictions could have a number of negative consequences. Most importantly, such inconsistencies could hinder or weaken the ability of national authorities or international bodies to protect against developments affecting national and international financial stability and help prevent or mitigate future financial crises. Moreover, a regulator stated that inconsistent implementation could lead to an unlevel playing field for financial institutions or regulatory arbitrage. For example, financial markets or services could migrate to less-regulated or unregulated jurisdictions. It also could impose a variety of avoidable costs on financial institutions with negative consequences for customers, investors, and national and global economies. For example, financial institutions operating in multiple jurisdictions could be subject to conflicting or duplicative rules and, thus, higher compliance costs. Some academics and industry associations also have noted that complete global consistency across all financial regulations is not necessarily possible or preferable. Two academics suggest that if jurisdictions face significant limitations in their ability to reach agreement, harmonization efforts might lead to agreement on only weak global standards. An academic we interviewed said that harmonized regulations across all jurisdictions may provide a level playing field but could be problematic, in part by not providing jurisdictions with the flexibility to respond to their differences. He also said that standardizing regulations could cause financial institutions to behave in the same way and unintentionally concentrate risk (e.g., holding the same types of assets). 
Similarly, an industry association noted that international consistency does not require uniformity but an appropriate level of similarity, comparability, and predictability of regulatory outcomes across jurisdictions. According to the association, international consistency also means striking a balance between consistency and the need for sensible local differences and supervisory discretion. In light of the potential for inconsistent implementation in areas that may result in unnecessary negative consequences, the review programs operated by FSB, IMF, and international standard-setting bodies likely will play an important role in addressing this issue. Indeed, some academics and industry associations view FSB's peer reviews as an important mechanism for monitoring and encouraging compliance with international financial standards. Moreover, one academic suggests that the reviews may help deepen commitment to the standards by domestic officials by holding member jurisdictions accountable not only to an international body but also to each other. However, some express concern about the potential effectiveness of FSB peer reviews, in part because their recommendations, like the international financial standards, are not binding.

We provided a draft of this report to CFTC, FDIC, FSB, the Federal Reserve, IMF, OCC, SEC, and Treasury for their review and comment. All of the agencies except for the Federal Reserve provided technical comments, which we have incorporated, as appropriate. Treasury and SEC provided written comments that we have reprinted in appendixes V and VI, respectively. In commenting on our draft report, Treasury noted that the G20, FSB, and international standard-setting bodies have been cooperating since the financial crisis on advancing the international financial reform agenda and strengthening the global financial system. Treasury further noted that the United States has played a leadership role in designing and implementing this agenda.
Finally, Treasury agreed with the report that international reform efforts are not complete. Treasury noted that it would continue to work with other regulators to forge high-quality, compatible rules, encouraging reforms in other jurisdictions as strong as those in the United States, and would continue to promote greater consistency and convergence. SEC noted that it welcomed GAO's review of the international reform efforts and valued GAO’s perspective in this area. SEC stated that the report correctly notes that international standards are not legally binding and rely on the decision of national authorities to implement the standards (reflecting, among other considerations, appropriate respect for national sovereignty). SEC also noted that it was pleased that the report acknowledges that while negative consequences can flow from varying degrees of implementation of international standards, there also can be good reasons behind such differences, such as avoiding a movement to less robust standards or the unintentional concentration of risks. SEC agreed with the report’s discussion that there may be reasons to take into account variations in national legal and market structures and conditions, including differences in economic development and enforcement authority. SEC also commented on a number of specific issues in the report. SEC noted that it does not share the view that international organizations “implement” international standards, nor should they have that authority. Our report summarizes the international financial reform process, drawing a clear line between the development of international standards under the auspices of FSB or standard-setting bodies (or both) and the voluntary adoption of rules or policies consistent with these standards by jurisdictions, such as through legislation or regulations.
SEC also noted that interconnections permitted disruptions to spread quickly across borders but was unsure that such interconnections increased systemic risk as stated in our report. No single definition for systemic risk exists, but systemic risk has been viewed as the possibility that a single event could broadly affect the entire financial system, causing widespread losses rather than just losses at one or a few institutions. As we reported in January 2013, the 2007-2009 financial crisis illustrated the potential for systemic risk to be generated and propagated outside of the largest financial firms (such as by money market mutual funds), in part because of interconnections not only between firms but also between markets. SEC also noted the particular status of the accounting standard-setting bodies. In particular, both IASB and the Financial Accounting Standards Board are independent, private-sector organizations. Although the IASB is an FSB member, the legitimacy of the accounting standards that these bodies set depends, among other things, on those bodies’ ability to set accounting standards free from political interference. We have added a clarifying note in figure 4 in appendix II to reflect this comment. Finally, SEC noted that our report uses the term “shadow banking” and that it is not appropriate to use this term to refer to market-based financing, which serves a credit intermediation function. Our report discusses shadow banking because it is one of the G20 financial reform commitments—expanding the regulatory perimeter, including strengthening of the regulation and oversight of shadow banking. Our report does not define shadow banking but rather uses FSB’s Policy Framework for Addressing Shadow Banking Risks in Securities Lending and Repos (see appendix III, table 7) as a reference document. According to FSB, this policy was intended to help strengthen oversight and regulation of the shadow banking system.
The policy notes that the “shadow banking system” can broadly be described as “credit intermediation involving entities and activities (fully or partially) outside the regular banking system.” Therefore, the use of the term shadow banking is appropriate for this report. We are sending copies of this report to CFTC, FDIC, the Federal Reserve, OCC, SEC, and Treasury, interested congressional committees and members, and others. This report will also be available at no charge on our website at http://www.gao.gov. Should you or your staff have questions concerning this report, please contact me at (202) 512-8678 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix VII. This report reviews the (1) U.S. role in the international financial reform efforts, including the development of international financial standards, and (2) implementation status of recent international financial reforms in the United States relative to other jurisdictions and challenges or concerns that any uneven progress could present. To address the first objective, we reviewed and analyzed declarations, communiqués, and other statements issued by the G20 leaders about their agreed-to commitments to reform financial regulations. In addition, we reviewed and analyzed reports or other documents issued since 2008 by various international bodies—including the Financial Stability Board (FSB), International Monetary Fund (IMF), Basel Committee on Banking Supervision, International Organization of Securities Commissions, Committee on Payment and Settlement Systems, and International Accounting Standards Board—about their role in implementing the G20 reforms, such as through the development of international financial standards, or monitoring the implementation status of the G20 reforms at the international and jurisdictional levels. 
We also reviewed press statements, policy documents, or other material issued by U.S. and other jurisdictions about their work to support the G20 financial reforms. We interviewed U.S. financial regulatory authorities, including the Commodity Futures Trading Commission (CFTC), Federal Deposit Insurance Corporation (FDIC), Board of Governors of the Federal Reserve System, Office of the Comptroller of the Currency (OCC), Securities and Exchange Commission (SEC), and Department of the Treasury—and FSB and IMF officials about their role in the G20 reform efforts, including implementation challenges. To gain insights about the G20 financial reforms and associated implementation challenges, we also reviewed numerous studies by academics and other experts and interviewed four professors in the fields of law, economics, and political science and two industry associations representing banks or over-the-counter (OTC) derivatives market participants. We judgmentally selected these parties based on studies or other material they issued on the G20’s international financial reforms, and the results are not generalizable. To address the second objective, we reviewed, analyzed, and summarized progress reports, peer reviews, surveys, or other material prepared by FSB and international bodies, including the Basel Committee on Banking Supervision and IMF, on the implementation status of the G20 reforms in the priority areas at the jurisdictional level. Similarly, we also reviewed and analyzed FSB’s annual surveys of its member jurisdictions on their implementation status of the G20 reforms in the nonpriority areas. We reviewed the accuracy of U.S. responses to questionnaires administered by FSB or a standard setting body that covered U.S. 
progress implementing Basel II, 2.5, and III and the G20’s OTC derivatives reforms, including the requirements for OTC derivatives transactions to be centrally cleared, traded on organized trading platforms, and reported to trade repositories, and we generally found the U.S. responses were accurate. We also asked audit offices of 14 jurisdictions that are members of FSB—and which are participating in an International Organization of Supreme Audit Institutions working group on financial reforms—to do the same for their regulators’ responses. Finally, as identified above, we interviewed officials representing U.S. regulators, FSB, IMF, industry associations, and academics about challenges or concerns that uneven implementation of the G20 financial reforms across jurisdictions could present. We conducted this performance audit from March 2013 to April 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. As shown in figure 4, a variety of international bodies are part of the international financial architecture. The Bank for International Settlements provides a forum for international cooperation among central banks and within financial and supervisory communities. Its members as of July 2013 are central banks or monetary authorities of 59 economies plus the European Central Bank. The Bank for International Settlements acts as a bank for central banks; publishes economic and monetary research; and acts as a counterparty for central banks in their financial transactions and as agent or trustee in connection with international financial operations. 
It also hosts other international financial organizations and groups, such as FSB and the Basel Committee. The Committee on the Global Financial System is a central bank forum to monitor issues relating to financial markets and systems. It works to identify and assess potential sources of stress in global financial markets. The Financial Action Task Force is an organization of 36 jurisdictions that sets standards and promotes effective implementation of legal, regulatory, and operational measures for combating money laundering, terrorist financing, and other related threats to the integrity of the international financial system. The Organisation for Economic Co-operation and Development is a membership organization of 34 countries that promotes economic growth and employment among its members, while maintaining financial stability. It has cooperated with the G20 in areas related to promoting economic growth, and in areas of financial regulation such as developing principles on consumer protection. It also is involved in global tax standards development. The World Bank, established in 1944, provides financial and technical assistance to developing countries with the goals of ending extreme poverty and promoting shared prosperity by fostering income growth of the bottom 40 percent of every country. Headquartered in Washington, D.C., with 10,000 staff in 120 offices worldwide, the bank provides financial assistance products and services and engages in a range of knowledge-sharing activities. The Monitoring Board of the IFRS Foundation oversees the work of the International Accounting Standards Board (IASB). As noted in the background section of this report, the IASB carries out the IFRS Foundation’s stated objectives of developing a single set of high-quality, understandable, enforceable, and globally accepted accounting standards. IASB members are responsible for the development and publication of International Financial Reporting Standards (IFRS). 
The members of the Monitoring Board are the Growth and Emerging Markets Committee of the International Organization of Securities Commissions, the Financial Services Agency of Japan, the European Commission, and the U.S. Securities and Exchange Commission. Additional members of the Monitoring Board selected in January 2014 are the Comissão de Valores Mobiliários of Brazil and the Financial Services Commission of Korea. The Basel Committee on Banking Supervision is an observer. The Group of Twenty (G20) leaders have committed to undertake a broad range of financial regulatory reforms at the various summits held since 2008. The G20 leaders generally have tasked their national authorities— along with the Financial Stability Board (FSB), standard-setting bodies, such as the Basel Committee on Banking Supervision, the International Organization of Securities Commissions (IOSCO), International Association of Insurance Supervisors (IAIS), and other bodies—with converting their broad financial reform commitments into more specific standards (including policies, principles, practices, or guidance). Although the standards are developed under the auspices of FSB or standard setting bodies (or both), the work of many of these entities largely is carried out by staff of finance ministries, central banks, and financial sector regulators of the member institutions. As shown in table 7, various U.S. agencies have participated in the development of standards needed to implement the G20 reform commitments, and all have been involved in their review. The Financial Stability Board (FSB) is responsible for coordinating the implementation of the Group of Twenty’s (G20) financial reform commitments and reporting implementation progress to the G20. In collaboration with standard setting bodies, FSB established a framework in 2011 to monitor and report on the implementation of the G20 financial reform commitments. 
FSB has selected priority reform areas that undergo more intensive monitoring and detailed reporting than other reform areas, and the list of priority areas is reviewed annually by FSB and revised as needed. Priority reform areas are selected based on the importance of their consistent and comprehensive implementation toward global financial stability. For each priority area, an FSB working group or a standard setting body is responsible for monitoring implementation progress and periodically preparing a progress report. Currently, FSB’s priority areas are (1) the Basel II, 2.5, and III framework; (2) OTC derivatives market reforms; (3) compensation practices; (4) policy measures for global SIFIs; (5) resolution regimes; and (6) shadow banking. For the G20 financial reform commitments in the nonpriority areas, FSB generally monitors their implementation less intensively, primarily through annual surveys of its member jurisdictions. FSB’s 2013 survey of its member jurisdictions covered 27 G20 financial reform commitments. The survey data are self-reported by FSB member jurisdictions, and FSB generally does not evaluate the survey responses to verify the accuracy or assess the effectiveness of implementation. Figure 5 provides country profiles that summarize information on the implementation status of selected G20 financial reform commitments by the 11 countries that are home to global systemically important banks, as identified by FSB. The country profiles include information on the implementation status of G20 reform commitments in priority and nonpriority areas. The profiles also provide examples of how countries have implemented certain nonpriority reforms, such as through legislative or regulatory changes. These examples are excerpts taken from FSB’s 2013 surveys completed by the jurisdictions and reflect the differences in the approaches taken by the jurisdictions.
Finally, each profile includes information on a country’s population, gross domestic product (GDP), and global competitiveness index. In addition to the contact named above, Richard Tsuhara (Assistant Director), Rudy Chatlos, Catherine Gelb, Camille Keith Jennings, Thomas McCool, Thomas Melito, Marc Molino, Akiko Ohnuma, Barbara Roesmann, and Jessica Sandler made key contributions to this report.
Cross-border interconnections in the financial markets and other factors helped spread disruptions during the 2007-2009 financial crisis and increased systemic risk. In response to the crisis, the G20 positioned itself as the main international forum for reforming financial regulations. In 2008, the G20 leaders committed to implement a broad range of reforms designed to strengthen financial markets and regulatory regimes. In light of the G20's reform efforts and the potential implications of the reforms for the United States, GAO examined (1) the U.S. role in the international financial reform efforts and (2) the implementation status of recent international financial reforms in the United States relative to other countries and challenges that uneven implementation may present. To address these issues, GAO reviewed and analyzed reports or other documents issued by the G20, FSB, IMF, and other international bodies since 2008 and studies on the G20 reforms by academics, industry associations, and others. GAO reviewed the accuracy of U.S. responses to select questionnaires administered by FSB and asked other countries' national audit offices to do the same for their regulators' responses. Finally, GAO interviewed officials representing U.S. agencies, FSB Secretariat, IMF, industry associations, and academics. GAO is not making any recommendations in this report. The United States has played an active role in helping to reform financial regulations to address weaknesses revealed by the 2007-2009 financial crisis. According to Treasury officials, during the acute phase of the crisis, the United States proposed elevating the Group of Twenty (G20) forum—representing 19 countries (including the United States) and the European Union—from the level of finance ministers and central banks to the level of heads of state or government. In 2008, the U.S. 
President and other G20 leaders held their first summit in Washington, D.C., in part to establish a framework to help prevent financial crises. The G20 leaders established principles for financial regulatory reform and agreed on a series of financial reforms, which they have revised or expanded at subsequent summits. To implement their reforms, the G20 leaders generally have called on their national authorities—finance ministries, central banks, and regulators—and international bodies, including the Financial Stability Board (FSB) and standard setting bodies, such as the Basel Committee on Banking Supervision. In 2009, the G20 leaders established FSB to coordinate and promote implementation of the financial reforms, which typically involves standard setting bodies developing international standards (e.g., principles, policies, or guidance) and then jurisdictions voluntarily adopting rules or policies consistent with these standards, such as through legislation or regulations. As members of FSB and international standard setting bodies, U.S. federal authorities have actively helped formulate the standards that implement the G20 reforms and cover, among other things, banking, derivatives, and hedge funds. The United States and other jurisdictions have made progress implementing many of the G20 financial reform commitments, but most reforms have not been fully implemented by all jurisdictions. FSB and standard setting bodies collaboratively monitor and report on the implementation status of the G20 reforms. According to recent progress reports, the United States, like most FSB members, has implemented or is implementing G20 reforms that FSB designated as a priority based on their importance to global financial stability—including higher capital standards, derivatives reforms, compensation practices, policy measures for systemically important financial institutions, and regimes for resolving failing financial institutions. 
However, implementation varies among jurisdictions. For example, according to a September 2013 progress report, only the United States reported having rules at least partly in effect to implement the G20 reforms requiring derivatives to be centrally cleared, traded on organized trading platforms, and reported to trade repositories, while many other jurisdictions reported having rules in effect for only some of these reforms or adopted or proposed legislation to implement the reforms. To promote and monitor the adoption of the international standards by each jurisdiction, such as to ensure a level playing field, FSB, the International Monetary Fund (IMF), and the Basel Committee have established programs to review and assess their members' implementation of the standards. At the same time, legal, economic, and political factors can create implementation challenges for jurisdictions. For example, regulators in different jurisdictions may apply or interpret the standards differently. However, in some cases, inconsistent implementation of international financial standards could lead to certain activities migrating to less regulated jurisdictions (regulatory arbitrage) or adversely affect financial stability.
Each year, Congress appropriates new discretionary funds for DOD across a number of appropriation accounts with different purposes, including appropriations for operation and maintenance, RDT&E, procurement, and military construction, among others. Depending on the type of appropriation, DOD may have several accounts for each appropriation type in a given fiscal year. For example, each active and reserve military component as well as other DOD components has its own operation and maintenance accounts. Separately, there are individual RDT&E and procurement accounts for the military services, and a consolidated appropriation for other defense-wide programs. Operation and maintenance appropriations fund civilian pay, deployments, training, and maintenance, as well as a variety of other activities such as food, fuel, and utilities. RDT&E appropriations fund contractors and government installations to conduct research, development, testing, and evaluation for, among other things, equipment and weapon systems. Procurement appropriations generally fund the purchase of capital equipment such as ships, aircraft, ground vehicles, and other items after their development. Military construction appropriations fund construction, development, conversion, or extension carried out with respect to a military installation, whether to satisfy temporary or permanent requirements, subject to certain exceptions. DOD’s appropriations have different periods of availability for new obligations. For example, operation and maintenance funding is typically available for incurring new obligations for one fiscal year. RDT&E funding is typically available for two years. Procurement funding is typically available for three years, and military construction funding is typically available for obligation for five fiscal years. 
Subject to law and DOD financial management regulations, DOD has the authority to transfer funds between appropriation accounts and to reprogram funds within an appropriation account. For fiscal year 2013, the Consolidated and Further Continuing Appropriations Act, 2013 provided DOD with $7.5 billion in broad authority to transfer funds between appropriation accounts. Of this amount, $3.5 billion was special transfer authority for purposes related to overseas contingency operations and $4 billion was general transfer authority. These amounts were generally consistent with the amounts of broad transfer authority that Congress provided to DOD in fiscal years 2011 and 2012. In addition to its transfer authority and subject to certain limitations, DOD also has the authority to reprogram funds within an appropriation account. DOD guidance requires that it seek approval from the congressional defense committees to reprogram funds above certain thresholds and for other specific types of transfers or reprogrammings. This guidance also specifies circumstances in which the department may reprogram funds without prior congressional approval if the cumulative increase or decrease of funds is within established thresholds. The absence of legislation to reduce the federal budget deficit by at least $1.2 trillion triggered the sequestration process in section 251A of the Balanced Budget and Emergency Deficit Control Act of 1985, as amended, and the President ordered the sequestration of budgetary resources on March 1, 2013. Following this order, the Office of Management and Budget calculated the amount of DOD’s budget authority subject to sequestration across its appropriation accounts—known as the sequestrable base—and reduction amounts based on the annualized amount set out in the continuing resolution then in effect.
On March 26, 2013, the Consolidated and Further Continuing Appropriations Act, 2013 was enacted, providing different amounts of budget authority than were provided by the continuing resolution. For DOD, the amount of nonexempt discretionary resources subject to sequestration in fiscal year 2013 was about $527.7 billion. This amount reflected DOD’s fiscal year 2013 appropriations, which included base and overseas contingency operations funding plus any unobligated balances in multiyear accounts from prior fiscal years. Ultimately, these resources were reduced by about 7 percent, or $37.2 billion, as a result of sequestration (see fig. 1 below). The Balanced Budget and Emergency Deficit Control Act of 1985 (Pub. L. No. 99-177, as amended) required DOD to apply sequestration reductions evenly at the program, project, and activity level for each of its accounts. The definition of programs, projects, and activities differs based on the appropriation account. For operation and maintenance accounts, the program, project, and activity level was defined at the appropriation account level, such as the Operation and Maintenance, Navy and Operation and Maintenance, Army accounts. For RDT&E, procurement, and military construction accounts, the program, project, and activity level was defined as the most specific budget item identified in the Consolidated and Further Continuing Appropriations Act, 2013, classified annexes and explanatory statements to that act, or certain agency budget justification materials, and this level of detail would include individual weapon systems and military construction projects. Prior to and following the President’s March 2013 sequestration order, DOD took various actions to plan for and implement sequestration. Initially, in September 2012 the Deputy Secretary of Defense released a memorandum instructing components to continue spending at normal levels and not to take steps in anticipation of sequestration. 
By December 2012, DOD officials said they had begun actively planning for sequestration. On January 10, 2013, the Deputy Secretary of Defense issued an additional memorandum that identified departmental priorities and provided approved actions for DOD components to take in response to the uncertain budgetary environment. The memorandum directed DOD components to prioritize activities such as wartime operations and Wounded Warrior programs, and instructed components to take near- term actions, reversible if possible, such as imposing hiring freezes and curtailing travel, training, and conferences. DOD issued further implementation guidance in the months that followed. For example, on May 14, 2013, DOD notified managers to prepare to furlough most DOD civilians for up to 11 days, and on August 6, 2013, DOD reduced the number of civilian furlough days from 11 to 6. In addition, the military services issued guidance to their commands and components, in line with the department’s priorities. For example, both the Army and the Air Force issued memorandums in January 2013 outlining certain near-term actions for their commands to take to reduce expenses but stated that any actions must be reversible to minimize harmful effects on readiness. Figure 2 provides a detailed timeline of DOD, Office of Management and Budget, and legislative actions taken to plan for and implement the fiscal year 2013 sequestration. DOD faced a challenging budgetary environment prior to and during the implementation of sequestration in fiscal year 2013 stemming from a continuing resolution, difficulties in determining the total amount of the sequestration reduction, and higher-than-expected costs for overseas contingency operations. For example: DOD was operating under a continuing resolution from October 1, 2012 through March 26, 2013, when the full-year appropriation was enacted. 
The continuing resolution held funding near fiscal year 2012 levels, and limited DOD’s budget authority and flexibility to transfer funds. Thus, when the President ordered the sequestration of budgetary resources, DOD had already spent the first five months of fiscal year 2013 uncertain of its funding level. In our prior work, we found that DOD faced difficulties determining the total amount of its funding that would be subject to sequestration, and consequently the total size of the reduction, because the Office of Management and Budget’s initial estimates of sequestration reductions were based on an amount generated by annualizing the funding available under the continuing resolution in place at the time. These estimates were ultimately revised based on different budget amounts provided in the Consolidated and Further Continuing Appropriations Act, 2013. As a result, DOD did not know the final amount subject to sequestration until May 2013, which affected its ability to finalize decisions on allocating funding reductions. DOD also experienced higher-than-projected costs for overseas contingency operations in fiscal year 2013 due to changing assumptions, such as the drawdown in contract-related services in Afghanistan. In response to the President’s sequestration order and OMB’s implementing report, DOD’s nonexempt discretionary resources were reduced, including those within the operation and maintenance, procurement, RDT&E, and military construction appropriation accounts. DOD’s use of prior year unobligated balances to meet sequestration reductions varied by appropriation. Because the military services’ accounts received a majority of DOD’s funding relative to other DOD components, their accounts were reduced by the largest amount to achieve DOD’s sequestration reductions.
DOD’s nonexempt discretionary resources experienced sequestration reductions in fiscal year 2013, while the amount and percentage of reductions within accounts varied, based on our analysis of data from a June 2013 DOD report. Because of differences between the annualized continuing resolution amounts upon which the initial sequestration reductions were based and the enacted full-year appropriations, the size of the percentage reductions for nonexempt discretionary resources differed (see table 1). For example, among the appropriation accounts that we reviewed, RDT&E had the largest reduction as a percentage of its sequestrable base (8.1 percent), while military construction had the smallest (4.4 percent). The use of prior year unobligated balances to achieve DOD’s sequestration reductions varied by appropriation, ranging from 4.2 percent of the operation and maintenance reduction (about $860 million) to 42 percent of the procurement reduction (about $4.1 billion), based on our analysis of data from a June 2013 DOD report. The distribution of fiscal year 2013 sequestration reductions to nonexempt discretionary resources between prior year unobligated balances and fiscal year 2013 funds for each appropriation is shown in figure 4 below. The amount and availability of prior year unobligated balances within some appropriation accounts, such as RDT&E and procurement, is due to the multiyear nature of projects and programs funded by these appropriations. For example, as of March 2013, the total amount of available prior year unobligated balances was about $5 billion for RDT&E and about $36.7 billion for the procurement accounts. DOD’s use of prior year unobligated balances to help meet sequestration reductions varied by appropriation account type.
For example, DOD used about 13 percent, or about $633 million, of available prior year unobligated balances in the RDT&E accounts and about 11 percent, or about $4.1 billion, of available unobligated balances in the procurement accounts to achieve sequestration reductions. As a result of the fiscal year 2013 sequestration, the military services’ appropriation accounts were reduced by the largest share relative to other defense accounts because the military services’ accounts received a majority of DOD’s funding relative to other DOD components. Specifically, according to our analysis of data from a June 2013 DOD report and DOD’s operation and maintenance budget execution report for the fourth quarter of fiscal year 2013, the military services’ accounts were reduced by about $28.3 billion of the total DOD sequestration reduction of $37.2 billion (or 76 percent of the reduction). Among the appropriations, sequestration reductions within the military services’ accounts included reductions of about $14 billion for operation and maintenance (or about 69 percent of the reduction within DOD’s operation and maintenance accounts) and about $9.1 billion for procurement (or 93 percent of the reductions within the procurement accounts). Figure 5 illustrates the amount of the reduction that the military services’ and other defense accounts absorbed within each appropriation type due to the fiscal year 2013 sequestration. As discussed above, for the RDT&E, procurement, and military construction accounts, the military services applied the same percentage reduction within an account to each budget line item for their individual weapon systems or other acquisition programs and military construction projects. In contrast, within their operation and maintenance accounts, the military services had the flexibility to allocate sequestration reductions to specific functions and activities. 
As shown in figure 6, we found that the military services applied varying sequestration reductions across 11 categories funded by their operation and maintenance accounts. In particular, we found that four of these categories—operational tempo and training; base operating support; maintenance and weapon systems support; and operations support and transportation—were reduced by approximately $12 billion. This amount accounted for about 85 percent of the military services’ total operation and maintenance reduction. To implement sequestration in fiscal year 2013, DOD and the military services took steps to preserve certain key programs and functions, while making spending reductions to other lower-priority programs, projects, and functions. In interviews and documents we reviewed, DOD and service officials identified negative effects of sequestration across our case studies. Many of the identified effects were interrelated and varied among service components. DOD officials stated that some long-term effects of sequestration were difficult to quantify and assess. DOD and the military services provided guidance to their subordinate commands and components identifying near-term actions to help plan for and implement sequestration, and the components took a variety of actions in response to this guidance. For example, a Deputy Secretary of Defense January 2013 memorandum directed the components to minimize harmful effects on people, operations, and unit readiness when carrying out their spending reductions. To that end, the memorandum directed DOD components to fully protect, among other things, funding for wartime operations, and to protect, to the extent feasible, funding most directly associated with readiness and family programs. The memorandum also directed that the components take steps to minimize disruption and additional costs to acquisition programs and military construction projects.
In response to this memorandum, DOD components took steps to protect funding for those higher priorities. For example, based on direction to preserve military readiness and wartime operations, military service officials told us that they protected funding for training for units that were deploying or next to deploy in support of ongoing operations. To ensure child development centers—a type of family program—had enough care providers to maintain accreditation, DOD exempted personnel who worked at these centers from the 6-day administrative civilian furlough (Secretary of Defense Memorandum, Furloughs (May 14, 2013)). Further, service officials told us they did not cancel any weapon system or other acquisition program, nor did they cancel, defer, or reduce the scope of any major military construction projects, pursuant to verbal guidance from Office of the Secretary of Defense officials. In interviews and documents we reviewed, DOD and service officials identified some negative effects from these and other steps taken to implement fiscal year 2013 sequestration reductions. The effects identified within and across our case studies were generally related to:

Costs and spending: future financial costs related to contracts or activities and/or inefficient allocation of resources due to the timing or availability of funding.

Delayed time frames and cancelled activities: schedule delays, increases in the amount of time necessary to complete planned activities or functions, and/or cancelled activities.

Decreased availability of forces and equipment: reduced global presence and/or limited capabilities and capacities of both military personnel and equipment.

Within a given case study, some DOD components identified little to no effect overall, while other components reported a combination of effects related to costs and spending, time frames or cancellations, and the availability of forces.
Appendix I provides additional information about effects from sequestration that were identified by each of the service components across our five case studies. Some actions that DOD and the military services took to reduce expenses in fiscal year 2013 increased costs and spending in other areas of the budget during fiscal year 2013 or in a subsequent fiscal year. The following are examples of sequestration-related effects that DOD and service officials identified across our case studies: The Navy identified an overall increase in operational costs totaling about $7.6 million as a result of DOD’s decision to delay the deployment of the USS Harry S Truman Carrier Strike Group by 4 months. Navy officials explained that the additional cost was associated with maintaining readiness for the carrier strike group by continuing ship and air operations during the deployment delay. The Army reported deferring about $630 million of costs from fiscal year 2013 to fiscal year 2015 to perform maintenance on equipment returning from overseas contingency operations. This amount included maintenance funding for about 13,000 pieces of equipment, or about 9 percent of the approximately 142,000 equipment items the Army planned to repair in fiscal year 2013, among other things. Program officials with 4 of the 19 weapon systems we reviewed indicated that increased costs to particular aspects of their activities were due, at least in part, to the fiscal year 2013 sequestration. For example, Navy P-8A Poseidon officials reported that sequestration, in combination with congressional reductions, led to delays in establishing depot maintenance repair capabilities that are anticipated to result in cost savings. According to the officials, the delay in establishing these depot capabilities will defer such cost savings, resulting in a cumulative increase in overall life cycle costs of about $191 million, of which about $56.7 million was directly attributed to sequestration. 
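The proportions implied by the Army and P-8A figures above can be recomputed directly; this is an illustrative sketch using the rounded amounts quoted in the report, and the roughly 30 percent P-8A share is derived here rather than stated in the text:

```python
# Illustrative recomputation of figures quoted in the report (rounded values).

# Army: equipment maintenance deferred from fiscal year 2013 to fiscal year 2015.
deferred_items, planned_items = 13_000, 142_000
print(round(deferred_items / planned_items * 100))  # ~9 percent of planned repairs deferred

# Navy P-8A Poseidon: share of the ~$191M life cycle cost increase (in $ millions)
# attributed directly to sequestration (a derived share, not stated in the report).
print(round(56.7 / 191 * 100))  # ~30 percent
```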
Actions that DOD and the military services took to reduce spending in fiscal year 2013 resulted in some cancelled activities, schedule delays in beginning activities or projects, or increases in the amount of time necessary to complete them. DOD officials reported the actions also had longer-term effects on weapon systems and plans to restore military readiness in some cases. The following are examples of sequestration-related effects identified by DOD officials across our case studies: All four of the military services cancelled or reduced participation in training exercises in fiscal year 2013. For example, the Army ultimately cancelled a total of 7 of 14 planned Combat Training Center exercises in fiscal year 2013, including training for 5 active duty and 2 Army National Guard brigade combat teams. Similarly, the Air Force cancelled or reduced participation in 32 of 48 of its large-scale planned exercises, including two of its key multinational training events. According to service officials, these lost opportunities limited the number of trained individuals and units and contributed to an expected delay in achieving the goal of restoring readiness to forces that have been heavily deployed supporting overseas contingency operations. Program officials from 15 of the 19 weapon systems we reviewed reported experiencing delays, in part, due to the fiscal year 2013 sequestration. For example, according to officials from the Army’s AH-64E Apache helicopter program office, the combined effects of the fiscal year 2013 sequestration and the continuing resolution affected the timeline for acquisition decisions for the AH-64E Apache in fiscal years 2013 and 2014, which resulted in contract changes and delays to time frames for evaluating and negotiating the system’s contract.
DOD and service officials stated that all five DOD military construction accounts with sequestration reductions reported delays in awarding contracts for construction projects appropriated in fiscal year 2013. For example, the Navy did not award contracts for 33 out of 54 construction projects funded in fiscal year 2013. In contrast, the Navy did not award contracts for 17 out of 57 projects funded during fiscal year 2012. Project management officials from the service components stated that fewer projects were awarded than planned in fiscal year 2013—which could lead to corresponding delays in project completion and increased costs—but were unable to quantify the longer-term effects on time frames or costs. Some actions the services took to reduce spending in fiscal year 2013 decreased the availability of forces and equipment, reduced global U.S. military presence, and increased risk by limiting some service capabilities and capacity for responding to contingencies or other emergencies. The following are examples of sequestration-related effects identified by DOD officials across our case studies: The Navy cancelled or delayed some planned ship deployments in fiscal year 2013, which resulted in a 10 percent decrease in its deployed forces worldwide. For example, due to spending reductions, the Navy cancelled the deployments of the USNS Comfort and its supporting medical units, the USS Kauffman, and a maritime civil affairs team to the U.S. Southern Command area of responsibility. The Navy also postponed other deployments, such as delaying by 4 months the deployment of the USS Harry S Truman Carrier Strike Group to the U.S. Central Command area of responsibility. This delay reduced the Navy’s presence in the region to one carrier strike group. Naval Air Systems Command reduced funding to perform maintenance on and recertify about 800 weapons and weapon components—about 50 percent of those planned at the beginning of fiscal year 2013. 
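The Navy military construction contract award rates described above can be compared directly; this sketch simply restates the project counts from the report as award percentages:

```python
# Share of funded Navy military construction projects for which contracts were
# actually awarded, by fiscal year (project counts as quoted in the report).

def award_rate(unawarded, funded):
    """Percentage of funded projects awarded, given the count not awarded."""
    return round((funded - unawarded) / funded * 100)

print(award_rate(33, 54))  # fiscal year 2013: 21 of 54 projects awarded (~39 percent)
print(award_rate(17, 57))  # fiscal year 2012: 40 of 57 projects awarded (~70 percent)
```

The drop from roughly 70 percent to roughly 39 percent of funded projects awarded illustrates the delay in contract awards that officials described but could not quantify in longer-term cost or schedule terms.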
According to Navy officials, deferring maintenance on these weapons and weapon components contributed to shortfalls in the availability of some weapons and necessitated the transfer of weapons across ships to conduct planned training and operations. Five of the eight active component Air Force commands we interviewed told us that some of their installations experienced reduced levels of fire and emergency response personnel or related equipment, fewer security force personnel and vehicles than needed, or both. Air Force officials said the shortfalls decreased their response capability for attending to critical incidents like aircraft fires or fuel spills, and to the air base defense program. However, officials were unable to quantify the specific number of personnel shortfalls or risk based on decisions to reduce funding for these base services. Program officials for 9 of 19 weapon systems we reviewed reported reduced or deferred system development or procurement efforts as a result of fiscal year 2013 sequestration reductions, which in turn delayed the release of these enhanced systems to the warfighter. For example, Army MQ-1C Gray Eagle unmanned aircraft system program officials told us that a reduction in procurement funds due to sequestration resulted in deferrals and delays for procuring a number of upgrades to the system, including radio upgrades, new shipping containers, and an engine lifetime extension. These deferrals could, in turn, delay the eventual fielding of the upgraded aircraft to the warfighter, since they increase the risk that the system may not receive necessary certifications that it is safe and suitable for use. Our analysis of DOD- and service-identified actions found that many of the reported sequestration-related effects were interdependent and overlapped. For example, delays in scheduled time frames often led to an additional cost or a spending increase in future fiscal years. 
Similarly, both increased costs and delayed time frames were also related to the reduced availability of forces and equipment in some cases. Based on interviews with service officials and our analysis of related documentation, we found some instances of interrelated effects across our case studies. For example: Due to spending reductions on some base operating support activities, the Navy limited its port operations to normal business hours. As a result, one Navy command estimated that it cost an additional $135,000 over its budgeted operating expenses for three ships to delay their arrival to port and use auxiliary steam because they could not connect to shore power outside of the restricted port hours. Officials from the Navy’s CH-53K King Stallion helicopter program office told us that sequestration reductions contributed to a 2-month delay to the program’s schedule, including the start of low-rate initial production, where small quantities of the system are produced for testing and evaluation before producing greater quantities for fielding. These officials told us the delays affected acquisition milestones and the fielding of a more capable helicopter, and estimated that sustaining the program for an additional 2 months would increase estimated program costs by about $20 million to $30 million. Within our case studies, we also found that sequestration effects varied in type among different services and their components. For a given case study area, some components identified little to no effect overall, while other components reported a combination of effects related to costs and spending, time frames or cancelled activities, and the availability of forces. For example, some service command officials we interviewed told us that they were not aware of any significant negative effects on base operating support within their command or component with regard to the availability of personnel or equipment.
While some Air Force commands did report negative effects due to sequestration, as noted earlier, four other Air Force commands reported to us that they were able to accomplish their missions in fiscal year 2013 without any critical disruptions to the delivery of base support services. Also, officials from the Marine Corps and Marine Corps Reserve told us that there were no significant effects on base operating support due to sequestration. Based on our review of service documentation and interviews with service officials, sequestration reductions resulted in some effects that are difficult to quantify and assess and are therefore undetermined at this time. These types of effects include, among others, a decline in morale, a reduced ability to hire and recruit a high-caliber civilian workforce, and a diminished ability to build and maintain partner nation trust. In addition, our prior work found that, according to service officials, the 6-day civilian furlough during fiscal year 2013 negatively affected morale among civilian employees as well as service members. Officials from three of the military services also told us they believe the fiscal year 2013 sequestration has continued to affect their ability to recruit civilian and military personnel, but the effects on recruitment were undetermined at the time of our review and may not be quantifiable. For example, Navy officials told us they believe that the cancellation of fleet weeks and 27 of 30 Blue Angel squadron flight demonstrations in fiscal year 2013 could affect their future recruitment rates because those events are critical to their recruitment strategy. Further, officials from the Air Force and Navy said that reducing and cancelling exercises and deployments can negatively affect their ability to build and maintain partner nation trust, which is difficult to quantify.
For example, Pacific Air Forces documentation shows that the command reduced or cancelled its participation across several bilateral and multilateral training exercises. Officials said this likely affected their ability to build trust and partner capacity in the region and, moreover, could give the appearance to other partner nations that the United States is an unreliable or uncommitted partner. Pacific Air Forces officials said that they would consider making different choices should sequestration occur again, because of concerns about the United States appearing unreliable or uncommitted to its partners, and the effect that lost trust could have on future U.S. participation in the region. Similarly, Pacific Fleet officials said that reductions to fuel as a result of sequestration limited participation in exercises and foreign country port visits in Seventh Fleet, which is assigned to support U.S. Pacific Command, and that cancelled deployments limited participation in support of partnership events in Fourth Fleet, which is assigned to U.S. Southern Command. Pacific Fleet officials said that these cancelled or reduced commitments would affect the Navy’s ability to engage and build relationships with partner nations. The fiscal year 2013 sequestration resulted in other effects that may not be known for years, such as the future costs associated with facilities repair and equipment maintenance projects that were deferred during fiscal year 2013. For example, in fiscal year 2013 the Army reduced funding for facilities sustainment projects, including preventative maintenance and repairs, by nearly $1 billion, which represented about 40 percent of its fiscal year 2013 base budget request. Officials told us there may be an increased future cost to restore facilities to standards, but were unable to determine the additional cost.
Likewise, Navy officials stated that the deferral of many non-emergency maintenance and sustainment activities may eventually diminish facility life cycles and lead to higher future costs for restoration or demolition, but these officials were unable to determine the increased costs. DOD and the military services generally relied on previously existing processes and funding flexibilities, such as the ability to reprogram and transfer funds, to mitigate the effects of the fiscal year 2013 sequestration. Our review identified some limited efforts to document decisions or lessons learned from implementing the fiscal year 2013 sequestration, but DOD and the military services did not comprehensively document, assess, or share best practices or lessons learned from their experiences. DOD did not receive specific additional authorities to help manage fiscal year 2013 sequestration reductions, but according to DOD and military service officials, they relied on guidance and previously existing processes and flexibilities for managing reduced resources to help mitigate the effects of sequestration. Guidance provided before and after the President’s sequestration order emphasized that federal agencies should identify appropriate steps to manage budgetary uncertainty while minimizing any adverse effects to agency missions. For example, an Office of Management and Budget memorandum on planning for budgetary uncertainty in fiscal year 2013 directed federal agencies to use any available flexibility to reduce operational risks and minimize effects on the agency’s core mission. Similarly, a DOD memorandum on handling budgetary uncertainty authorized its components to begin implementing near-term actions, reversible if possible, to mitigate the risks caused by the continuing resolution in place at the time, and potential sequestration. 
In response to this guidance, DOD and the military services took various actions to mitigate the effects of sequestration, such as establishing processes to identify priorities and evaluate alternatives for spending reductions. In some cases, the military services leveraged existing approaches, such as ranking programs and functions, to manage sequestration reductions within their commands and program offices. For example, according to Army budget officials, the Army utilized a process referred to as a sequestration “Rehearsal of Concept” drill to identify priorities Army-wide and implement reductions. According to Army officials, a Rehearsal of Concept drill is generally used to inform operational decisions, but this drill was used for the fiscal year 2013 sequestration to involve relevant stakeholders and establish priorities across the range of programs and activities that would be affected by sequestration reductions. Army Forces Command officials informed us that in addition to the Rehearsal of Concept drill, the Army also relied on a process referred to as a “Focus Area Review Group” to manage sequestration reductions in an effort to maintain readiness and minimize risks to the Army’s forces and missions. Service officials also noted that broadening some of their existing processes eliminated stovepipes in planning and allowed them to integrate requirements and plan command- or service-wide rather than by individual functional area. For example, to implement the fiscal year 2013 sequestration reductions at the major command level, Air Combat Command officials adapted their existing planning process by grouping all of the command’s functions and activities into three categories based on their relative funding priority.
Officials told us that considering requirements command-wide rather than by directorate or functional area, as they had done prior to fiscal year 2013, gave them better visibility over the interrelationship of funding and allowed them to make more informed decisions about what functions and services were needed to maintain their commitment to readiness. For example, command officials said this allowed them to consider and balance the need for base operating support funding for utilities and building leases against other priorities such as their flying hour program. Using these processes to prioritize funding, the services were able to mitigate some effects to those activities deemed the most critical based on DOD and service guidance, while reducing funding to lower priority activities. Within the areas we selected for more in-depth review, we found that the services prioritized areas, such as training and equipment maintenance in support of deployed and next-to-deploy forces and base services like family and warfighter support programs. For example, we found the military services prioritized funding for base support services over funding for facilities sustainment, restoration, and modernization projects because base support services fund essential functions like family programs, civilian salaries, and utilities. According to our analysis of DOD’s fiscal year 2013 budget data, the services reduced facilities sustainment, restoration, and modernization funds by about $2.8 billion, or almost 27 percent of the enacted funding amount, which was almost three times as much as the approximately $1 billion, or 4 percent, reduction to base operating support services. Furthermore, the military services prioritized funding to support training and equipment maintenance for currently deployed and next-to-deploy forces, while cancelling or curtailing training and maintenance for non-deploying units.
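The enacted totals implied by the reduction percentages above can be recovered arithmetically; this is an illustrative sketch, and the enacted totals below are derived from the report's figures rather than stated directly in the text:

```python
# Back-of-the-envelope check of the enacted amounts implied by the reductions
# above (dollar amounts in billions; derived, not stated, in the report).
fsrm_cut, fsrm_pct = 2.8, 0.27  # facilities sustainment, restoration, and modernization
bos_cut, bos_pct = 1.0, 0.04    # base operating support services

print(round(fsrm_cut / fsrm_pct, 1))  # implied FSRM enacted total: ~$10.4B
print(round(bos_cut / bos_pct, 1))    # implied base operating support enacted total: ~$25.0B
print(round(fsrm_cut / bos_cut, 1))   # FSRM cut was ~2.8x the base operating support cut
```

The roughly 2.8-to-1 ratio matches the report's characterization of the facilities reduction as "almost three times" the base operating support reduction.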
For example, all four of the military services reported being able to fulfill combatant commanders’ requests for forces in fiscal year 2013, but said that reductions in training for non-deploying units affected the readiness of these forces. DOD also used existing funding flexibilities to manage sequestration reductions and other budgetary constraints in fiscal year 2013, such as the ability to establish funding priorities for certain accounts, use prior year unobligated balances to achieve some portion of the sequestration reductions, and use reprogramming and transfer authorities to realign funds between and within accounts. For example, with regard to the operation and maintenance accounts, DOD officials said they had more flexibility in allocating sequestration spending reductions than they did for other accounts. Specifically, the program, project, and activity for operation and maintenance was defined at the overall account level. According to officials from DOD and some military services, this provided the flexibility to establish funding priorities for specific activities within accounts and reduce funding for lower priority activities. This is in contrast to the program, project, and activity definitions for the RDT&E, procurement, and military construction appropriation accounts. For these accounts, DOD and the services had to apply reductions evenly across each budget line item for their individual weapon systems or other acquisition programs and military construction projects. As discussed earlier, DOD and the military services also reported using prior year unobligated balances to help meet fiscal year 2013 sequestration reductions within their RDT&E, procurement, and military construction accounts. According to some DOD and service officials, the use of unobligated balances within the RDT&E and procurement accounts helped them offset some sequestration reductions and minimize the effect those reductions may have otherwise had. 
For example, according to Air Force officials, the use of prior year unobligated balances, among other factors, allowed them to protect their top weapon systems and other acquisition programs and avoid some schedule delays. DOD also used its transfer and reprogramming authorities to help mitigate the effects of sequestration and other budgetary constraints in fiscal year 2013. DOD officials said that transfer and reprogramming flexibilities are used annually to address funding priorities. However, the use of transfers and reprogrammings helped them mitigate reduced resources resulting from sequestration as well as cover expenses related to overseas contingency operations shortfalls and emergent operational requirements, among other factors. Our review of DOD data found that the department used most of its available transfer authority and realigned most of these funds into the operation and maintenance accounts from other types of accounts. Specifically, according to data from Office of the Under Secretary of Defense (Comptroller) officials, of the $7.5 billion in transfer authority available to DOD for fiscal year 2013, DOD utilized $6.8 billion, or about 91 percent of the authority in total. Using these authorities, DOD had the flexibility to move funds between appropriations and in doing so provided additional resources to the operation and maintenance accounts. Our analysis also found that DOD transferred about $5.7 billion into the operation and maintenance accounts from other appropriations, primarily from the military personnel and procurement accounts. The use of transfers and reprogrammings allowed the services to mitigate or reverse some actions that were taken initially after the March 1, 2013, sequestration order. For example, in July 2013, the Air Force resumed flight operations for 17 active duty combat units that had initially ceased flying in April 2013.
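The transfer authority utilization described above can be recomputed from the quoted amounts; this sketch uses the rounded figures in the report, and the roughly 84 percent share moved into operation and maintenance is derived here rather than stated in the text:

```python
# Transfer authority utilization implied by the report's figures
# (dollar amounts in billions).
available, used, into_om = 7.5, 6.8, 5.7

print(round(used / available * 100))  # ~91 percent of available transfer authority used
print(round(into_om / used * 100))    # ~84 percent of transferred funds moved into O&M
```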
The Navy also restored planned maintenance for eight surface ships that had been initially deferred. Similarly, DOD used transfer or reprogramming authorities to move funds from prior years and cancelled projects unrelated to sequestration, to offset the $821 million sequestration reduction within the military construction accounts. DOD and service officials stated that as a result of this flexibility, no construction projects were delayed, reduced in scope, or cancelled as a result of sequestration. Notwithstanding the flexibility to transfer and reprogram funds, some actions taken in response to sequestration could not be reversed, and some of the programs we reviewed within the RDT&E and procurement accounts also had their funding further reduced by transfer and reprogramming actions. For example, Army training officials stated that capacity constraints at their Combat Training Centers and the timing of funds reprogrammed later in the fiscal year affected the Army’s ability to reschedule cancelled Combat Training Center rotations. In addition, we found that several acquisition programs for weapon systems included within our RDT&E and procurement case study had their funding reduced as a result of transfers or reprogrammings beyond the sequestration reductions, including the AH-64E Apache helicopter and F-15 and F-22 aircraft. According to Air Force F-15 officials, about $24 million in RDT&E and procurement funding was transferred to support critical readiness shortfalls within the Air Force’s operation and maintenance account. Consistent with GAO’s March 2014 recommendation, the Office of Management and Budget updated its guidance to federal agencies in November 2014 to include a section specific to sequestration. This guidance instructs federal agencies to record decisions about how sequestration is implemented to maintain consistency from year to year, inform efforts to plan for sequestration in future years, and build institutional knowledge.
Although the Office of Management and Budget’s guidance was revised after the end of fiscal year 2013 and does not explicitly require agencies to record decisions regarding the fiscal year 2013 sequestration, federal internal control standards also highlight the importance of documenting significant events in a timely manner. Specifically, these standards state that agencies should identify, record, and distribute pertinent information to the right people in sufficient detail, in the right form, and at the appropriate time to enable them to carry out their duties and responsibilities and ensure that communications are relevant, reliable, and timely. During our review, we found that DOD and the services had taken some steps to document decisions and actions taken in response to reduced resources in fiscal year 2013. For example, according to officials from the Office of the Under Secretary of Defense (Comptroller), their office had documented decisions on sequestration reductions at the program, project, and activity level with the release of their June 2013 report, DOD Report on the Joint Committee Sequestration for Fiscal Year 2013. These officials also told us that throughout the implementation of sequestration in fiscal year 2013, their office collected information from the military services on programs, projects, and activities that were cancelled due to sequestration and reported this information to the Office of Management and Budget. In addition, some officials within the services’ budget offices confirmed that sequestration reductions at the program, project, and activity level had been documented in their financial management systems. Officials from all of the services also informally identified some lessons learned from their experiences implementing sequestration. 
Officials told us that prior to sequestration, they had not considered or were not fully aware of the interdependency of certain programs and activities, the order in which certain functions would need to have funding restored to accomplish intended results, or the potential for unintended consequences as a result of some funding decisions. For example, Army officials told us that shortfalls in funding for training ranges and facilities affected the Army’s ability to conduct training for some units whose resources had been reduced due to sequestration. Army Forces Command officials explained that unit readiness continued to decline through fiscal year 2014, even though funding had been restored for its units, until ammunition distribution, maintenance, transportation, and training range services were also restored. Further, Air Force officials identified the need to balance reductions between operational and individual training requirements, and noted that both preserving funding for individual training and education requirements and maintaining a commitment to provide ready forces for operations are important to the long-term health of the force. Similarly, officials from the Navy said that some actions taken, such as not exempting all shipyard civilians from furloughs or not performing preventative maintenance, had unintended consequences for maintenance schedules or resulted in increased costs overall. Specifically, Navy officials told us a decision to defer preventative maintenance repairs to a damaged landing ramp later resulted in an approximately $600,000 cost to repair a landing craft when loose concrete damaged its engine. While officials said it is difficult to know which decisions may lead to higher costs, they noted the importance of understanding the interrelationship between funding and potential consequences from funding decisions.
Further, according to a Marine Corps budget official, planning for sequestration in fiscal year 2013 allowed the Marine Corps to better understand the potential effects that reductions would have across its commands and within functional areas, which informed its budgetary planning in fiscal years 2014 and 2015. Our review found that the Joint Staff and two of the services had undertaken initiatives to document lessons learned or best practices from implementing sequestration. Specifically:

- Officials from the Joint Staff Manpower and Personnel directorate said that in June 2013 they gathered effects and lessons learned specific to DOD’s civilian furlough in fiscal year 2013. Officials told us that these efforts were not formally documented in a report or the Joint Staff’s Lessons Learned Information System, but the lessons and effects identified would help to inform decision-making should another civilian furlough occur.

- Similarly, in November 2013, the Navy Warfare Development Command completed a review of the effects and lessons learned stemming from the civilian furlough. This review identified costs, savings, and effects associated with furloughing civilians in fiscal year 2013, as well as lessons learned and recommendations should civilian furloughs occur again. For example, the review found that almost half of the savings from furloughing Fleet Forces Command and Pacific Fleet civilians was lost due to costs from schedule delays or lost productivity, and recommended that the Navy fully consider the interdependencies between reductions in the civilian workforce and the Navy’s capacity to meet fleet requirements should a furlough occur again.
At the time of our review, the Air Staff Lessons Learned directorate was finalizing its review of information gathered from its active and reserve components on the Air Force’s implementation of sequestration, its effect on readiness and infrastructure, and any lessons learned that could inform future decision-making should sequestration occur again. Air Force officials told us that they plan to release a final report identifying their observations and lessons learned at the end of May 2015, which they expect to share across the Air Force and on the Joint Staff’s Lessons Learned Information System. The Joint Staff’s, Navy’s, and Air Force’s initiatives represent positive steps toward documenting lessons learned and best practices from implementing sequestration. However, the information produced through these and other DOD efforts is limited in scope and purpose, some efforts are still ongoing, and the results have not been widely shared across the services. For example, the scope of the Joint Staff’s and Navy’s reviews was limited to lessons learned from the civilian furlough, and the Air Force’s initiative is still ongoing. As a result, it remains unclear whether or how applicable the services’ lessons learned will be in informing their future budgetary planning and decision making. Moreover, officials from the Army, Marine Corps, and Office of the Under Secretary of Defense (Comptroller) told us they were unaware of the Joint Staff, Navy, and Air Force initiatives, suggesting that some information on lessons learned efforts is not being disseminated across DOD and the services. There are existing processes in place to share information on lessons learned across DOD and the services. For example, three of the services’ lessons learned offices told us that in addition to maintaining their own lessons learned databases, the Joint Staff’s Joint Lessons Learned Information System can be used to document and share lessons learned identified across the services.
Further, officials from Navy Warfare Development Command told us that they learned about the Air Force’s efforts to document sequestration-related lessons learned through a quarterly Joint Lessons Learned Program review. According to officials from the Navy Warfare Development Command, the services’ lessons learned offices participate in these quarterly reviews, which are led by the Joint Staff’s Lessons Learned directorate and can be used to share lessons learned and best practices across the services. Although DOD and some services have independently taken some steps to document decisions and lessons learned from sequestration, they did not establish requirements for their commands and components to document or assess information on best practices or lessons learned, as identified by the Office of Management and Budget’s guidance and federal internal control standards. According to DOD and some service officials, as of February 2015, they were unaware of or had not taken steps to comply with the Office of Management and Budget’s guidance to document the decisions concerning implementation of sequestration. In February 2015, officials from the Office of the Under Secretary of Defense (Comptroller) told us that, in their opinion, documenting decisions on sequestration reductions at the program, project, and activity level within their financial management systems effectively complied with the Office of Management and Budget’s guidance and that they did not plan to take any additional steps to document lessons learned in response to the guidance. Officials with the Office of the Under Secretary of Defense (Comptroller) and some of the military services also told us they did not see the value in documenting or assessing past decisions or gathering such information beyond the efforts they have already made. 
For example, officials with the Office of the Under Secretary of Defense (Comptroller) explained that the weekly reports provided to the Office of Management and Budget on actions taken in response to sequestration in fiscal year 2013 have not been used to provide a comprehensive assessment of sequestration’s effects. According to these officials, they consider each sequestration event to be unique, and said that they would issue subsequent guidance to the components on how to implement any future instance of sequestration at that time, should it occur. However, these officials did acknowledge that consolidating policy memorandums and documentation regarding management actions taken during the fiscal year 2013 sequestration, so that these decisions are easily accessible across the department, might be beneficial to planning for a possible future sequestration. DOD’s efforts to document sequestration decisions within its financial accounting systems could provide some visibility over how the department allocated sequestration reductions to inform future planning efforts. Yet these decisions do not reflect the broader principles and practices the department used to manage sequestration reductions in fiscal year 2013 and do not account for any lessons learned during the implementation of sequestration. Without documenting, assessing, and sharing DOD’s and the services’ best practices and lessons learned from implementing sequestration, including, for example, strategies for evaluating the interdependence of various funding sources subject to budgetary reductions, DOD is missing an opportunity to gain valuable institutional knowledge that would help facilitate future decision making about budgetary reductions should sequestration occur again.
DOD received relief in fiscal years 2014 and 2015 from the spending caps established by the Budget Control Act of 2011, but under current law, DOD could experience sequestration again in future fiscal years, depending on the appropriations enacted for fiscal year 2016 and beyond. In fiscal year 2013, DOD was able to reduce the effects of sequestration on programs that the department and the military services determined to be high priorities. However, the reductions that did occur had a variety of effects, including cancelled training exercises and delays in performing equipment maintenance, contracting for military construction projects, and developing and procuring weapon systems, among others, as well as longer-term effects that may be hard to determine. Given that some budget flexibilities the department used in 2013 to mitigate the size of reductions may be unavailable in future years—for example because of a decrease in available prior year funds for transfer or reprogramming—it is all the more important that DOD be able to use the institutional knowledge it gained when implementing sequestration in fiscal year 2013. In light of this possibility and other ongoing budget uncertainties, the department could benefit from a close examination of its experience with sequestration in fiscal year 2013. Some decision makers tasked with implementing the 2013 sequestration gained valuable insights into how to manage budget reductions, for example by gaining visibility over the interrelations between various budget accounts and the effect of the reductions to some accounts on carrying out activities funded by other accounts. However, without documenting, assessing, and sharing information on lessons learned and best practices in implementing the 2013 sequestration reductions across the department and leveraging existing mechanisms to share this information, decision makers at the program, DOD component, and department-wide levels may not benefit from such insights. 
To better enable DOD and the services to achieve informed decision making in future times of budgetary uncertainty, the Secretary of Defense should direct the Under Secretary of Defense (Comptroller) and the secretaries of the military departments to take the following two actions:

- Document and assess lessons learned and best practices from implementing sequestration in fiscal year 2013. These lessons could include such practices as evaluating the interdependence of different types of funding sources to better understand how those can be synchronized to optimize capacity and minimize disruptions to training and readiness in the event of future budgetary constraints; and

- Leverage existing information-sharing mechanisms to make these lessons learned and best practices available to decision makers within the services and across the department.

We provided a draft of this report to DOD for review and comment. In its written comments, DOD concurred with our two recommendations. Specifically, DOD stated that the Office of the Under Secretary of Defense (Comptroller) will work with the military services to develop a repository of lessons learned and best practices gathered from implementing the fiscal year 2013 sequestration. DOD also stated this office will develop a Web portal, accessible from across the department, to house the lessons learned and best practices. DOD stated the target date for completion of both efforts is December 2015. DOD’s comments are reprinted in their entirety in appendix IV. DOD also provided technical comments, which we incorporated into this report where appropriate. We are sending copies of this report to appropriate congressional committees; the Secretary of Defense; the Under Secretary of Defense (Comptroller); the Secretaries of the Army, Navy, and Air Force; and the Commandant of the Marine Corps. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov.
If you or your staff have any questions about this report, please contact Johana R. Ayers at (202) 512-5741 or [email protected], or Michael J. Sullivan at (202) 512-4841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V.

Our review included the sequestration reductions applied by the Department of Defense (DOD) in fiscal year 2013 to its base and overseas contingency operation funding within the following nonexempt appropriation accounts: operation and maintenance; research, development, test and evaluation (RDT&E); procurement; and military construction. This appendix contains more detailed information for each of the five nongeneralizable case studies included in our review. We selected these case studies by identifying five types of expenses or investments to represent each type of nonexempt appropriation, to include:

1. operation and maintenance accounts: military service components’ operational tempo and training;
2. operation and maintenance accounts: military service components’ maintenance and weapon systems support;
3. operation and maintenance accounts: military service components’
4. RDT&E and procurement accounts: a selection of defense-wide and military services’ acquisition programs for weapon systems; and
5. military construction accounts: defense-wide and military services’ major military construction projects.

More detailed information on our approach to selecting the case studies can be found in appendix II. In this appendix, for each case study, we provide a summary that includes information on the following elements: Overview: A description of the types of programs, projects, and/or activities funded within the case study and the corresponding budgetary resources for fiscal year 2013.
Allocation of sequestration reductions: A summary of how sequestration reductions were allocated within the case study area, including differences in how reductions were applied within DOD components. Sequestration effects: A description of sequestration-related effects within each case study area, generally grouped by categories of costs and spending; delayed time frames or cancelled activities; availability of forces and equipment; and, where appropriate, effects that are undetermined or difficult to quantify. Mitigation efforts: A summary of the flexibilities applied and actions taken by DOD components to mitigate the effects of sequestration reductions within the various case studies, including such things as the use of prior year unobligated balances, transfer and reprogramming authorities, and other case-study- or component-specific initiatives. The case study findings presented in this appendix provide illustrative examples of fiscal year 2013 sequestration effects and mitigation strategies across the department. Whenever possible, we corroborated testimonial evidence from interviews with DOD officials with data or other documentary evidence regarding the effects (including expected future effects) of sequestration on programs, projects, and activities within the case study areas. However, data were unavailable to support some of the anticipated future effects that officials described to us, such as the degree of deterioration of infrastructure from reduced sustainment funding. While the findings of the five case studies cannot be generalized to all DOD programs, projects, and activities, they reflect a wide range of perspectives across the department.
In implementing sequestration, the service components reduced fiscal year 2013 funding for the operational tempo and training category by about $2.7 billion, representing about 5 percent of the service components’ enacted amount for the category according to our analysis of DOD’s budget execution data (see fig. 8). The active and reserve components allocated varying amounts and percentages of sequestration reductions within the operational tempo and training category, as shown in figure 9. The active components of the Army, Navy, and Air Force applied larger fiscal year 2013 sequestration reduction amounts—in dollar terms—to the operational tempo and training category than did these services’ reserve components, which reflects the larger size of the active components’ enacted amounts relative to those of the reserve components. The active components’ reduction amounts ranged from $73 million for the Marine Corps to $783 million for the Army. By comparison, reserve components’ reductions in the operational tempo and training category ranged from $8 million for the Marine Corps Reserve to $414 million for the Air National Guard. However, on average, the active components’ reduction to the category as a percentage of the enacted amount (about 3 percent) was smaller than that of the reserve components’ reduction (about 9 percent). Based on our review of DOD’s budget execution data, service training documents and data, and interviews with training officials, we found that the service components took steps during the fiscal year 2013 sequestration to protect resources for certain priorities, such as deployed units or those preparing to deploy for ongoing operations, in response to DOD’s memorandum. As a result, officials from all four military services reported being able to fulfill combatant commanders’ requests for forces in fiscal year 2013. 
To preserve funding for these priorities, officials from service component headquarters and commands reported reducing spending in lower-priority areas, such as training and exercises for units not scheduled to deploy. Officials from some of the service components identified effects resulting from sequestration reductions. However, the types of effects identified varied by component, with some components indicating that they did not experience significant negative effects. For example, Marine Corps Forces Command officials told us the command avoided cancelling deployments or major exercises and reported no readiness effects as a result of sequestration. As a result of actions taken to reduce spending in lower-priority areas in fiscal year 2013, service component officials identified negative effects that, based on our analysis, relate to increased costs in fiscal year 2013 or a subsequent fiscal year; cancelled or reduced training activities and delayed time frames to restore readiness; and decreased availability of forces or equipment to support operations and training. Some of the effects identified were interrelated, while others were difficult to quantify. In some cases, reduced spending for certain activities in fiscal year 2013 led to increased costs for planned activities. For example, according to officials from the Navy’s Fleet Forces Command, sequestration reductions contributed to their decision to delay the deployment of the USS Harry S Truman Carrier Strike Group by four months, which resulted in an approximately $7.6 million increase in the carrier strike group’s overall operational cost. These officials told us that the additional cost was the result of maintaining the carrier strike group at a deployable readiness level during the four-month delay, which required additional spending on ship and air operations.
Reduced spending for training in fiscal year 2013 also led to increases in planned spending in a subsequent fiscal year. For example, documents from the Air Force’s Air Combat Command show that the command ultimately reduced spending on its flying hour program by about $315 million in fiscal year 2013, which led to a decrease in the combat readiness of some units. To restore readiness for units affected by sequestration, among other factors, Air Combat Command officials stated the Air Force has increased spending for the flying hour program more than previously planned for fiscal years 2014 through 2018. According to Air Combat Command documents, the command initially reduced its flying hour program by about $592 million, or 18 percent, following the March 1, 2013 sequestration order. In May 2013, Headquarters Air Force resumed flying operations for some units at an estimated cost of $69 million by reducing funding and increasing risk in other areas of the budget. In addition, in July 2013, Headquarters Air Force reprogrammed about $200 million, which allowed Air Combat Command to resume flying operations for all units that had ceased flying earlier in the fiscal year. Cancelled training also had longer-term effects. For example, Army headquarters officials said that cancelling combat training center rotations in fiscal year 2013 further limited professional development opportunities for commanders, whose combat training center rotations have been focused on mission-specific training, such as counter-insurgency skills, since 2001. Further, Army headquarters officials explained that cancelled combat training center rotations may also have long-term consequences for units’ training and leadership expertise in certain skills.
For example, officials noted that officers and noncommissioned officers in senior command positions who have received limited training across the full range of operations may not have sufficient expertise and experience to teach these skills to the junior officers and noncommissioned officers they are expected to lead, adding to a gap in expertise for some service personnel. In addition, according to Air Combat Command documents and officials, the Air Force stood down 17 of their 62 operational squadrons for 3 months in fiscal year 2013, and reduced flying hours for 10 other squadrons for a period of 1 to 3 months each. Air Force officials told us that the stand-down of the squadrons and reduced flying hours created several effects. For example, Air Combat Command officials said that pilots experienced deterioration in the proficiency of critical skills and combat readiness that needed to be restored once the squadrons resumed flying operations. Specifically, as of July 2013, an Air Combat Command document reported a 13 percent decrease in reported combat readiness due to reduced flying hours. In addition, pilots were unable to execute more advanced training because they had to redo previously completed training to regain lost proficiency. Air Combat Command officials also told us that sequestration reductions resulted in the cancellation of some training courses that may affect officer career progression and the availability of these skill sets. For example, Air Force documents show that a cancelled course for weapons instructors prevented more than 100 weapons officers from being available for assignment and will decrease the Air Force’s ability to fill weapons instructor positions through at least fiscal year 2016. Due to fiscal year 2013 sequestration reductions, the services also cancelled some joint exercises, which led to lost opportunities to perform training across services or combined training with other nations. 
For example, officials from the Navy’s Pacific Fleet told us they cancelled their biennial Northern Edge 13 joint training exercise. According to Pacific Fleet officials, this joint exercise is designed to include Navy, Marine Corps, and Air Force participation and is one of two regularly scheduled joint exercises in the U.S. Pacific Command’s area of responsibility. These officials noted that its cancellation resulted in a four-year gap in holding the event, limiting opportunities to conduct joint training within the command. Further, Air Combat Command officials told us that the Air Force cancelled or reduced participation in 32 of 48 large-scale planned exercises, ultimately affecting training for 283 units and 13 partner nations. Of those exercises cancelled, two were the Air Force’s joint and multinational “Red Flag” exercises designed to emulate the full spectrum of operations. Air Combat Command and Air National Guard officials told us these lost training opportunities affected both active and reserve units’ ability to conduct combined training and build relationships with partner nations. The cancellation of exercises and reduced training opportunities also resulted in reported delays in meeting some of the services’ goals to restore readiness for units affected by a high pace of combat operations. For example, according to DOD budget documents, the Army planned to begin refocusing the training for brigade combat teams undergoing combat training center rotations in fiscal year 2013 on skills necessary to perform full spectrum operations. However, Army headquarters officials stated that the cancellation of six training exercises, along with other reductions to training, delayed their goal of achieving readiness for full spectrum operations for brigade combat teams from fiscal year 2019 until at least 2020.
Similarly, an Air Force headquarters official told us that it took squadrons that were stood down an average of 9 months to regain pilot proficiency and recover lost readiness. As a result of being stood down and the time spent regaining proficiency, Air Combat Command officials reported that some pilots were only able to complete mission-specific training prior to deploying and were unable to train for other missions across the full spectrum of operations. Some actions the service components took to reduce spending in fiscal year 2013 reportedly decreased the availability of forces and equipment to support emergent needs or other purposes. For example, Army headquarters and Forces Command officials told us they reduced training funds for their non-deploying units, which required these units to focus resources on individual- and squad-level training and resulted in fewer units trained and available for deployment than planned. According to testimony by the Chief of Staff of the Army, 85 percent of brigade combat teams were not ready for combat in fiscal year 2013 had they been required to deploy. Army Forces Command officials told us that their training plans are designed so that half of active component brigade combat teams are ready to deploy if required. However, these officials told us that only three brigade combat teams with the required training were available to meet surge requirements at the end of fiscal year 2013. Air Force officials also reported effects on the number of units available to respond to emergent requirements and the availability of equipment for training. For example, Air Combat Command officials told us that from April to July 2013, when the Air Force stood down 17 operational squadrons and reduced flying hours for 10 more squadrons, it had only 1 squadron with the required combat training available to deploy for emergent requirements.
Furthermore, according to internal summary reports by two Air Force commands, these commands chose to reduce spending by limiting their supply purchases for squadrons, including purchases of spare parts for equipment and weapon system repairs, to those considered essential for fiscal year 2013, and to defer any other purchases to future years. Air Force maintenance officials told us that the reduction in the stockpile of repair parts generally led to increased repair times in fiscal year 2013, although the specific duration of those delays was unknown. The officials also noted that the resulting shortfalls of available spare parts sometimes delayed maintenance completion on equipment and weapon systems, thereby reducing the availability of those items to units for training and operations. In addition, due to reduced resources in fiscal year 2013, the Navy postponed or cancelled some planned deployments, which resulted in a 10 percent decrease in its deployed forces worldwide. For example, as noted above, the Navy delayed the deployment of the USS Harry S Truman carrier strike group, which according to a Navy headquarters official reduced the Navy’s presence in the U.S. Central Command’s area of responsibility to one carrier strike group. This official told us that the delay of the Truman also affected the deployment of a subsequent carrier strike group, which decreased the Navy’s ability to respond to contingency operations. Furthermore, due to spending reductions in fiscal year 2013, officials from the Navy’s Pacific Fleet told us they reduced funding for their ship fuel program, which led to cancelled deployments and reductions to training. According to Pacific Fleet data on fourth quarter fuel reductions, Fourth Fleet, which is assigned to support U.S. Southern Command, received fuel for about 55 percent of its scheduled training and operational requirements.
As a result of fuel and other spending reductions, officials from Fleet Forces Command and Pacific Fleet told us the Navy cancelled a number of deployments, including:

- the USNS Comfort and its supporting medical units, the USS Kauffman, and a maritime civil affairs team;
- the USS Rentz and USS Jefferson City, which would have supported
- the USS Pearl Harbor, which would have supported partnership activities in the region.

Based on our review of DOD data and interviews with service component officials, we found that the services relied on internal prioritization processes to manage fiscal year 2013 sequestration reductions by applying reductions to lower-priority areas and also used existing funding flexibilities, such as reprogramming and transfer authorities, to mitigate the effects of sequestration. By using funds transferred or reprogrammed into the operational tempo and training category, some service officials reported being able to fund some unplanned requirements or reverse some actions initially taken in response to sequestration reductions. For example, according to DOD reprogramming documents:

- As discussed earlier, in July 2013 DOD transferred about $200 million into the Air Force’s operation and maintenance account to mitigate shortfalls in its flying hour program. According to Air Combat Command officials, this action allowed the Air Force to resume some flying operations for squadrons that had been stood down. Our analysis of fiscal year 2013 Air Force flying hour data shows that, after declining from April through June, active duty combat units began increasing their execution of flying hours in July and August.

- DOD transferred about $135 million to the Navy’s operation and maintenance account to restore some flying hours and support unbudgeted missions, among other things.
According to an official from the Navy’s financial management office, this funding allowed the Navy to restore tactical flying hours and fund unbudgeted ship operations in the Middle East. The use of transfers and reprogrammings gave the services some flexibility to manage reductions, but did not allow them to reverse some actions taken in response to the fiscal year 2013 sequestration. Specifically, Army and Air Force officials told us that because some of the transferred or reprogrammed funds did not become available until later in the year, some cancelled exercises and training classes could not be restored. For example, Army Forces Command officials said that because of capacity limitations at combat training centers, they would have been unable to reschedule cancelled exercises even if additional funds had become available later in the year. Additionally, Air Combat Command officials described to us the difficulty of spending reprogrammed funds because of the interrelationship of funding sources and activities. For example, these officials told us that when transferred or reprogrammed funds for flying hours became available, the Air Force first had to restore training for aircrew and maintenance personnel who had lost critical skills before pilots were able to resume flying hours. Beginning in fiscal year 2013, some service officials reported taking actions to help mitigate existing readiness shortfalls that were exacerbated by sequestration. For example, Army Forces Command officials told us that in response to concerns about the service’s ability to surge units during sequestration and having only three brigade combat teams available to meet surge requirements at the end of fiscal year 2013, the Army created the “Army Contingency Force.” According to DOD and Army budget documents, the Army Contingency Force will include a mix of fully trained brigades capable of providing an initial response and surge capability to respond to emerging requirements.
Furthermore, as part of its ongoing efforts to address concerns about the pace of operations, length of deployments, and overall readiness, the Navy recently revised its operational schedule—referred to as the Optimized Fleet Response Plan—for its carrier strike groups. While this plan is not in direct response to sequestration, according to Navy documents and testimony from the Chief of Naval Operations, it is intended to help mitigate readiness and deployment challenges that were exacerbated by sequestration by providing more stable operational schedules to ensure that ships are able to adequately address their training and maintenance requirements. In implementing sequestration, the service components reduced fiscal year 2013 funding for the maintenance and weapon systems support category by about $2.7 billion, representing about 9 percent of the service components’ enacted amounts for the category according to our analysis of DOD’s budget execution data (see fig. 11). The active and reserve components allocated varying amounts and percentages of the fiscal year 2013 sequestration reductions within their maintenance and weapon systems support category, as shown in figure 12. The active Army, Navy, and Air Force components generally applied larger sequestration reduction amounts—in terms of dollars—to the maintenance and weapon systems support category than did the reserve components, which reflects the larger size of the active components’ enacted amounts relative to those of the reserve components. The active component reduction amounts ranged from $0 for the Marine Corps to about $1.3 billion for the Air Force. Reserve component reduction amounts ranged from $0 for the Air National Guard to about $125 million for the Air Force Reserve. However, as also shown in figure 12, the reductions in percentage terms varied substantially among both the active and reserve components. 
Based on our review of DOD’s budget execution data, internal maintenance records, service guidance, and interviews with maintenance officials, we found that the service components took steps during the fiscal year 2013 sequestration, in response to a DOD memorandum, to preserve funding for maintenance activities most directly associated with equipment readiness for those units deploying or next-to-deploy in support of ongoing operations, and to reduce spending on equipment maintenance for later-deploying units. Based on our analysis, as a result of their efforts to reduce spending on lower-priority maintenance activities for units that were not deploying in the near term, officials from service component headquarters and maintenance commands identified effects in three areas: increased costs and deferred spending for maintenance delayed to future fiscal years; delayed time frames for completing ongoing maintenance during the year; and the reduced availability of equipment, supplies, and personnel for conducting maintenance work and training. These effects varied by component. For example, the active Army, Navy, and Air Force components reported effects related to each of those three areas. However, officials from the Marine Corps’ active and reserve components told us there was little to no effect on equipment maintenance because they utilized supplemental overseas contingency operations funding to offset sequestration reductions. The active Marine Corps, in particular, received a large amount of overseas contingency operations funding for depot maintenance in fiscal year 2013 relative to the amount requested. Some actions the service components took to reduce their expenses in fiscal year 2013, such as deferring equipment maintenance, contributed to deferred spending and the potential for increased costs in future fiscal years.
Service maintenance officials told us that from year to year, each service generally defers some portion of its planned equipment maintenance for a variety of reasons, such as capacity limitations at maintenance facilities, operational considerations that postpone the availability of equipment for maintenance, and requirements that exceed available funding. According to service officials, the total amount of deferred maintenance in any given year cannot be specifically attributed to one factor over another, including sequestration reductions in fiscal year 2013. However, officials from the Army’s and Navy’s maintenance commands told us that sequestration reductions contributed to the following examples of deferred maintenance and spending in fiscal year 2013: Maintenance officials from Army headquarters reported deferring about $630 million of costs from fiscal year 2013 to fiscal year 2015 to perform maintenance on equipment returning from overseas contingency operations. According to these officials, this amount included field-level maintenance for 28 aircraft and maintenance funding for about 13,000 pieces of equipment, or about 9 percent of the approximately 142,000 equipment items the Army planned to reset in fiscal year 2013. Naval Sea Systems Command officials told us they deferred from fiscal year 2013 to fiscal year 2014 at least 75,000 days of civilian labor and their associated expense for a variety of major projects, such as ship and submarine engineering overhauls. Based on our review of Navy budget documents, this amount represented about 2 percent of the 4.6 million days of labor planned for maintenance in fiscal year 2013, or the approximate equivalent to shipyard maintenance on two Los Angeles-class submarines for 6 months each. U.S. 
Pacific Fleet officials also told us that maintenance deferrals into fiscal year 2014 displaced maintenance planned for that year on other surface ships or submarines, which in turn affected those vessels’ availability in the fleet for training and operations. However, officials could not quantify the precise backlog of ship and submarine maintenance in 2014 or the effect on training or deployment schedules attributable to sequestration as opposed to other factors. In connection with the reported instances of deferred spending and maintenance, Army, Navy, and Air Force officials expect that deferred maintenance will lead to increased future costs that could not be quantified at the time of our review. For example, officials from Headquarters Air Force told us that, within acceptable risk levels, aircraft continued to fly past their scheduled maintenance time frames in fiscal year 2013. These officials further explained that they anticipate that future maintenance and repairs will be more expensive because of the additional wear and tear on the aircraft. Similarly, the Chief of Naval Operations testified in February 2013 that the cancellation of maintenance for ships and aircraft will reduce their service lives and increase the likelihood of breakdowns, leading to a higher cost for those additional future repairs. In September 2012, we found that the Navy has recognized that deferring maintenance can affect readiness and increase the costs of later repairs. Studies have found that deferring maintenance on ballast tanks to the next major maintenance period will increase costs by approximately 2.6 times and that a systematic deferral of maintenance may make it cost prohibitive to keep a ship in service.
Some actions the service components took to reduce their expenses in fiscal year 2013, such as furloughing civilian employees and limiting purchases of spare parts and other supplies, reportedly delayed the completion of ongoing maintenance and, in some instances, affected time frames for maintenance work scheduled for future years. For example, Naval Air Systems Command officials told us that personnel shortfalls resulting from the 6-day civilian furlough and hiring freeze, among other factors, contributed to a delay in the completion of planned maintenance on 43 aircraft and 289 engines in fiscal year 2013. These officials told us that technicians completed all of the delayed work on those items in 2014, but that backlog in turn delayed maintenance on other aircraft and engines that was previously scheduled for fiscal years 2014 and 2015. Further, Naval Air Systems Command officials stated that recovery from the work backlog has been slowed by delays in hiring civilian personnel to restore the total workforce to pre-sequestration levels, which these officials expected to be complete by June 2015. In addition to delays in repairing naval aircraft, U.S. Pacific Fleet officials told us that some of their ships were affected by maintenance delays in the shipyards. For example, according to these officials, reduced spending and civilian personnel shortfalls contributed to a 2-month delay in the completion of maintenance on the USS John C. Stennis aircraft carrier. The officials noted that the delay to the Stennis, along with other factors, led to a 2-month delay in the start of maintenance work on the USS Nimitz aircraft carrier, which began in January 2015. However, U.S. Pacific Fleet officials told us that the delay in the start of maintenance on the Nimitz did not affect its planned deployment schedule. Spending reductions reportedly also contributed to delays in Air Force maintenance.
Specifically, as discussed earlier, officials from two Air Force commands told us that they reduced spending by limiting their purchases of spare parts for equipment and weapon system repairs to those considered essential for fiscal year 2013, and deferred any other purchases to future years. Air Force officials said that the reduction in the stockpile of repair parts generally led to increased repair times in fiscal year 2013, although those time frames were not specifically quantified. Reductions in maintenance funding that the service components implemented in response to the fiscal year 2013 sequestration contributed to some reported instances of decreased availability of equipment for conducting operations and training and shortfalls in supplies and personnel for performing maintenance. For example, officials from Naval Air Systems Command stated that the command reduced funding to perform depot maintenance work and recertification procedures on over 50 percent of weapons and weapon components planned at the beginning of fiscal year 2013—including critical missile systems like the Standoff Land Attack Missile-Expanded Response, Harpoon, Sidewinder, and Advanced Medium-Range Air-to-Air missiles. Further, these officials told us that the deferred maintenance on those approximately 800 missiles and components led to shortfalls in the availability of weapons for the fleet relative to ship inventory requirements for operations and training. Consequently, Naval Air Systems Command officials told us that the reduced availability of ready and certified weapons and weapon components necessitated transfers of weapons across ships to conduct planned training and operations with the required quantity of weapons. These officials noted that the Navy has budgeted for the completion of this deferred maintenance on weapons and components in fiscal years 2016 through 2020.
In addition to the reduced availability of some weapons and equipment for training and operations, reduced maintenance spending led to reported instances of shortfalls in personnel needed to perform planned maintenance. Officials from the Air Force’s and Navy’s maintenance commands stated that a civilian hiring freeze and the 6-day civilian furlough, due in part to sequestration, affected the availability of personnel needed to perform maintenance work and related inspections. For example, Air Force Materiel Command officials reported that the combined effect of these personnel shortfalls led to depot work backlogs for aircraft and engine maintenance. Specifically, in an internal command report on the effects of the civilian furlough, Air Force Materiel Command estimated that in the fourth quarter of fiscal year 2013, it lost about 1 million hours of production, or 25 percent of its planned capacity. The lost production hours caused an estimated 33 percent reduction in depot efficiency and decreased the availability of aircraft to squadrons, including two aircraft whose return to their squadrons during the fourth quarter was delayed because of restrictions on personnel overtime. Similarly, Naval Sea Systems Command officials explained that the hiring freeze exacerbated a pre-existing problem at Navy shipyards in which the planned maintenance workload exceeded the number and types of skilled civilian personnel (e.g., engineers) available in the workforce to perform the maintenance. U.S. Pacific Fleet officials also stated that the civilian furlough affected the availability of diesel engine inspectors and delayed by about 1 month the completion of maintenance on the USS Comstock amphibious dock landing ship, from August to September 2013. According to these officials, however, the delay in maintenance did not affect the ship’s planned deployment schedule in 2014.
Officials with the military services told us that, in response to a DOD memorandum, they generally focused fiscal year 2013 spending reductions on repairs for equipment recently returned from deployment. For example, our analysis of fiscal year 2013 budget execution data showed that, as a result of sequestration and transfer or reprogramming actions, the Army applied substantial reductions to its equipment reset program, which restores equipment returning from overseas contingency operations for use by later-deploying units. Army budget officials told us they distributed the transferred and reprogrammed reset program funds to other emergent or higher priority activities or programs. Our analysis showed that together, the reprogramming and sequestration reductions decreased the reset program by about $1.7 billion relative to the fiscal year 2013 enacted amount of about $3.7 billion for the program. In contrast, the Army reduced funding for the base budget portion of its maintenance and weapon systems support category—which funds depot maintenance on equipment for deploying units—by only about 1 percent relative to the base enacted amount for fiscal year 2013 of $1.6 billion. The services also told us the use of existing funding flexibilities helped mitigate some negative effects from fiscal year 2013 sequestration-related spending reductions to maintenance and weapon systems support programs. For example, Navy officials told us that they applied funds transferred or reprogrammed from other accounts or activities to the maintenance and weapon systems support category to perform maintenance projects on eight ships that were originally targeted for deferrals in the beginning of fiscal year 2013.
In addition, Naval Air Systems Command officials told us that the Navy transferred or reprogrammed about $4.9 million from other sources to fund urgent aircraft maintenance for the Navy Reserve—work that was expected to be deferred due to initial sequestration reductions in fiscal year 2013 funding. Our review of service documentation and interviews with officials showed that the services took a number of other steps to mitigate the effect of fiscal year 2013 sequestration reductions on their maintenance and weapon systems support activities. For example, officials from Naval Sea Systems Command told us they frequently leveraged a pre-existing process, which they referred to as “rebaselining,” in fiscal year 2013 to better align the shipyards’ workforce and capacity with the top priorities for maintenance within their workload. “Rebaselining” is a process of changing the cost, schedule, or performance associated with maintenance workloads. These officials stated that they utilized the rebaselining process more often in fiscal year 2013 than in prior years as a way to mitigate some effects of sequestration and fiscal uncertainty, including the decreased availability of certain maintenance personnel due to hiring freezes and restrictions on overtime work. Additionally, these Naval Sea Systems Command officials reported that they petitioned for and received an exemption for shipyard workers from the civilian furlough, which protected a substantial portion of the shipyard workforce from the disruption of furlough days. Officials believe this furlough exemption also enabled them to maintain a substantial level of shipyard productivity. To reduce the effect of sequestration reductions, the Army temporarily reduced the standard at which non-deployed units were required to maintain their equipment, including vehicles, in fiscal year 2013, while still ensuring those items were safe to operate.
The change in standard enabled unit commanders to reduce their expenses by delaying the purchase of repair parts and deferring maintenance costs. According to Army maintenance officials, in fiscal year 2013, the Army also created the Army Maintenance Sequestration Working Group to assess depot workload requirements, reprioritize available funding, and make recommendations for reprogramming actions to meet Army equipment maintenance requirements within the budgetary constraints imposed by sequestration. However, Army officials told us that after fiscal year 2013, they no longer needed this working group and returned to managing the prioritization of budgetary resources for maintenance through pre-existing processes. In implementing the fiscal year 2013 sequestration, the service components reduced fiscal year 2013 funding for the base operating support category by about $3.8 billion, representing about 11 percent of the service components’ total enacted amount for the category, according to our analysis of DOD’s budget execution data (see fig. 14). The active and reserve components allocated varying amounts and percentages of the fiscal year 2013 sequestration reductions within their respective base operating support categories, as shown in figure 15. The active components of the Army, Navy, and Air Force generally applied larger sequestration reduction amounts—in dollar terms—within the base operating support category than did the reserve components, which reflects the larger size of the active components’ enacted amounts for base operating support relative to those of the reserve components. The active components’ reduction amounts for base operating support ranged from $30 million for the Marine Corps to $1.4 billion for the active Army. The reserve components’ reduction amounts to base operating support ranged from $0 for the Air Force Reserve to $102 million for the Army National Guard.
As also shown in figure 15 above, the active components’ reduction to the category as a percentage of the enacted amount ranged from about 1 percent to nearly 15 percent and the reserve components’ reduction ranged from 0 percent to about 21 percent. Based on our review of DOD’s budget execution data, internal briefing documents and reports, service guidance, and interviews with installation management officials, we found that the service components took steps to preserve their operating support activities over infrastructure-related functions within the base operating support category while implementing sequestration in fiscal year 2013. In particular, service installation management officials told us that they protected funding for certain operating support functions that they considered essential, such as facility leases, utilities, and civilian salaries. These officials also told us that, in response to a DOD memorandum, they prioritized base operating support expenses that were related to family programs and warfighter support. For example, to ensure child development centers—a type of family program—had enough care providers to maintain accreditation, DOD exempted personnel working at these centers from the 6-day civilian furlough. Officials with most of the active and reserve service components identified some effects resulting from sequestration reductions, but some installation management officials we interviewed told us that they were not aware of significant negative effects within their command or component. For example, installation management officials we interviewed from four of eight Air Force active component commands reported to us that they were able to accomplish their missions in fiscal year 2013 without any critical disruptions to the delivery of base operating support services. 
Also, budget and installation management officials from the Marine Corps and Marine Corps Reserve told us that there were no significant effects to base operating support due to sequestration. Other active and reserve components reported to us that they reduced funding for their lower priorities, such as infrastructure projects they considered non-essential (e.g., repairs or other projects not related to the protection of health or safety). As a result of their efforts to reduce spending on lower-priority activities within the base operating support category in fiscal year 2013, active and reserve component officials identified certain negative effects that, based on our analysis, were generally related to deferred spending to a subsequent fiscal year, delayed time frames for completing infrastructure projects and repairs, and the reduced availability of personnel, equipment, and facilities for performing some emergency response duties or training and operations. Officials we interviewed from 7 of the 10 service components told us they deferred spending on some base operating support contracts, infrastructure projects, or both as a result of fiscal year 2013 sequestration reductions. According to these officials, the deferred spending shifted the planned costs for those activities and projects to future fiscal years. Six of eight active component Air Force commands that we interviewed reported that they cancelled or changed the terms of some base service contracts to reduce their fiscal year 2013 expenses for services such as dining, custodial services, or groundskeeping, and deferred some of those contract costs into fiscal year 2014. For example, according to an internal summary report by Air Force Materiel Command, the command adjusted some service contracts for its bases, including one for a dining facility on Robins Air Force Base that deferred $1.9 million of planned costs in fiscal year 2013 to fiscal year 2014.
In some instances, the service components’ deferrals of infrastructure spending in fiscal year 2013 may lead to increased costs that cannot be determined at this time. For example, according to installation management officials from the Army and internal reports, the Army deferred a substantial amount of infrastructure-related spending from fiscal year 2013 to future years. Specifically, our analysis of budget justification and execution data showed that the Army reduced its fiscal year 2013 base funding for facilities sustainment projects—including preventive maintenance and repairs—by nearly $1 billion. This amount represented about 40 percent of its base budget request for fiscal year 2013 sustainment projects. Officials from the Army’s installation management command told us that the negative effects of reduced sustainment funding and preventive maintenance on service-wide infrastructure conditions and their expected life spans are not immediately apparent and will be unknown for several more years. Moreover, the amount of future increased costs that may be needed to restore infrastructure to required conditions was also undetermined at the time of our review. However, officials stated that reductions to preventive maintenance and repairs eventually necessitate increased investment to repair or replace deteriorated infrastructure or to demolish facilities that are no longer safe or require cost-prohibitive restoration. This is consistent with our prior work. In April 2008 and May 2009, we found that deferring sustainment of DOD facilities will likely result in continued facility deterioration and higher future costs. In addition, the services identified some instances of compressed time frames for awarding operating support and infrastructure contracts that may have led to higher contract costs. 
According to Army, Navy, and Air Force officials, uncertainty about funding levels for base operating support for much of fiscal year 2013 was exacerbated by sequestration. The duration of this uncertainty about base operating support budgets reduced the amount of time available for commands to award many of their contracts relative to the available time in prior years. For example, the Navy awarded about $570 million worth of various facilities sustainment, restoration, and modernization contracts within the last 2 weeks of fiscal year 2013. This amount represented a 10 percent increase over the dollar amount of contracts awarded during the same 2-week period of the prior year. Navy and Air Force officials told us that the limited time they had to review and negotiate contracts likely resulted in some higher prices, but stated that those additional costs cannot be determined. Officials from 7 of 10 service components told us or reported to DOD that they delayed the completion of infrastructure projects and repairs in fiscal year 2013 due to sequestration reductions. According to these officials, the delayed work exacerbated existing backlogs and in turn contributed to deferrals of other repairs and projects in fiscal year 2014. Service component officials told us that, from year to year, they generally defer some portion of infrastructure projects for a variety of reasons, such as the emergence of other competing priorities, the inability to design and execute projects during the year due to complications that arise (e.g., weather delays or environmental effect considerations), and requirements that exceed available funding. According to service installation management officials we spoke with, the total dollar amount or quantity of deferred infrastructure projects in any given year cannot always be attributed specifically to one factor over another, including sequestration reductions in fiscal year 2013.
However, these officials also told us that sequestration-related spending reductions contributed to their decisions to defer facility maintenance and projects in fiscal year 2013, thus exacerbating an already growing backlog of projects and repairs over the past years. For example, Army installation management officials told us that certain utilities modernization and upgrade projects were not completed in fiscal year 2013 as a result of sequestration and the lower priority assigned to those types of projects compared to others related to health or safety, such as airfield runway repairs. Likewise, the Air Force and Navy told us that they deferred some energy efficiency projects that were expected to lead to longer-term savings and expedite the achievement of energy savings goals. According to testimony by the Chief of Staff of the Army in November 2013, sequestration reductions that the Army applied to its facility sustainment funding in particular, which totaled about $1 billion, contributed to a backlog of approximately 158,000 maintenance work orders at the end of fiscal year 2013—an estimated 500 percent increase over the prior year. However, Army installation management officials told us that the estimated number of unfilled work orders may understate the value and quantity of maintenance that was not performed in 2013 because the Army conveyed informal guidance directing base personnel to refrain from submitting non-essential work order requests that year due to the limitations on available sustainment funding across installations. Similarly, officials from the eight active-component Air Force commands that we interviewed and the Air National Guard reported various delays to routine maintenance, repairs, or infrastructure inspections related to reductions to sustainment funding. 
For example, Air Force Special Operations Command officials told us that some recurring tasks, such as airfield vegetation clearing and runway rubber removal and re-striping, were delayed to fiscal year 2014 to accommodate more urgent tasks or projects. They explained that this in turn contributed to the delayed completion of facility repair and construction projects that were planned for that year. Air Combat Command officials also reported that they delayed infrastructure inspections planned for two of the command’s bases in fiscal year 2013 by 1 year because of reduced funding. As a result, officials stated that they had to prioritize those bases’ infrastructure projects based on outdated facility condition assessments and the projects likely received lower priority for funding than others for which more current assessment ratings were available. Fiscal year 2013 sequestration-related spending reductions led to some shortfalls in personnel, equipment, and facility availability for certain base operating support functions and programs, which increased program risks and caused some disruptions to training and operations. For example, five of eight active component Air Force commands reported to us that some of their installations experienced reduced fire and emergency response personnel or related equipment, fewer security force personnel and vehicles than needed, or both. Air Force officials said the shortfalls decreased their capability to respond to critical incidents, such as aircraft fires or fuel spills, and reduced support to the air base defense program. However, officials were unable to quantify the specific personnel shortfalls or the risk resulting from decisions to reduce funding for these base services. Additionally, Air National Guard officials told us that sequestration affected the availability of some facilities for training.
Specifically, Guard officials stated that they anticipated and planned for a sequestration reduction of about 35 percent (approximately $100 million) to the component’s infrastructure budget in fiscal year 2013. As a result of the expected budget reduction, the Air National Guard withheld facilities sustainment funding and reduced expenses. According to officials, these reductions, compounded by civilian personnel shortfalls due to the 6-day furlough, delayed the availability of certain facilities for training purposes until later in the fiscal year when the Guard allocated additional funds to its infrastructure budget. According to internal Navy briefings and summaries provided to service leadership, sequestration reductions to base operating support funding and the reduced availability of civilian personnel due to sequestration-related hiring freezes or furloughs led to decreased capacity in port and airfield operations. For example, across installations, port operations were restricted to normal business hours unless a flag officer exemption was granted to permit after-hours access. This restriction in turn led to some increased costs associated with additional steaming time for ships that arrived late or early to port. Specifically, Naval Surface Force Atlantic calculated that it cost the command an additional $135,000 for three ships to auxiliary steam when they could not connect to shore power because of the port-hour restrictions. Service officials told us they applied the fiscal year 2013 sequestration reductions more heavily toward funding for infrastructure projects that they considered to be non-essential as opposed to funding for certain operating support services, such as facility leases and utilities. The service components’ sequestration reduction to infrastructure-related subactivity groups (about $2.8 billion) was nearly three times higher than the reduction amount applied to operating support subactivity groups (approximately $1 billion).
Based on our interviews with service budget officials and our review of budget execution data, the services also relied on existing funding flexibilities, such as transfer and reprogramming authorities, to mitigate the effects of sequestration on base operating support. In fiscal year 2013, the service components collectively transferred or reprogrammed about $1.5 billion of funds from other budget activities or accounts to the base operating support category. Service budget office officials told us that the additional funds enabled them to restore funding to some activities or projects that had been initially reduced or cancelled, as well as to fund emergent priorities that were not factored into the budget. Transfers or reprogramming of funds into the base operating support category sometimes reflected the services’ decisions to reprioritize and invest in infrastructure. For example, Army Reserve officials told us that funds transferred or reprogrammed into its infrastructure subactivity group within the base operating support category helped mitigate the effects of reductions to infrastructure funding earlier in the year that were made in planning for sequestration. In addition, officials from the Navy’s budget office told us they utilized transferred or reprogrammed funds from other areas of their budget to reinvest in facilities and other infrastructure during the fourth quarter of fiscal year 2013 at the behest of Navy leadership due to the critical importance of port and airfield conditions to fleet readiness. Transfers or reprogramming actions, as well as the receipt of some supplemental funding that Congress appropriated for disaster relief activities in the aftermath of Hurricane Sandy, contributed to the service components’ combined end-of-year obligation rate of about 98 percent relative to the amount enacted for the base operating support category. 
However, service installation management officials emphasized to us that the timing of these transfer or reprogramming actions and the subsequent availability of these funds limited the time they had to apply these funds to areas of greatest priority before the end of the fiscal year. The services took other actions in response to DOD’s guidance on implementing the fiscal year 2013 sequestration that helped them mitigate the effect of reduced resources on base operating support. For example, service installation management officials told us that they used mechanisms outlined in a DOD memorandum to request certain exceptions to civilian personnel furloughs, or to recall civilians from furlough, in order to mitigate personnel shortfalls in operating support services (among other areas) on a limited basis. This memorandum permitted requests for exceptions to the furlough to provide additional personnel to fulfill emergency services shortfalls and, as noted earlier, to ensure child development centers had enough care providers to maintain accreditation, among other things. In addition, the Army allowed borrowed military personnel to perform duties to mitigate reported shortfalls in base operating support personnel, such as gate guards and groundskeepers. In a February 2013 memorandum signed by the Assistant Secretary of Defense for Readiness and Force Management, DOD recognized the risk that the use of borrowed personnel may pose to readiness and training. The Secretary of the Army echoed this point in a March 2013 memorandum. Total fiscal year 2013 sequestration reductions for the RDT&E and procurement accounts were about $6.1 billion and $9.8 billion, respectively, which represented reductions of about 8.1 percent and 6.7 percent of the 2013 sequestrable base, and included both fiscal year 2013 funding and prior year unobligated balances, according to our analysis of DOD’s budget execution data (see fig. 17).
Within the combined RDT&E and procurement accounts, DOD (in the defense-wide accounts) and the military services took sequestration reductions from either fiscal year 2013 funds, prior year unobligated balances, or a combination of the two. Within the RDT&E accounts, about 90 percent ($5.4 billion) of the sequestration reduction came from fiscal year 2013 funds, while the remaining $633.2 million came from prior year unobligated funds, as illustrated in figure 18. Also as shown in figure 18, the overwhelming majority of RDT&E sequestration reductions across all DOD components were from fiscal year 2013 funds. Within the procurement accounts, about 58 percent ($5.7 billion) of the sequestration reduction came from fiscal year 2013 funds, while the remaining $4.1 billion came from prior year unobligated funds, as illustrated in figure 19. As also shown in figure 19, the Army and Navy, while taking vastly different amounts of reductions in terms of dollars, took approximately equal proportions from their fiscal year 2013 funds and prior year unobligated balances, while the majority of the Air Force and defense-wide reductions came from fiscal year 2013 funds. In our case study selection of the acquisition programs associated with 19 weapon systems, we found the fiscal year 2013 sequestration reduced either RDT&E or procurement funds or a combination of the two, based on our analysis of DOD data. Specifically, sequestration reduced RDT&E funds for all 19 weapon systems by a total of $713 million, and it reduced procurement funds for 16 of the weapon systems by a total of $2.2 billion. DOD and the services took those sequestration reductions from a combination of fiscal year 2013 funds and prior year unobligated balances for 16 of the 19 weapon systems in our case study. All 19 weapon systems we reviewed used fiscal year 2013 funds to cover some portion of the sequestration reductions.
Likewise, 16 weapon systems used available prior year unobligated balances to cover some of the sequestration reduction. Table 2 provides details on the sequestrable base, sequestration reduction, and the source of funds of the reduction for each of the 19 weapon systems in our case study. Officials associated with the majority of the 19 weapon systems we reviewed in our case study of the RDT&E and procurement accounts reported experiencing unplanned effects on their programs due, in part, to the fiscal year 2013 sequestration. Although many programs aimed to preserve high priorities, such as procurement quantities, the effects of sequestration were still sometimes felt in other areas of the acquisition process. Specifically, RDT&E and procurement program officials across the services identified effects that can be categorized into three primary and interrelated areas—costs and spending, time frames, and system availability. In general, the effects may be interrelated, as the development of one effect may have led to the occurrence of others. Additionally, some program officials noted that their weapon system programs may experience potential future effects due to sequestration. Table 3 shows the 19 weapon systems we reviewed and the categories of identified sequestration effects to those systems’ acquisition programs. Overall, officials from acquisition programs associated with 4 of the 19 weapon systems we reviewed identified effects in all three categories. Officials from acquisition programs associated with 2 of the 19 weapon systems stated that the fiscal year 2013 sequestration had no immediate effects—the Air Force’s KC-46 Tanker and the joint Air Force and Navy F-35 Joint Strike Fighter. Officials from the acquisition programs for both of these weapon systems stated that they were financially positioned to manage the sequestration reductions and withstand immediate effects.
For example, KC-46 officials stated the program was protected from detrimental effects because they had built in buffer dollars that were initially set aside for potential risk from changes in contracts and/or testing. This shielded the program from the sequestration reductions. According to KC-46 officials, the sequestration reduction of $143 million in RDT&E dollars was covered solely by fiscal year 2013 funds that were available due to a combination of unused engineering change orders and savings from an Aircrew Training System contract that was much lower than anticipated. Officials from acquisition programs associated with 4 of the 19 weapon systems we reviewed indicated that increased costs to particular aspects of their activities were due, at least in part, to the fiscal year 2013 sequestration. For example: Air Force F-15 officials reported that sequestration resulted in late completion and delivery of software development to the integrating contractor. This, in turn, resulted in increased costs to particular aspects of its programmatic activities. According to program officials, the program is currently in negotiations on the exact dollar amount of these increased costs, but the contractor is seeking $4.2 million. Ultimately, according to these officials, the Air Force would have to pay the negotiated amount to the contractor. Navy P-8A Poseidon officials reported that sequestration, in combination with congressional reductions, led to delays in establishing depot maintenance repair capabilities that were anticipated to result in cost savings. According to the officials, the delay in establishing these depot capabilities will defer such cost savings, resulting in a cumulative increase in overall lifecycle costs of $191 million, of which $56.7 million was directly attributed to sequestration. Navy Littoral Combat Ship officials reported possible future effects due to the fiscal year 2013 sequestration, which reduced its budget for ship construction changes.
As a result, more expensive design changes may be carried out after delivery to the Navy, as the cost to execute these changes at a later stage will be higher. Officials from acquisition programs associated with 15 of the 19 weapon systems we reviewed reported experiencing delays due in part to the fiscal year 2013 sequestration. These included delays in testing, procurement, modernization efforts, and contract awards. For example: Army Apache AH-64E officials stated that the combined effects of the fiscal year 2013 sequestration and the continuing resolution affected the timeline for acquisition decisions for the AH-64E Apache in fiscal years 2013 and 2014 and for fiscal year 2014 aircraft procurements, which resulted in contract changes and delays to time frames for evaluating and negotiating the system’s contract. Navy CH-53K officials reported that sequestration reductions contributed to a two-month delay to the program’s schedule, including the start of low-rate initial production, where small quantities of the system are produced for testing and evaluation before greater quantities are produced for fielding. These officials told us the delays affected acquisition milestones and the fielding of a more capable helicopter, and estimated that sustaining the program for an additional two months would increase program costs by about $20 million to $30 million. Navy AIM-9X Block II officials did not report an immediate effect due to the fiscal year 2013 sequestration. However, program officials stated that certain obsolescence redesign activities for outdated software and hardware and the procurement of additional missile telemetry equipment were deferred because of sequestration reductions, which they indicated could result in a future production gap.
Officials from acquisition programs associated with 9 of 19 weapon systems we reviewed reported experiencing reduced or deferred system development or procurement efforts as a result of fiscal year 2013 sequestration reductions, which in turn delayed the release of these enhanced systems to the warfighter. For example: Space Based Infrared System High (SBIRS High) officials stated that budget constraints from a $7.1 million sequestration reduction to procurement funds, along with the then-ongoing fiscal year 2013 continuing resolution, led them to re-plan their mobile acquisition strategy, which resulted in the procurement of fewer mobile ground platforms to meet full operational capability. In order to ensure the SBIRS High system maintains capabilities under applicable threat environments, five upgraded mobile ground platforms are required. However, due to the re-plan and sequestration reductions, SBIRS High now has three mobile ground platforms. According to officials, the loss of two platforms reduces the ability of SBIRS High to meet overall requirements. Army MQ-1C Gray Eagle unmanned aircraft system officials told us that a reduction in procurement funds due to sequestration resulted in deferrals and delays for procuring a number of upgrades to the system, including radio upgrades, new shipping containers, and an engine lifetime extension. These deferrals could, in turn, delay the eventual fielding of the upgraded aircraft to the warfighter, since they increase the risk that the system may not receive necessary certifications that it is safe and suitable for use. Navy H-1 officials stated that the Marine Corps helicopter fleet will rely on some older aircraft longer than originally intended. This delayed full fielding of H-1 capability means that the warfighter may have to meet some missions through continued use of legacy aircraft.
Officials from acquisition programs for the 19 weapon systems we reviewed stated that their programs did not develop or implement sequestration-specific mitigation processes to manage fiscal year 2013 sequestration effects, but instead relied on internal prioritization processes and existing funding flexibilities. Prior to DOD finalizing sequestration percentage reductions, these program officials relied on existing priority-setting processes to determine what requirements necessitated immediate funding and what could be delayed. For a majority of the systems we reviewed, program officials stated the process was aimed at avoiding production breaks, procurement reductions, schedule effects, or delays in fielding capabilities. Prior to the finalization of the actual sequestration percentage, according to some program officials, programs were executing drills on a range of potential percentage cuts (e.g., 10 to 30 percent) to determine what priorities would be funded, deferred, or eliminated and the overall effect those decisions would have on the program. According to multiple program officials for the weapon systems we reviewed, this re-prioritization of requirements allowed officials to consider how best to prepare to manage the funding reductions to their programs. DOD and the services also utilized available budgetary processes to manage fiscal year 2013 sequestration reductions to the RDT&E and procurement accounts. Management actions included the use of transfers or reprogrammings, the use of fiscal year 2013 funds, and future year budget requests. For example, according to Navy officials, they used a combination of a $127 million reprogramming in fiscal year 2013 and an additional $227 million of appropriated funds provided in fiscal year 2014 to directly counter sequestration’s effects on the Virginia Class submarine.
In fiscal year 2013, the Navy’s DDG 1000 also received a reprogramming in the amount of $70.3 million, which helped address funding shortfalls caused by the fiscal year 2013 sequestration. However, program officials reported that some effects, such as the delayed exercising of a contract option, had already been realized. According to program officials, the Air Force’s KC-46 managed its $143 million sequestration reduction by utilizing existing fiscal year 2013 funds. Similarly, SBIRS High officials stated that they requested additional funds in the fiscal year 2015 Presidential Budget to help offset fiscal year 2013 sequestration reductions. While the acquisition programs for the weapon systems we reviewed may have received additional RDT&E and procurement funding through reprogramming, transfers, or other means, the provision of those funds might not have entirely eliminated effects to those programs. For example, according to officials, the subsequent reprogramming alleviated but did not entirely eliminate an effect to SBIRS High. As noted above, according to program officials, the Air Force procured fewer SBIRS High ground vehicles than originally planned. However, program officials also expected a $44.2 million RDT&E sequestration reduction that would have forced delays to the completion of the ground element, which was considered unacceptable. Air Force officials stated that to address this concern they took funds from other programs to provide SBIRS High with additional funding. These officials said that doing so restored funding to the cost estimate necessary to preserve the ground vehicle completion schedule. According to officials, reprogramming was also used to restore sequestered funding for operational capability of a parabolic dish subsystem antenna for the program’s sustainment lab.
These officials explained that the absence of the antenna would have reduced operational capability due to the use of one operational vehicle to conduct any modification testing. In fiscal year 2013, the procurement accounts for the Air Force, Army, and Navy had approximately $2.0 billion of fiscal year 2013 funds transferred out from their procurement lines and realigned for other military needs. For RDT&E accounts across the services, $345.6 million of fiscal year 2013 funds was transferred out from their RDT&E lines and realigned for other military needs, according to budget documentation. The movement of those funds could have been in response to sequestration or for other reasons. Irrespective of the services’ motives for reprogramming the funds, the programs that received the funds could then apply them to the areas of the program that were affected by the fiscal year 2013 sequestration. Based on our analysis, by the end of fiscal year 2013, acquisition programs for 3 of the 19 weapon systems we reviewed had additional funds transferred or reprogrammed into them (Virginia class submarine, DDG 1000, and F-35). By contrast, programs for 3 other weapon systems had their funding reduced as a result of transfers or reprogrammings (AH-64E Apache, F-15, and F-22). Acquisition programs for the remaining 13 weapon systems neither received nor forfeited funds as a result of transfers or reprogrammings. While the specific reasons for these transfers or reprogrammings for the AH-64E Apache, F-15, and F-22 were not transparent, we could determine through budget analysis that, as noted above, the Navy’s Virginia class submarine and DDG 1000 received transferred or reprogrammed funds to mitigate shortfalls created by the fiscal year 2013 sequestration.
Also, according to F-15 officials, the program’s fiscal year 2013 RDT&E and procurement funding was decreased by transfers of $10 million and $14 million, respectively, to provide funding for critical readiness shortfalls resulting from sequestration, specifically to pay for Air Force Operation and Maintenance shortfalls. Sequestration reductions to the military construction accounts in fiscal year 2013 totaled about $821 million, which represented a reduction of about 4 percent of the sequestrable base. Sequestration reduced budgetary resources within the military construction accounts of four military service components—Army Reserve, Army National Guard, active Navy, and Navy Reserve—and the defense-wide military construction account. By contrast, because the military construction accounts of the active Army, the active Air Force, Air Force Reserve, and Air National Guard had appropriated amounts significantly lower than the baseline set by the Office of Management and Budget, no sequestration reductions were made for those four accounts. DOD officials told us that the fiscal year 2013 sequestration reductions were applied evenly at the project level for major military construction projects and at the budget activity level for other items like minor construction and planning and design within each of the affected military construction accounts. Accordingly, each applicable service component and the defense-wide account were given a fixed percentage by which the funds for each project and budget activity in their military construction accounts would be reduced. As shown in figure 21, about 74 percent ($604 million) of the $821 million fiscal year 2013 sequestration reduction was applied to major military construction projects. The remaining balance of sequestration reductions was applied to line items such as minor construction and planning and design.
Of the $604 million reduction applied to major military construction projects, 58 percent applied to defense-wide projects and 42 percent to military service component projects. The five military construction accounts that experienced sequestration reductions had a total of 1,385 major military construction projects among them. The reductions to individual military construction projects varied by account and ranged between 3 percent and 8 percent. At these percentage reductions, the reductions to individual major military construction projects ranged between approximately $1,000 and $23 million. Figure 22 shows the distribution of reductions across major military construction projects subject to sequestration. More than 50 percent of the 1,385 major military construction projects subject to sequestration had reductions of less than $100,000. Prior to implementing the sequestration reductions in March 2013, the Office of the Under Secretary of Defense (Comptroller) directed that, among other things, the scope of military construction projects should not be reduced and projects should not be deferred or cancelled due to sequestration. Consistent with this direction, officials from all four of the service components and the defense-wide account that had military construction projects subject to sequestration reductions told us that no projects were reduced in scope, deferred, or cancelled. While DOD and service component officials reported no cancellations, deferrals, or reductions in scope for their military construction projects, some component officials attributed delays in awarding contracts for fiscal year 2013 construction projects to sequestration, among other things. DOD and service officials said that planning for and implementing the sequestration reductions in fiscal year 2013 required significant staff labor, leaving them less time to prepare requests for proposals and review bid submissions for projects.
Service component officials told us that sequestration contributed to an increased number of contracts for projects that were not awarded in the fiscal year in which they were funded; officials told us it is a best practice to award all contracts in the same fiscal year in which the project is funded. For instance, the Navy reported that 33 of its 54 funded active component projects were not awarded contracts in fiscal year 2013. By comparison, in fiscal year 2012, 17 of the active Navy’s 57 funded projects were not awarded contracts and were instead awarded in the next fiscal year. Some service component officials told us that delays in awarding contracts, or awarding fewer contracts for projects than planned, could lead to delays in project completion and increased costs to the projects affected, but they were unable to quantify the longer-term effects on time frames or costs. Service component and defense-wide officials told us that using available funding flexibilities to reprogram funds helped to mitigate the effect of fiscal year 2013 sequestration reductions, but they also stated that bid savings used as a source of reprogrammings may be unavailable to mitigate future budget reductions. In an effort to minimize the effect of sequestration on military construction projects, in May 2013 the Office of the Under Secretary of Defense (Comptroller) provided verbal direction that available bid savings should be reprogrammed to the extent possible for projects requiring additional funds. Further, according to DOD’s verbal guidance, each construction project should be assessed to determine if it could absorb a sequestration reduction, could be completed with below threshold reprogrammings, or would require prior congressional approval for above threshold reprogrammings.
Our analysis of DOD’s reprogramming data for the military construction accounts showed, in line with the May 2013 direction, that the source for these reprogramming actions consisted primarily of available bid savings from projects appropriated in fiscal years prior to 2013. DOD and service officials stated that they were able to absorb the effect of sequestration reductions by executing reprogrammings of bid savings from prior year projects. Further, we found that the use of bid savings was more frequent in fiscal year 2013 than in fiscal years 2009 through 2012. For example, in fiscal year 2013, the Navy had 35 military construction projects that required prior approval reprogrammings because of sequestration reductions, whereas in fiscal years 2009 through 2012 the Navy had only 11 total prior approval reprogrammings. DOD and service officials partly attributed this increase in the use of reprogrammings to sequestration. Bid savings constituted the primary source of funding for reprogrammings in fiscal year 2013. However, DOD and service officials told us that they expect bid savings to accrue at a diminished rate in the future, which could affect their ability to mitigate future budget constraints through this means. Based on our review of DOD data, DOD and the services accrued about $2.4 billion in bid savings in fiscal year 2009, but in fiscal year 2014 they accrued only $240 million. Some service officials attributed the decline to, among other things, a less favorable construction market, and they expect the downward trend in bid savings to continue because of market conditions.
In addition, based on our review of DOD data, though the total bid savings accumulated between fiscal year 2009 and fiscal year 2014 was about $8.1 billion, more than $7.3 billion was used for a variety of purposes, including offsetting the fiscal year 2013 sequestration reduction of about $821 million, as well as rescissions, reductions and other expenses such as project cost overruns. According to DOD data and officials, this has reduced DOD’s accumulated bid savings and left about $790 million that DOD plans to use for other known expenses. As a result, DOD and service officials told us they would likely be unable to absorb another sequestration reduction of equal or greater size on their military construction accounts, and without the ability to use bid savings to offset future reductions, they would have to defer, cancel, or reduce the scope of projects. The joint explanatory statement accompanying the National Defense Authorization Act for Fiscal Year 2014 included a provision that we review the effects of the fiscal year 2013 sequestration. Further, the House Committee on Armed Services requested that we review the implementation and effects of 2013 sequestration on DOD. This report examines (1) how DOD, including the military services, allocated fiscal year 2013 sequestration reductions, (2) what effects, if any, DOD has identified from the fiscal year 2013 sequestration on selected DOD programs, services, and military readiness, and (3) the extent to which DOD took actions to mitigate the effects of the fiscal year 2013 sequestration. While funding requested as part of DOD’s base budget supports the normal, day-to-day operations of the department, DOD also receives additional funds, referred to as overseas contingency operations appropriations, to pay for incremental costs that have resulted from the war in Afghanistan and other contingency operations. Department of Defense Financial Management Regulation 7000.14-R, Vol. 12, Ch. 
23 (September 2007), defines incremental costs as costs that would not have been incurred had the contingency operation not been supported. The following sections describe our approach in selecting these case study areas to address each of our objectives. We selected a non-probability sample of five case studies of expenses or investments within DOD to review in detail. In selecting the case studies, we sought to encompass a significant share of the $37.2 billion in DOD’s discretionary resources ordered for sequestration on March 1, 2013, as well as the programs, projects, and activities with the largest expected effects from sequestration in terms of factors such as the amount of sequestration reduction applied and the relationship to military readiness. Based on these criteria, which are discussed in more detail below, we selected the following case studies to represent each type of nonexempt appropriation: 1. operation and maintenance accounts: service components’ operational tempo and training; 2. operation and maintenance accounts: service components’ maintenance and weapons system support; 3. operation and maintenance accounts: service components’ base operating support; 4. RDT&E and procurement accounts: a selection of defense-wide and military service acquisition programs for weapon systems; and 5. military construction accounts: defense-wide and services’ major military construction projects. Overall, these five case studies accounted for roughly $12.8 billion, or about 34 percent, of the total sequestration ordered for DOD’s discretionary budget resources on March 1, 2013, including nearly $9.3 billion for operation and maintenance reductions, $2.9 billion for RDT&E and procurement, and $604 million for military construction. The case study findings provide illustrative examples of sequestration effects and mitigation strategies across the department.
While the findings of the five case studies cannot be generalized to all DOD programs, projects, and activities, they reflect a wide range of perspectives across the department. We used a multistep process to select the programs, projects, or activities within each case study for our review. To select the operation and maintenance case studies, we first grouped the service components’ 258 unclassified operation and maintenance account subactivity groups into 11 broad categories that we identified based on our review of DOD’s operation and maintenance budget request overview for fiscal years 2013 and 2015, as well as the service components’ budget justification materials for these years, which describe the activities and functions by each subactivity group (see table 4). By categorizing the subactivity groups, we narrowed our case study options while also ensuring those options would cover multiple, related subactivity groups and a larger share of the sequestration reductions to the operation and maintenance accounts. To ensure that the budget categories and the placement of subactivity groups therein were valid, we shared our approach with officials from the Office of the Under Secretary of Defense (Comptroller), who generally agreed with our categorization and made suggestions that we incorporated as appropriate. After categorizing the subactivity groups, we selected for our case studies 3 of the 11 operation and maintenance categories that were subject to large reductions across service components based on our analysis of DOD’s budget execution data for the fourth quarter of fiscal year 2013, and which, according to our review of DOD readiness reports and budget documents, were most closely linked with military readiness. The other 8 categories were subject to relatively smaller reductions or were less directly related to readiness. 
The 3 selected categories—operational tempo and training, maintenance and weapon systems support, and base operating support—accounted for about $9.3 billion, or roughly 66 percent, of the nearly $14.0 billion fiscal year 2013 sequestration reduction to the service components’ operation and maintenance accounts. Appendix III presents a list of the service components’ unclassified operation and maintenance subactivity groups, grouped within the 3 categories we selected for our operation and maintenance case studies. For the RDT&E and procurement case study, we reviewed data from a June 2013 DOD report on sequestration reductions applied to weapon systems or other acquisition programs. We also analyzed 2013 and 2014 budget data and DOD’s 2015 budget documents to identify weapon systems that reported experiencing the greatest sequestration-related effects. Based on this analysis, we chose the following 19 weapon systems managed by the Army, Navy, Air Force, or through a joint acquisition approach: Army (5 systems): AH-64E Apache Helicopter, MQ-1C Gray Eagle Unmanned Aircraft System, Paladin Integrated Management, Warfighter Information Network-Tactical (WIN-T) Increment 3, OH-58D/OH-58F Kiowa Warrior Helicopter. Navy (8 systems): CH-53K King Stallion Helicopter, Littoral Combat Ship, DDG 1000 Zumwalt Class Destroyer, P-8A Poseidon Multi-Mission Maritime Aircraft, Virginia-Class Submarine, H-1 Helicopter, E-2D Advanced Hawkeye Aircraft, AIM-9X Block II Sidewinder Missile. Air Force (5 systems): KC-46A Tanker Aircraft, F-22 Raptor Aircraft, Space Based Infrared System High, Global Positioning System III, F-15 Aircraft. Joint Program, Air Force and Navy (1 system): F-35 Joint Strike Fighter Aircraft. The 19 selected weapon systems accounted for about $2.9 billion of the approximate $15.8 billion sequestration reduction to DOD’s RDT&E and procurement accounts.
For the military construction case study, we reviewed budget data presented in the June 2013 DOD report and identified five military construction accounts to which sequestration reductions were applied in fiscal year 2013. These accounts include those for the defense-wide agencies, the Army National Guard, Army Reserve, Navy, and Navy Reserve. We then included within the scope of this case study all major military construction projects funded by those sequestered accounts, as reported in DOD’s June 2013 report. To determine how DOD and the military services allocated fiscal year 2013 sequestration reductions, we reviewed data from the June 2013 DOD report to identify reductions applied to the operation and maintenance, RDT&E, procurement, and military construction accounts, including how reductions were applied within those accounts to fiscal year 2013 appropriated funds and to prior year unobligated balances still available from multi-year appropriations. For the operation and maintenance accounts, we also reviewed data from DOD’s operation and maintenance budget execution report for the fourth quarter of fiscal year 2013 to identify how reductions were allocated among the service components’ subactivity groups. We also utilized the 11 operation and maintenance categories that we developed to determine the amount of reductions applied within each category. Although the sequestrable base for each of the 11 operation and maintenance case study categories included both fiscal year 2013 enacted funding and any prior year unobligated balances, we did not include prior year unobligated balances in our analysis of sequestration data within those categories because operation and maintenance funding is generally available for obligation for one year only, and any unobligated balances within operation and maintenance accounts are relatively small.
We assessed the reliability of sequestration data from the June 2013 DOD report and the DOD report on operation and maintenance budget execution by administering questionnaires and interviewing relevant personnel responsible for maintaining and overseeing the systems that supplied the data for these reports. Through these questionnaires and interviews, we obtained information on the systems’ ability to record, track, and report on these data, as well as the quality control measures in place. We found the data on fiscal year 2013 sequestration reductions to be sufficiently reliable for the purposes of this review. To determine what effects, if any, DOD has identified from the fiscal year 2013 sequestration on selected DOD programs and functions and military readiness, we examined relevant sequestration implementation guidance issued by DOD and the service components. For each of the three operation and maintenance case studies, we also reviewed DOD’s fiscal year 2013 budget execution data, and for each of the five case studies we reviewed documentation of effects and mitigation strategies. We collected and reviewed available program-, project-, or activity-level data, other summaries, or reports that documented and quantified sequestration effects. These data and information included, for example, changes in aircraft flying hours and deployment schedules, deferment or cancellation of training or exercises, delays in the induction of equipment into depots or deferral of maintenance, project backlogs, contract delays, and reductions in the number of civilian personnel or contractors available to perform work, among other things. When examples of increased spending are discussed in this report, we provided the gross rather than net changes as reported to us by DOD officials in interviews and related documentation.
In cases where the data on sequestration effects were provided through interviews with relevant officials, we corroborated them where possible through other sources, including service documents and reports. Where possible, we identified differences between planned and executed activities and interviewed relevant service component and acquisition program officials about the extent to which these differences could be tied to the effects of fiscal year 2013 sequestration reductions. In addition, we reviewed congressional testimonies by senior DOD and military service officials, briefings, and commanders’ assessments prepared by the military services that presented evidence of any sequestration-related effects on the services’ programs and functions and overall military readiness. To determine the extent to which DOD took actions to mitigate the effect of the fiscal year 2013 sequestration, we reviewed information gathered within each of the five case study areas regarding mitigation efforts reported by DOD and the services, which included interviews with officials and other documentation. Further, we gathered documentation and interviewed DOD and service officials to identify any efforts taken to gather and apply lessons learned from their experiences implementing sequestration in fiscal year 2013 in response to a circular that was recently revised by the Office of Management and Budget. We also analyzed data on the use of funding flexibilities and military construction project bid savings to understand how these options were used to address the effect of sequestration-related reductions. Specifically, we reviewed data on the services’ use of reprogrammings for fiscal years 2009 through 2014 to determine any trends, as well as interviewed DOD and service officials to discuss how reprogrammings were used to address the effects of sequestration in fiscal year 2013. 
For these purposes, we limited the scope of our data analysis to transfers and reprogrammings that require prior approval from congressional committees before they can be implemented. We obtained the data on prior approval reprogrammings from reprogramming requests maintained on the website of the Office of the Under Secretary of Defense (Comptroller). We also obtained spreadsheets with data on reprogramming from that office and compared a nongeneralizable sample of randomly selected data points between the two data sources to assess the reliability of the data. We found these data to be reliable for the purposes of reporting on DOD’s reprogramming actions. For determining sequestration mitigation efforts within the military construction case study, we also reviewed and analyzed data from the Military Construction FY 2014 Fourth Quarter Bid Savings and Unobligated Balances Update and the Military Construction FY 2015 First Quarter Bid Savings and Unobligated Balances Update to assess changes in the amount of bid savings available to DOD components between fiscal years 2009 through 2014. We shared these data with DOD component officials and discussed with them the extent to which bid savings helped offset sequestration reductions to military construction projects in fiscal year 2013, as well as the expected effect of the resulting changes in DOD’s accumulation of bid savings on DOD’s ability to respond to other current or future needs. In addition, we interviewed DOD officials responsible for updating and maintaining systems that track the bid savings data about the systems’ ability to record, track, and report on these data, and the quality control measures in place to ensure that the data are reliable for reporting purposes. We found the bid savings data to be sufficiently reliable to demonstrate trends in the accumulated amount of bid savings among DOD’s military construction accounts. 
To further our understanding of DOD’s allocation of fiscal year 2013 sequestration reductions and the extent to which those reductions affected selected case study areas or were mitigated by DOD efforts, we interviewed officials, or where appropriate, obtained documentation at the organizations listed below:

Office of the Secretary of Defense
Office of the Under Secretary of Defense (Comptroller)
Office of Cost Assessment and Program Evaluation
Office of the Deputy Chief Management Officer
Manpower and Personnel Directorate
Force Structure, Resources, and Assessment Directorate
F-35 Joint Strike Fighter program

Department of the Air Force
Office of the Assistant Secretary of the Air Force, Financial
Office of the Assistant Secretary of the Air Force, Installations
Air Force Civil Engineer Center
Headquarters Air Force, Office of the Deputy Chief of Staff, Strategic Plans and Programs (A8)
Headquarters Air Force, Studies and Analyses, Assessment, and Lessons Learned (A9)
Headquarters Air Force, Office of the Deputy Chief of Staff, Operations, Plans and Requirements, Operations (A3O)
Headquarters Air Force, Office of the Deputy Chief of Staff, Logistics, Installations and Mission Support, Security Forces (A47S)
Headquarters Air Force, Office of the Deputy Chief of Staff, Logistics, Installations and Mission Support, Logistics (A4L)
Air Combat Command
Air Education and Training Command
Air Force Materiel Command
Air Force Materiel Command, Air Force Life Cycle Management
Air Force Space Command, Space and Missile Systems Center
Air Force Reserve Command

Office of the Assistant Secretary of the Army for Financial Management and Comptroller, Army Budget
Headquarters, Department of the Army, G-3/5/7 Operations and Plans
Headquarters, Department of the Army, G-4 Office of the Deputy Chief of Staff for Logistics
Office of the Assistant Chief of Staff for Installation Management
U.S. Army Corps of Engineers
U.S. Army Installation Management Command
U.S. Army Forces Command
U.S. Army Materiel Command
U.S. Army Training and Doctrine Command, U.S. Army Combined Arms Center, Center for Army Lessons Learned
U.S. Army Aviation and Missile Life Cycle Management Command
U.S. Army Tank-Automotive and Armaments Command Life Cycle
U.S. Army Communications-Electronics Command

Assistant Secretary of the Navy, Financial Management and
Office of the Deputy Chief of Naval Operations (Fleet Readiness and Logistics), Fleet Readiness Division (N43)
Office of the Deputy Chief of Naval Operations, Warfare Systems (N9)
U.S. Fleet Forces Command
Naval Air Systems Command
Naval Sea Systems Command
Naval Supply Systems Command
Space and Naval Warfare Systems Command
Naval Facilities Engineering Command
Commander, Navy Installations Command
Navy Warfare Development Command
Office of the Chief of Navy Reserve
Navy Reserve Forces Command
Headquarters Marine Corps, Programs and Resources
Headquarters Marine Corps, Plans, Policies, and Operations
Headquarters Marine Corps, Installations and Logistics, Logistics
U.S. Marine Corps Forces Command
U.S. Marine Corps Logistics Command
U.S. Marine Corps Installations Command
U.S. Marine Corps Training and Education Command, Marine Corps
U.S. Marine Corps Forces Reserve

We conducted this performance audit from April 2014 to May 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. 
Table 5 presents a list of the service components’ unclassified operation and maintenance subactivity groups within the base operating support, operational tempo and training, and maintenance and weapon systems support budget categories we selected for our operation and maintenance case studies. See appendix II for additional details regarding our selection of case studies and additional information regarding our scope and methodology.

Johana R. Ayers, (202) 512-5741 or [email protected]; Michael J. Sullivan, (202) 512-4841 or [email protected].

In addition to the contacts named above, Matt Ullengren (Assistant Director); Bruce Thomas (Assistant Director); Clarine Allen; Natalya Barden; Melissa Blanco; Bruce Brown; Pat Donahue; Marcus Ferguson; Dayna Foster; Amber Gray; Jeffrey Harner; Sameena Ismailjee; Amie Lesser; Jonathan Mulcare; Bonita Oden; Meghan Perez; Carol Petersen; Steve Pruitt; Daniel Purdy; Kiran Sreepada; Shana Wallace; Erik Wilkins-McKee; and Michael Willems made key contributions to this report.

2013 Government Shutdown: Three Departments Reported Varying Degrees of Impacts on Operations, Grants, and Contracts. GAO-15-86. Washington, D.C.: October 15, 2014.
Sequestration: Comprehensive and Updated Cost Savings Would Better Inform DOD Decision Makers If Future Civilian Furloughs Occur. GAO-14-529. Washington, D.C.: June 17, 2014.
2013 Sequestration: Selected Federal Agencies Reduced Some Services and Investments, While Taking Short-Term Actions to Mitigate Effects. GAO-14-452. Washington, D.C.: May 28, 2014.
2013 Sequestration: Agencies Reduced Some Services and Investments, While Taking Certain Actions to Mitigate Effects. GAO-14-244. Washington, D.C.: March 6, 2014.
Sequestration: Observations on the Department of Defense’s Approach in Fiscal Year 2013. GAO-14-177R. Washington, D.C.: November 7, 2013.
March 1 Joint Committee Sequestration for Fiscal Year 2013. B-324723. Washington, D.C.: July 31, 2013. 
Agency Operations: Agencies Must Continue to Comply with Fiscal Laws Despite the Possibility of Sequestration. GAO-12-675T. Washington, D.C.: April 25, 2012.
|
In March 2013, the President ordered across-the-board spending reductions, known as sequestration, for all federal agencies and departments. As a result, DOD's discretionary resources were reduced by about $37.2 billion over the remainder of FY 2013. The joint explanatory statement accompanying the National Defense Authorization Act for Fiscal Year 2014 included a provision for GAO to review DOD's implementation and effects of the FY 2013 sequestration. This report examines, for the FY 2013 sequestration, (1) how DOD allocated reductions, (2) what effects DOD has identified on selected DOD programs, services, and military readiness, and (3) the extent to which DOD took actions to mitigate the effects of sequestration. GAO analyzed DOD's FY 2013 budget and execution data and reviewed a nongeneralizable sample of five types of expenses or investments—such as maintenance, and a selection of weapon systems and military construction projects—based on the magnitude of reductions and possible relation to readiness. For each area, GAO reviewed data on planned versus actual spending and reports on actions taken and interviewed DOD and service officials. To implement sequestration in fiscal year (FY) 2013, the Department of Defense's (DOD) discretionary resources were reduced in approximate proportion to the size of its appropriation accounts, with the largest reductions to DOD's largest accounts, operation and maintenance. The military services' accounts absorbed about 76 percent of DOD's reduction relative to other defense accounts. In contrast to other accounts, such as procurement, DOD and the services had some flexibility to allocate varying reductions to functions and activities funded by the operation and maintenance accounts. To implement sequestration reductions, DOD took near-term actions to preserve key programs and functions and reduced spending on lower priorities. 
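The pro-rata mechanics described above can be illustrated with a short sketch. The account names and dollar figures below are hypothetical placeholders, not DOD's actual fiscal year 2013 amounts; only the approximately proportional nature of the cut is taken from the report.

```python
# Minimal sketch of an across-the-board (pro-rata) sequestration reduction.
# All balances are illustrative, not actual DOD appropriation figures.

def apply_sequestration(accounts, total_cut):
    """Reduce each account in proportion to its share of total resources."""
    total = sum(accounts.values())
    return {name: round(amount - total_cut * amount / total, 2)
            for name, amount in accounts.items()}

# Hypothetical balances in billions of dollars.
accounts = {
    "operation_and_maintenance": 200.0,
    "procurement": 100.0,
    "rdte": 70.0,
    "military_construction": 30.0,
}

# A $37.2 billion total cut, as in FY 2013, spread proportionally:
# the largest account absorbs the largest dollar reduction.
reduced = apply_sequestration(accounts, total_cut=37.2)
```

Because each account loses the same percentage, the largest account (here, operation and maintenance) absorbs the largest dollar amount of the reduction, mirroring the pattern the report describes.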
Many effects that DOD officials attributed to the reductions were interdependent, with some difficult to quantify and assess. Effects DOD identified generally related to: Costs and spending: Some actions increased costs or deferred spending to subsequent years (e.g., procurement delays to the Navy's P-8A aircraft program resulted in an estimated $56.7 million life-cycle cost increase). Time frames or cancellations: Delayed or cancelled activities affected some plans to improve military readiness (e.g., the Air Force cancelled or reduced participation in most of its planned large-scale FY 2013 training events, and expects delayed achievement of longer-term readiness goals). Availability of forces and equipment: Some actions decreased the forces and equipment ready for contingencies (e.g., the Navy cancelled or delayed some planned ship deployments, which resulted in a 10 percent decrease in its deployed forces worldwide). DOD and the services relied on existing processes and flexibilities to mitigate the effect of sequestration in FY 2013, but did not comprehensively document or assess best practices or lessons learned from their experiences. For example, the services used authorities to reprogram and transfer funds, which allowed them to reverse some initial actions taken to reduce spending. GAO identified some DOD efforts to document lessons learned or best practices related to the implementation of the FY 2013 sequestration, but found them to be limited in scope and not widely shared. Without documenting and assessing lessons learned and best practices, such as strategies for evaluating interdependence of funding sources and programs, and leveraging existing mechanisms to share this information, DOD is missing an opportunity to gain institutional knowledge that would facilitate future decision making about budgetary reductions. 
GAO recommends that DOD document and assess lessons learned and best practices from implementing sequestration, as well as leverage existing mechanisms to share these lessons within the services and across the department. DOD concurred with GAO's recommendations. For more information, contact Johana R. Ayers at (202) 512-5741 or [email protected], or Michael J. Sullivan at (202) 512-4841 or [email protected].
|
SBA’s Disaster Loan Program, which has been a part of the agency since its inception in 1953, is the primary federal program for funding long-term recovery assistance. SBA’s Office of Disaster Assistance (ODA) responds to disasters and administers the program––which provides affordable, timely and accessible financial assistance following a disaster to homeowners, renters, businesses of all sizes, and nonprofit organizations. SBA does not provide disaster grants; rather, this financial assistance generally is available in the form of direct, low-interest loans and is the only SBA program not limited to small businesses. A Presidential disaster declaration puts into motion long-term federal recovery programs, such as the Disaster Loan Program, but SBA is not a “first responder” after a disaster. Rather, local government emergency services assume the role of first responders, with help from state and volunteer agencies. For catastrophic disasters, and if a governor requests it, federal resources can be mobilized through the U.S. Department of Homeland Security’s Federal Emergency Management Agency (FEMA) for search and rescue, electrical power, food, water, shelter, and other basic human needs. SBA typically responds to a disaster within 3 days by sending ODA field staff to the affected area to begin providing public information about SBA’s services. Once a disaster is declared, SBA, by law, is authorized to make two types of disaster loans: (1) physical disaster loans, and (2) economic injury disaster loans. Physical disaster loans are for the permanent rebuilding and replacement of uninsured or underinsured disaster-damaged property. That is, SBA provides loans to cover repair costs that FEMA or other insurance has not already fully compensated or covered. The loans are intended for repair or replacement of the disaster victim’s damaged property to its pre-disaster condition. Interest rates are periodically adjusted and SBA calculates rates after each disaster. 
By law, the interest rates depend on whether each applicant has credit available elsewhere. If SBA determines the applicant is unable to borrow from non-government sources or does not have sufficient funds, then the applicant is considered to not have credit available elsewhere. SBA offers two levels of interest rates, a low rate for applicants who have no credit available elsewhere and a higher rate for applicants with credit available elsewhere. Economic injury disaster loans provide small businesses, including agricultural cooperatives and private nonprofit organizations, with necessary working capital until normal operations can resume after a disaster. Loan funds are intended to cover operating expenses small businesses could have paid had the disaster not occurred. The interest rates on an economic injury disaster loan cannot exceed 4 percent (see table 1). Immediately following a disaster, SBA public information officers are responsible for providing information and outreach to victims about SBA’s Disaster Loan Program and SBA customer service representatives are available to help home and business owners complete loan applications. However, certain restrictions and guidelines apply to SBA’s Disaster Loan Program. For example, individuals must first register with FEMA and obtain a registration number before SBA can issue an application. SBA has separate applications for home and business loans and offers these applications in both paper and electronic form. Furthermore, SBA only will make a disaster loan if there is reasonable expectation that the loan can be repaid—loan applicants must have a credit history acceptable to SBA and demonstrate their ability to repay all outstanding loans. They must also apply within certain time frames. Typically, loan applications for physical disaster loans must be received by SBA within 60 days from the date of the disaster declaration, while applications for economic injury disaster loans must be received within 9 months. 
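The interest rate rules described above amount to a small decision procedure. The sketch below is illustrative: the two physical-loan rates are hypothetical placeholders (SBA recalculates rates after each disaster), while the 4 percent ceiling on economic injury disaster loans is the statutory cap the text cites.

```python
# Sketch of SBA disaster-loan interest rate selection, per the rules above.
# The low/high physical-loan rates are hypothetical placeholders; SBA
# calculates actual rates after each disaster. The 4% ceiling on economic
# injury disaster loans is statutory.

EIDL_RATE_CAP = 4.0  # percent; economic injury loan rates cannot exceed this

def select_rate(loan_type, credit_available_elsewhere,
                low_rate=2.5, high_rate=5.0):
    """Return the applicable interest rate (percent) for an applicant.

    Applicants determined to have no credit available elsewhere receive
    the lower rate; applicants able to borrow from non-government sources
    receive the higher rate.
    """
    rate = high_rate if credit_available_elsewhere else low_rate
    if loan_type == "economic_injury":
        rate = min(rate, EIDL_RATE_CAP)  # statutory 4% ceiling
    return rate
```

Note that the "credit available elsewhere" test drives the tier, and the statutory cap applies only to economic injury loans; both points come directly from the text.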
In addition, SBA generally requires collateral for all loans greater than $14,000, recently increased from $10,000 pursuant to section 12065 of the Act. Once SBA receives a completed loan application, staff in its Loan Processing and Disbursement Center review eligibility, check credit, and calculate repayment ability. Applicants declined at this stage always receive notification in writing from SBA. The letter provides reasons for the declination and advises the applicant of its reconsideration rights. Applications that are not declined are assigned to an SBA loss verifier, who is responsible for contacting each applicant to make an appointment to verify the physical losses and estimate a dollar value for damaged real estate and personal property. Next, staff underwrite the application and review in greater depth the applicant’s credit history, repayment ability, and eligibility. Unless the application is withdrawn, SBA processes each loan application to an approved or declined status. SBA notifies approved applicants and makes arrangements to execute the loan closing. Before SBA can make any disbursements, the borrower must execute loan closing documents and return them to SBA within 60 days. Upon receipt of the closing documents, SBA issues the first disbursement of the unsecured portion of the loan––up to $14,000 for physical disaster loans. After SBA has verified that lien requirements on collateral property have been met, it can disburse the additional secured portion of the physical disaster loan based on need or construction progress. Because no physical repairs are associated with economic injury disaster loans, SBA generally makes full disbursement for these loans once collateral and insurance requirements are met. SBA monitors all disbursements to ensure that loan funds are used in accordance with the loan authorization and agreement. 
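The two-stage disbursement described above (an unsecured portion up to $14,000 once closing documents are returned, then the secured remainder after lien requirements are verified) can be sketched as follows; the function and its structure are illustrative, not SBA's actual system.

```python
# Illustrative sketch of the physical disaster loan disbursement split:
# up to $14,000 may be disbursed unsecured after loan closing, and the
# secured remainder is released only after lien requirements on
# collateral have been verified.

UNSECURED_LIMIT = 14_000  # dollars; raised from $10,000 by the Act

def disbursement_plan(approved_amount, liens_verified):
    """Return (first_disbursement, secured_amount_available)."""
    first = min(approved_amount, UNSECURED_LIMIT)  # unsecured portion
    secured = approved_amount - first              # collateralized portion
    available_secured = secured if liens_verified else 0
    return first, available_secured
```

For example, a $50,000 physical disaster loan yields a $14,000 initial disbursement, with the remaining $36,000 held until collateral liens are verified.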
ODA and the newly created Executive Office of Disaster Strategic Planning and Operations (EODSPO), both headquartered in Washington, D.C., are responsible for responding to disasters, coordinating with other disaster recovery entities, and administering the agency’s Disaster Loan Program. ODA has four field offices, which are the Customer Service Center located in Buffalo, New York; two disaster Field Operations Centers located in Atlanta, Georgia, and Sacramento, California; and a centralized Loan Processing and Disbursement Center located in Fort Worth, Texas. ODA also has a Personnel and Administrative Services Center and a DCMS Operations Center in Herndon, Virginia. Organizationally, the associate administrator of ODA reports directly to the EODSPO chief, and the EODSPO chief reports to the SBA Administrator. In addition, ODA can utilize SBA district offices, SBDCs, and SCORE (formerly called Service Corps of Retired Executives) for local marketing and outreach efforts. Among the lessons learned from the 2005 Gulf Coast hurricanes was the need for a more organized, formal, and pre-planned approach for providing SBA services in response to a disaster. Members of Congress found that it was necessary for SBA to develop and implement a written, comprehensive disaster plan. Congress acted to signify the importance of an agency-wide plan that provided guidance and procedures governing preparations for, and response to, declarations of disasters of various dimensions, including catastrophic disasters, by including several related requirements in the Act. Thus, one section of the Act requires that SBA develop, implement, or maintain a comprehensive written disaster response plan and update the plan annually and following any major disaster when SBA declares eligibility for additional disaster assistance. Our prior work also revealed the need for SBA to conduct comprehensive disaster planning. 
For example, as we stated in our February 2007 report, SBA did not engage in or complete comprehensive disaster plans before the Gulf Coast hurricanes, and this limited logistical disaster planning likely contributed to the initial challenges the agency faced in responding. We recommended that SBA develop time frames for completing key elements of a disaster management plan and a long-term strategy for acquiring office space, and assess whether the use of disaster simulations or catastrophe models would enhance the disaster planning process. In August 2008, SBA provided information to us on how the agency had implemented our recommendation to use disaster simulations to enhance its disaster planning. Other GAO reports, reports by other investigative agencies, and disaster management experts long have stated that comprehensive planning can help organizations prepare for potential disasters and mitigate their effects. In the wake of the Gulf Coast hurricanes, SBA officials said that they recognized the importance of disaster planning—to improve planning, they created the agency’s first DRP and also conducted their first simulation. In creating the DRP, SBA acknowledged the need for a systematic approach to carry out the agency’s disaster assistance mission and ensure coordination, awareness, and support throughout the agency. The plan, which was issued on June 1, 2007, was designed to provide procedures to better handle future disasters of all sizes. Its major components––infrastructure, human capital, information technology, and communications and outreach––are designed to help ensure that necessary resources are available (including reserve corps, staff trained in disaster loan processing, office space, and information technology) and that SBA has established an enhanced approach for communicating with the public and coordinating with other disaster assistance groups. 
The Act comprises 26 provisions with substantive requirements for SBA, some with specific deadlines and some needing appropriations, and includes requirements that SBA must meet regarding disaster planning and response, disaster lending, and reporting. For instance, the Act includes provisions to improve SBA’s coordination with FEMA, require that the agency conduct biennial disaster simulations, create a comprehensive disaster response plan, and improve communication with the public when disaster assistance is made available. It includes requirements to improve ODA’s infrastructure, appoint an official to oversee the disaster planning and responsibilities of the agency, and establish reporting requirements for various reports to Congress. The Act also includes provisions to create new SBA disaster loan programs, such as the Immediate Disaster Assistance Program that would provide small dollar loans immediately following a disaster and the Expedited Disaster Assistance Loan Program that would provide expedited disaster assistance to businesses. The Act contains 9 provisions that establish deadlines for specific SBA actions that range from 30 days to 1 year after the Act’s enactment (see table 2). For example, the Act requires SBA to conduct a study of whether the standard operating procedures (SOP) for loans offered are consistent with the regulations for administering the Disaster Loan Program and report to Congress on the study findings within 180 days after the Act’s enactment. Additionally, the Act establishes multiple reporting requirements for SBA. One example of these reporting requirements is that SBA must submit an annual report to Congress on disaster assistance within 45 days after the end of each fiscal year. This annual report must include a report on the comprehensive disaster response plan, among other things. 
SBA has fully addressed requirements for 13 of 26 provisions of the Act, partially addressed 8, and took no action on 5 that are not applicable at this time. In addition, 9 of the 26 provisions are subject to deadlines and the agency has had limited success in meeting them. SBA officials told us the agency did not fully address requirements for some provisions because the agency has to make extensive changes to current programs or create new programs in order to comply with the Act’s requirements. SBA officials also told us that the agency needed time to pilot new programs, such as private disaster assistance programs, before making final decisions about implementation. Also, SBA has not issued its first annual report to Congress on disaster assistance, due November 2008, issued an annually updated DRP since its initial June 2007 plan, or addressed how it would market its Disaster Loan Program in different areas of the country and adapt likely scenarios for certain regions prone to disasters. Furthermore, the agency did not provide milestone dates for completing implementation of these requirements and, as a result, Congress does not have reliable information on the extent to which SBA has addressed the requirements and made improvements to its program. SBA has fully addressed 13 of the 26 provisions of the Act, partially addressed 8, and took no action on 5 that are not applicable at this time. For the 13 provisions SBA addressed, the agency’s actions included making improvements to the agency’s disaster loan planning and response; augmenting infrastructure, information technology, and staff; and improving disaster lending by increasing access to funds for loan applicants. For example, to improve the agency’s disaster loan planning and response, the agency conducted a study on the consistency between the Disaster Loan Program’s SOPs and regulations and reported its findings to Congress. 
SBA has also taken steps to improve its infrastructure, information technology, and staff by putting in place a secondary facility in Sacramento, California, to process loans during times when the main facility in Fort Worth, Texas, is unavailable and by making improvements to DCMS to track and follow up with applicants. Additionally, according to SBA officials, the agency increased DCMS’ capacity from 2,000 to more than 12,000 concurrent users and expanded its disaster reserve staff from about 300 to more than 2,000 individuals. Furthermore, the agency increased access to funds by making nonprofits eligible for economic injury disaster loans, increasing the loan amount that can be made without requiring collateral from $10,000 to $14,000, and increasing the maximum disaster loan amount from $1.5 million to $2 million. See table 3 for other requirements of the Act that SBA has addressed. Based on discussions with SBA officials and our review, 4 of the 26 provisions require no action by SBA at this time due to their discretionary nature. More specifically, 1 provision provides SBA the discretion to offer persons receiving disaster loans an option to defer repayment on their loans and another provision provides SBA discretionary authority to refinance Gulf Coast disaster loans. Two additional provisions can be triggered only if the Administrator determines a disaster is a catastrophic event and authorizes additional assistance. The statute states that the determination is only to be made for the most extraordinary and devastating events. Accordingly, SBA officials told us that the agency needs to take no action unless a disaster is declared a catastrophic event and the Administrator authorizes additional funding. 
Agency officials told us SBA is able to carry out the requirements of these two sections and stated that after the September 11, 2001, terrorist attacks in New York, the agency carried out one of the requirements by issuing regulations and permitting loans to small businesses located outside of the disaster area. Additionally, we found that at least 1 provision––the Small Business Bonding Threshold––requires no action at this time because it would require the agency to obtain additional appropriations. For example, the provision states that the SBA Administrator may carry out the requirements of the section only with amounts appropriated in advance specifically to carry out the requirements. Accordingly, SBA would need to have an appropriation for implementation of that provision. However, the American Recovery and Reinvestment Act of 2009 (ARRA) generally increased the maximum contract amount for an SBA bond guarantee to $5 million until September 2010. According to SBA’s Office of General Counsel, under ARRA, small business contracts up to $5 million are eligible for an SBA bond guarantee up to September 30, 2010. SBA partially addressed 8 provisions of the Act by taking some actions to implement the requirements. For example, 1 provision requires SBA to update the comprehensive DRP annually; while the agency originally issued a plan in June 2007 and agency officials have participated in leadership seminars to discuss revisions to the plan, SBA has failed to comply with the Act and issue an updated plan, as required by section 12075. Moreover, the existing plan does not include, nor is there separate information addressing, the regional marketing information the Act requires in section 12063. Additionally, at least 4 provisions require SBA to either create new programs or make changes to existing programs. 
Three of these 4 provisions require SBA to issue regulations within 1 year of the Act’s enactment, but the agency has established regulations only in draft form and has not issued any final regulations. For the 8 partially addressed provisions, our analysis was based on actions described by SBA officials (see table 4). According to agency officials, SBA did not fully address requirements for some provisions because the agency has to make extensive changes to current programs or create new programs to comply with the Act’s requirements, and it takes time to implement these types of changes. More specifically, according to agency officials, SBA has not completely addressed some provisions because: Sections 12062, 12083, 12084, 12085: These 4 provisions require SBA to issue regulations or make amendments to its SOPs that either establish new disaster programs or make changes to an existing program, but the agency said it takes time to develop and issue regulations and, in some cases, it is developing pilot programs before making decisions about regulations. SBA officials told us they have requested funding to carry out requirements for two of these––the Immediate and Expedited Disaster Assistance Programs––in the fiscal year 2010 President’s Budget. According to SBA, the funds will be used to implement pilot programs with private commercial lenders. SBA officials told us that such a pilot would be necessary to see how private lenders would administer the programs. Section 12066: Requires coordination between SBA and IRS to ensure tax records are shared quickly, and the two agencies intend to meet on an ongoing basis and update processes, as necessary. 
Section 12075: Requires the agency to issue an updated comprehensive DRP, and while the agency has drafted its updated plan, the draft may undergo additional changes after the agency holds its next Senior Leadership Seminar, in which it will conduct disaster simulation exercises––scheduled for June 29-30, 2009––and then agency officials must submit the updated plan to the new Administrator for review and approval. Section 12091: Establishes a new reporting requirement that SBA submit an annual report to Congress on disaster assistance each fiscal year, but SBA has not issued an annual report because the agency is awaiting input from the new Administrator. Furthermore, EODSPO staff are responsible for developing and submitting the annual report to Congress, but SBA officials told us the office was not fully staffed in November 2008 when the first annual report on disaster assistance was due to Congress. Specifically, the Act requires that SBA report annually on the total number of SBA disaster staff, major changes to the Disaster Loan Program (such as changes to technology or staff responsibilities), a description of the number and dollar amount of disaster loans made during the year, and SBA's plans for preparing and responding to possible future disasters. Additionally, we believe that SBA has partially addressed the provision in section 12063 mandating region-specific marketing and outreach. However, agency officials told us that their comprehensive DRP and Leadership Guide for Managing a Response to a Disaster include marketing and outreach components and satisfy the Act's requirement, and therefore, they do not believe a separate plan is necessary. While SBA believes that this requirement has been met, the DRP and Leadership Guide do not provide region-specific marketing information or have steps in place to ensure that the information is available to SBDCs––as required by the Act.
Specifically, the Act states the marketing and outreach plan must (1) encourage a proactive approach to disaster relief efforts; (2) make clear the services provided by SBA; (3) describe SBA's different disaster loan programs, how they are made available, and the eligibility requirements for each; (4) provide regional marketing information, focusing on disasters occurring in each region, and likely scenarios for disasters occurring in each region; and (5) ensure the marketing and outreach plan is available at SBDCs and on SBA's Web site. For example, lessons learned from the 2005 Gulf Coast hurricanes can provide a basis for developing marketing information for regions that may be prone to large-scale disasters affecting large geographic areas. Based on our review, the DRP and Leadership Guide do not include regional marketing information, such as lessons learned from prior disasters, and it is unclear how SBA ensures availability of the information to SBDCs and the public through the agency's Web site. Also, as we will describe later, officials with whom we spoke during our site visits to Iowa and Texas noted the importance of regional marketing and outreach information and suggested this type of information would be helpful prior to a disaster. By not developing region-specific information in its updated plan and clear mechanisms to share the information with SBDCs, SBA is not in compliance with requirements of the Act and has not fully leveraged the efforts of regional entities, such as SBDCs and emergency management groups, to ensure that it and they will be better prepared for future disasters. SBA has had limited success in meeting the deadlines in 9 provisions of the Act. The agency met some deadlines for 4 provisions, missed one deadline by 27 days, and missed deadlines for the 4 remaining provisions––in some cases, by many months. The statutory deadlines range from 30 days to 1 year after the Act's enactment.
Table 5 shows the status of SBA’s efforts to meet the deadlines, as of June 2009. As we discussed earlier, the Act requires that SBA address region-specific marketing and outreach requirements, but we believe that its current DRP and Leadership Guide for Managing a Response to a Disaster do not address all the requirements in the Act and, therefore, SBA missed this deadline. Additionally, the Act requires SBA to issue final regulations for two new programs––the Private and Expedited Disaster Assistance Programs––and regulations for SBA’s coordination of disaster assistance programs with FEMA. SBA officials told us the agency has developed draft regulations for these requirements, but missed the statutory deadlines to publish final regulations. According to SBA officials, they missed the deadlines because they needed time to issue new regulations, as well as create and pilot new disaster programs, and conduct an interagency review with FEMA before making final decisions about implementation. The Act also establishes multiple reporting requirements, and while SBA has met some deadlines, others were missed. For example, SBA successfully submitted monthly accounting, staffing, and activity reports to Congress, starting in December 2008. However, the agency missed deadlines for submitting its first annual report on disaster assistance, due November 2008, as noted earlier, and contracting and loan approval rate reports. According to officials, SBA is waiting for input from the newly confirmed Administrator––who also must review and approve the reports prior to their issuance. 
In addition, because SBA has not published an update to the DRP since the plan's issuance in June 2007, we found that it contained obsolete information in some areas and did not include information on many of the changes resulting from the Act or the agency's own disaster reform efforts since 2007, such as the establishment of EODSPO and the appointment of a chief responsible for all disaster planning, updates to DCMS to track and follow up with applicants, the increase in the system's capacity from 2,000 to more than 12,000 concurrent users, and the incorporation of disaster simulations to enhance disaster planning. As we noted earlier, agency officials may revise the plan following its leadership seminar in June 2009. Agency officials said the updated plan will likely be issued in August or September 2009 and will incorporate changes to the Disaster Loan Program resulting from the Act. Finally, the agency did not provide milestone dates for completing implementation of the requirements that have not been completely addressed. Because these actions and reports have been delayed and SBA did not have a plan detailing expected completion dates for the requirements that still need to be addressed, Congress does not have reliable information on the extent to which SBA is reforming its Disaster Loan Program. Furthermore, failure to produce the annual report can lead to a lack of transparency on the agency's progress in reforming the program, and the delay in updates to the DRP limits SBA's ability to adequately prepare for and respond to disasters. SBA's initial response after the 2008 Midwest floods and Hurricane Ike aligned with certain components of its DRP, such as infrastructure, human capital, information technology, and communications and outreach.
For example, many of the individuals we met in Iowa and Texas said that SBA staff provided outreach and public information about its Disaster Loan Program, distributed application information, assigned knowledgeable customer service representatives to various DRCs and BRCs, and assisted in the initial application process by answering questions, providing guidance, and offering one-on-one help. Individuals we interviewed and results from SBA's 2008 Disaster Loan Program Customer Satisfaction Survey provided some positive feedback about SBA's performance following recent disasters. However, interviewees and these same survey results indicated areas for improvement; in particular, both sources indicated that the application paperwork was burdensome and that the application process needed improvement. SBA officials told us that they intend to improve the application process, but did not provide documentation of such plans and did not appear to take advantage of feedback from applicants, such as that received from the customer survey. Three major disasters struck our nation in 2008––the Midwest floods and Hurricanes Ike and Gustav––providing a limited test of SBA's ability to plan for and respond to major disasters and of the improvements stemming from recent disaster reform efforts. First, beginning in late May 2008, tornadoes, severe storms, and flooding affected six Midwestern states (Iowa, Illinois, Indiana, Missouri, Nebraska, and Wisconsin). Notices of Presidential declarations of major disasters were issued in each state. Flooding continued into July 2008 in some areas, with Cedar Rapids, Iowa, the hardest hit in terms of physical damage and business losses. The floods left 13 dead, and region-wide damage was estimated in the tens of billions of dollars.
In addition to FEMA, state and local emergency management agencies, the American Red Cross, and the National Guard assisted the victims of flooding with disaster relief and evacuation. Second, in early September 2008, a major disaster struck the Gulf Coast states when Hurricane Ike made its way through Texas and Louisiana. Hurricane Ike made landfall as a Category 2 hurricane near Galveston, Texas, on September 7, 2008, and was declared a major disaster by the President on September 13, 2008. Ike was the third most destructive hurricane to make landfall in the United States and the third major hurricane of the 2008 Atlantic hurricane season; it caused widespread damage to some Gulf Coast areas already trying to recover from Hurricane Gustav, which hit Louisiana on September 1, 2008. Hurricane Ike was blamed for at least 100 deaths, and damages were estimated at approximately $24 billion. Based on our review, SBA's response following the 2008 Midwest floods and Hurricane Ike aligned with certain components of its DRP, and the agency's efforts were consistent with the plan. Though we noted earlier that the 2007 plan has not been updated and, therefore, contains some obsolete information, for purposes of this study we found that the plan addresses the major components––including infrastructure, human capital, information technology, and communications and outreach––and puts into writing a disaster assistance framework and related processes for how the agency plans to prepare for and respond to potential disasters and, subsequently, offer assistance to victims through its Disaster Loan Program. For example, according to SBA, following both disasters the agency used its organizational infrastructure and key staff in each of its core functions to provide disaster assistance.
ODA also utilized available operational and technological support, and communications and outreach, to help ensure that the agency would be able to provide timely financial assistance to the disaster victims. While the 2008 disasters were not as severe as those in 2005, the agency's performance in the aftermath of the 2008 flooding improved dramatically over its performance in the aftermath of the 2005 Gulf Coast hurricanes. Specifically, following the 2005 Gulf Coast hurricanes, processing times for a home loan reached a maximum of about 90 days, but in 2008 the processing time was about 5 days. Similarly, SBA took 70 days to process a business loan in 2005, but in 2008 the average processing time was about 9 days. In addition, on June 24, 2008, SBA opened a BRC in Cedar Rapids, which was co-located with FEMA's DRC. The BRC enabled business owners and homeowners to work directly with SBA staff to learn about available recovery resources and programs, receive counseling, and receive face-to-face answers to their questions. At the peak of its efforts, SBA reported having 194 staff working from about 67 centers in Iowa to provide recovery assistance to flood victims in more than 81 counties. As of June 2009, SBA had approved more than $411 million in disaster assistance to individuals and business owners whose homes or property were damaged by the Midwest floods. In addition, in the aftermath of Hurricane Ike, SBA had about 116 disaster staff in Texas and 200 in Louisiana. In Texas, particularly, SBA customer service representatives provided assistance to Hurricane Ike victims through 13 DRCs, two Disaster Loan Outreach Centers, and two BRCs. The customer service representatives were available to meet individually with disaster victims to issue loan applications, answer questions about SBA's disaster loan program, explain the application process, and help individuals complete their applications.
Additionally, as of June 2009, SBA had approved approximately $677 million in SBA disaster loans to Texas and Louisiana homeowners, renters, businesses, and nonprofits who sustained damages from Hurricane Ike. Specifically, SBA provided about $478 million in loans to more than 9,260 homeowners and renters, and about $199 million in loans to nearly 1,640 businesses and nonprofit organizations. Similar to its response following the Midwest floods, SBA took less time to process disaster loan applications during its post-Hurricane Ike response because of upgrades made to DCMS, an expanded disaster response workforce, and an online electronic loan application––eliminating the need to mail an application or visit a center. As a result, the time needed to process a home loan following Hurricane Ike averaged about 5 days and a business loan averaged about 12 days. Individuals affected by both disasters told us they considered the agency’s overall performance satisfactory in responding to the disasters. However, the individuals believed some improvements could be made to SBA’s disaster loan application process. Similarly, our review of SBA’s 2008 Disaster Loan Program Customer Satisfaction Survey also showed that respondents provided some positive feedback about SBA’s performance, but they too believed that improvements were needed. During our site visits to areas in Iowa and Texas, we obtained insights on the devastation caused by the Midwest floods and Hurricane Ike from various state and local government officials and small business owners, as well as their perceptions of SBA’s initial efforts. SBA District Office officials and SBDCs affected by the disasters, as well as representatives of nongovernmental organizations also gave their views on the disaster recovery efforts. 
According to SBA and SBDC officials, state and local government officials, nongovernmental representatives, and business owners we interviewed in Iowa and Texas, in the days immediately following the disasters, ODA staff reported to the affected areas, established several BRCs, assigned knowledgeable customer service representatives, and began providing the needed disaster assistance. The individuals said that SBA representatives distributed loan applications and assisted in the initial application process by answering questions, providing guidance about the Disaster Loan Program and its eligibility rules and requirements, offering one-on-one assistance with filling out the disaster loan application, and accepting completed applications. Additionally, interviewees said SBA staff provided outreach and public information to affected individuals and businesses about the Disaster Loan Program. For example, to ensure that individuals and businesses knew about available assistance, SBA staff worked with the local media by providing television, radio, and newspaper interviews, communicating information about loan availability, and disseminating information through various community briefings and town hall meetings. The interviewees said SBA staff also made several visits to state and local groups, such as the Chamber of Commerce, to tell them about SBA's Disaster Loan Program. Many of the people we interviewed said that while most applicants used the paper application, the electronic loan application––introduced in August 2008 just prior to Hurricane Ike––worked well, and they were not aware of any DCMS problems. Furthermore, many of them were satisfied overall with SBA's initial disaster assistance efforts, and the feedback we received on SBA's response to the disasters indicated to us that the agency's assistance was consistent with the processes and procedures outlined in the DRP.
As mentioned, respondents to SBA's 2008 Disaster Loan Program Customer Satisfaction Survey also were somewhat satisfied with ODA's Disaster Loan Program. Specifically, our review of the 2008 survey results showed that ODA's aggregated customer satisfaction index score was 55 on a scale of 100. Respondents, however, had mixed reactions to the program and the agency's performance in key areas such as application processing, SBA's Customer Service Center and the disaster recovery centers, inspection and decision processes, and loan closing. Specifically, survey results showed applicants who were declined for an SBA disaster loan had lower satisfaction ratings compared with applicants who were approved for disaster loans. For example, the declined applicants' overall customer satisfaction index was 34, as compared with approved applicants' customer satisfaction indexes, which ranged from 63 to 81, with homeowners and renters generally more satisfied than business owners. The survey results also showed that the inspection process and disaster recovery center areas were rated positively by all respondents. The respondents noted that the DRCs were easy to locate, had convenient hours of operation, and had accessible SBA staff, and they rated SBA staff as professional, knowledgeable, and helpful. Additionally, respondents rated the SBA inspection process as an area where SBA staff excelled. While SBA's response to the disasters was considered satisfactory overall, both the individuals we interviewed and the survey results pointed to areas for improvement and suggested ways to increase satisfaction with SBA's disaster assistance process. For example, some business owner applicants we interviewed expressed concern over collateral requirements and interest rates.
They also complained about disparities between verbal and final written loan terms and amounts, about having multiple loan officers or case managers, and about SBA not using district and branch office staff for follow-up after centers were closed in their areas. Additionally, some business owner applicants said that the disaster loan application was too complex or lacked adequate instructions––a problem that interviewees believed sometimes caused applicants to withdraw their applications or decide not to apply for SBA disaster loans. Both the interviewees and the survey results indicated the amount of paperwork required for the application process was burdensome, and interviewees also expressed concerns about the timeliness of loan disbursements. Specifically, some interviewees said that improvements were necessary to speed up loan disbursements because some business owners had to wait as long as 7 months after submitting the disaster loan application to receive an initial loan disbursement, by which time a small business could be so economically weakened that its future operations would be in question. In terms of the survey, business loan and economic injury disaster loan recipients were particularly dissatisfied with the timeliness of fund disbursals after loan approval and rated the application process for business loans as among the areas most needing improvement. In addition, the application and decision processes were consistently among the lower-rated areas. In addressing some of the areas in need of improvement, many business owner applicants we interviewed suggested changes to SBA's disaster loan application process, such as providing partial disbursements earlier in the process and using bridge loans to help ensure disaster victims receive timely assistance.
We also consistently heard from SBDCs and state and local emergency management agencies about the need for joint pre-planning and disaster preparedness efforts with SBA, and for more up-front information about SBA's disaster response plan and their expected roles and responsibilities as part of that response effort. In addition, during our interviews, some business owner applicants complained that they had to provide copies of 3 years of federal income tax returns, although they had signed an IRS form 8821––Tax Information Authorization––which allows SBA to get tax return information directly from IRS. Interviewees found this process burdensome and somewhat inefficient and, as a result, suggested that SBA change its application requirements to remove the requirement that applicants provide copies of tax returns. SBA officials explained the current process for obtaining tax information from IRS and stated that SBA does not require copies of tax returns from all business applicants. Rather, they said that SBA requests copies on a case-by-case basis when it is unable to determine repayment ability based on the tax transcript obtained by using IRS form 8821. However, our review of SBA's filing requirements for business loans showed that SBA's written procedures differed from those the officials explained. Specifically, SBA's written requirements for business loans state that while SBA requires business applicants to sign form 8821, applicants also must submit copies of their tax returns. In addition to the potential paperwork burden for applicants, the conflict between the written procedures and SBA's current process could cause confusion and inefficient processing during disaster responses.
Similarly, one suggestion from the ACSI report of survey results was that SBA maintain its performance in high-performing areas of the loan application process and work to improve the low-performing areas, to demonstrate commitment to further improving the process for future disaster loan applicants. For instance, the areas with the lowest impact on an applicant's overall satisfaction, such as the inspection process and DRCs, were rated higher than other areas by all respondent groups, while the areas rated as having the highest impact on satisfaction for most respondents––the application and decision areas––scored lowest in satisfaction. Consequently, in reporting the results and suggestions for agency action, SBA was encouraged to (1) maintain its efforts in areas that were high performing and had low impact on overall customer satisfaction, and (2) increase its efforts to improve areas that were low performing and had a higher impact on satisfaction. During our review and analysis of the 2008 customer satisfaction survey, we found that the survey's results were not a formal part of the agency's process for reforming its disaster loan program or its efforts to continually improve the application process. SBA officials were unable to cite specific actions taken to incorporate the survey's results into efforts to improve its disaster program, and it appears that the agency's primary use for the annual survey is linking it to its budget and performance accountability reports to provide an outcome measure for the Disaster Loan Program. Additionally, apart from SBA's 2008 launch of its online disaster loan application, we found that the agency's other disaster reform efforts, to date, have not focused on the complexity of the disaster loan application (particularly for business applicants), the extensive amount of paperwork and documents required, or the timeliness of disbursements.
While SBA officials said they continually look for ways to improve the disaster loan application, the agency does not appear to have a formal process for addressing problem areas within its program and making needed improvements. Consequently, it may be missing opportunities to demonstrate its commitment to further improve the application process for future applicants. Finally, some of the improvements suggested by the individuals we interviewed are related to requirements in the Act. For example, as we noted earlier, the Act requires that SBA provide specific regional marketing and outreach information and scenarios in its DRP, and include SBDCs in preparing for future disasters. Additionally, the Act requires that SBA coordinate with IRS, as necessary, in sharing tax records of disaster loan applicants to ensure expedited processing of all disaster loans. As mentioned earlier in this report, as of June 2009, SBA had yet to fully address these two requirements. SBA's response to the 2005 Gulf Coast hurricanes exposed many deficiencies in the agency's Disaster Loan Program and demonstrated the need for reform. Since then, SBA has taken steps to improve its program. For instance, SBA issued a DRP, adopted an electronic loan application, upgraded the system capacity of its DCMS, improved employee training, and expanded its disaster reserve staff. With passage of the Act, Congress also acted to transform and improve SBA's Disaster Loan Program and ensure the agency is better prepared to handle future large-scale disasters. SBA adopted its initial DRP in June 2007, which laid out a framework and processes that would enable the agency to respond effectively to disasters, and the Act requires that SBA have such a plan and regularly update it. However, SBA has not addressed specifically how it would market its Disaster Loan Program in different areas of the country, nor developed likely scenarios for certain regions prone to disasters.
Although SBA believes that it has addressed the requirement for marketing and outreach in its DRP, the 2007 plan does not provide any regional perspective, nor has the agency updated the plan since 2007. We consistently heard from regional entities, such as SBDCs and emergency management groups, about the need for more up-front information on SBA's Disaster Loan Program and their expected roles and responsibilities in disaster response efforts. By providing such information, SBA could leverage the efforts and capacity of SBDCs, as well as state and local emergency management agencies, and ensure that it and they will be better prepared for future events, especially in disaster-prone areas. SBA has taken a number of steps to address the many requirements of the Act; however, some provisions have presented challenges for SBA in implementing specific requirements and meeting some associated deadlines. For example, SBA has not completely met certain requirements, and the agency does not have an implementation plan in place to ensure the remaining requirements are addressed. Some of the changes required by the Act, especially those requiring SBA to create new programs, will take time to implement. It will be important for the agency to do so in a comprehensive manner; but because the implementation process already is behind schedule, it also will be important for SBA to ensure it has a plan for implementing the remaining requirements and to report on its progress to Congress. Failure to produce annual reports on schedule can lead to a lack of transparency on the agency's progress in reforming the program. Delays in updates to the DRP also limit the agency's ability to adequately prepare for and respond to disasters.
By continuing its efforts to address and implement all requirements in the Act and expeditiously communicating its actions, SBA could improve its operations for the 2009 hurricane season, build on the lessons learned in the aftermath of the 2005 Gulf Coast hurricanes, and further signal its commitment to its mission of providing affordable and timely financial assistance to help businesses and homeowners recover from disasters. SBA's initial response following the 2008 Midwest floods and Hurricane Ike aligned with certain components of its DRP, and the affected individuals we interviewed, as well as respondents to SBA's 2008 Disaster Loan Program Customer Satisfaction Survey, were somewhat satisfied with the agency's performance after the major disasters of 2008. However, the individuals we interviewed and the survey results both indicated areas for improvement in SBA's disaster loan program. For instance, our interviewees and the survey results indicated the amount of paperwork in the application process was burdensome and cited the application process, including tax information requirements, as an area for improvement. As discussed in this report, while SBA has made progress, the agency has missed opportunities to further improve its Disaster Loan Program and, in particular, the application process for future applicants. For example, it was unclear to what extent the agency had a formal process in place for addressing identified problem areas and making needed improvements to its program. By establishing such a process to address identified problem areas, SBA could better demonstrate its commitment to improving the Disaster Loan Program. To facilitate SBA's progress in meeting and complying with requirements of the Act and improve the Disaster Loan Program, we recommend that the Administrator of SBA take the following five actions: develop procedures for regional entities that would enable SBA to meet all region-specific requirements of the Act.
Specifically, building on the lessons learned from previous disasters, SBA should include likely scenarios for certain regions prone to disasters and regional marketing information for SBDCs, other local resources, and local emergency management groups. In addition, SBA should make this information and other Disaster Loan Program information readily available to these regional entities prior to the likely occurrence of a disaster; complete the first annual report to Congress on disaster assistance, and adhere to the required time frames for subsequent annual reports; expeditiously issue an updated DRP that reflects recent changes resulting from the Act's requirements, as well as SBA's own disaster reform efforts; develop an implementation plan and report to Congress on the agency's progress in addressing all requirements within the Act––including creating and implementing new programs, such as the Immediate and Expedited Disaster Assistance Programs––and include milestone dates for completing implementation and any major program, resource, or other challenges the agency faces as it continues its efforts to address requirements and meet deadlines in the Act; and develop and implement a process to address identified problems in the disaster loan application process for future applicants. We provided SBA a draft of this report for review and comment. In comments provided to us in an email, SBA generally agreed with our recommendations and stated the agency's plan to incorporate them into its ongoing efforts to implement the Act and improve the application process. Specifically, SBA said that the agency has plans to expand its outreach efforts to ensure the public in all regions of the country is more aware of SBA disaster assistance programs before a disaster strikes. SBA also plans to submit both the required annual report and the 2009 revision to its DRP to Congress by November 15, 2009.
Additionally, SBA officials said the agency has plans to develop an implementation plan for completion of the remaining provisions. Finally, in response to our recommendation on the application process, the officials cited the electronic loan application as an example of the agency's efforts to improve the application process and said the agency plans to continue its improvement efforts and make such improvements an ongoing priority. The comments also referred to ongoing efforts since 2005 to improve various processes, including 79 projects to improve the processing and disbursement process, but did not specify how these efforts improved the application process for disaster victims. In addition, SBA did not say how it would implement a formal process to address identified problem areas in the disaster loan application process. We are sending copies of this report to interested Members of Congress and the Administrator of the Small Business Administration. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. Please contact me at (202) 512-8678 or [email protected] if you or your staff have any questions about this report. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III. Our objectives were to review (1) the extent to which the Small Business Administration (SBA) addressed the requirements of the Small Business Disaster Response and Loan Improvements Act of 2008 (Act), and (2) how SBA's response, following the major disasters of 2008, aligned with key components of its June 2007 Disaster Recovery Plan (DRP).
To respond to these objectives overall, we reviewed agency documents related to SBA’s implementation of the Act’s requirements, identified key components of the DRP, interviewed key officials at SBA headquarters about their roles and responsibilities in implementing the Act and SBA’s response to major disasters in 2008, and identified the requirements specified in the Act––including any statutory deadlines for implementing specific provisions of the Act. SBA officials we met with included senior officials representing the Executive Office of Disaster Strategic Planning and Operations (EODSPO) and the Office of Disaster Assistance (ODA). We also met with officials from SBA’s Office of the Inspector General to discuss planned audits and oversight activities related to SBA’s Disaster Loan Program and the agency’s implementation of the Act. To determine the extent to which SBA addressed the requirements and deadlines of the Act, we reviewed the Act and identified 26 provisions with substantive statutory requirements and 9 provisions with related deadlines; reviewed the Act to determine which provisions require general or explicit appropriations; obtained, reviewed and analyzed documentation, such as policy memorandums, reports issued to Congress, or progress reports to determine if requirements had been addressed and deadlines had been met; interviewed agency officials to obtain information on what, if any, challenges exist that may affect SBA’s ability to address certain requirements—including identifying reasons for any delays in meeting the statutory deadlines; and met with SBA to obtain information about the agency’s next steps and resources the agency identified it needs to completely address the remaining provisions. During these meetings, we requested expected time frames for completion, milestone dates, resources needed, and reasons for delay, if applicable, for the partially addressed provisions. 
To assess whether SBA’s initial response following the 2008 disasters aligned with key components of its 2007 DRP, we conducted site visits to areas impacted by the 2008 Midwest floods (Iowa) and Hurricane Ike (Texas). We reviewed SBA’s DRP and other plans issued by the agency (i.e., SBA’s ODA Field Operations, Processing and Disbursement Center, and Customer Service Center Disaster Response Plans) to identify key components of the plans, such as the agency’s strategy for establishing field operations, disseminating information, coordinating with Small Business Development Centers (SBDC) and other regional entities, and effectively processing applications. The DRP also discusses the agency’s approach for preparing for and responding to a disaster declaration, its strategy for internal and external communication, and ODA’s responsibilities. In both Iowa and Texas, we interviewed various stakeholders, including SBA and SBDC officials, state and local government officials, representatives of local Chambers of Commerce and economic development organizations, and small business owners, to discuss what worked well in terms of SBA carrying out key components of its DRP and what, if any, improvements were suggested for SBA’s Disaster Loan Program and processes. While the limited number of site visits was too small to generalize the information obtained or to assess ODA’s overall ability to respond to any disaster, the observations and perspectives expressed by the various stakeholders were sufficient to suggest that SBA has begun institutionalizing key reforms in its disaster program’s policy and practices. Furthermore, we obtained information about loan applicants’ and recipients’ satisfaction with the agency’s Disaster Loan Program and related services immediately following the Midwest floods and Hurricane Ike. 
We also reviewed the 2008 Disaster Assistance Program Customer Satisfaction Survey that addressed five customer segments which measure customer satisfaction with SBA’s Disaster Loan Program. It included four types of loan recipients—homeowners, renters, and business owners who received physical damage and economic injury disaster loans, as well as declined applicants. The survey questionnaire, which was developed through a collaborative effort between the Claes Fornell International (CFI) Group and SBA, measured overall satisfaction with SBA’s program in areas such as application processes, customer service center, recovery center, and inspection processes. About 4,800 loan recipients and declined applicants were included in the survey population resulting in about 570 completed responses used for analysis—a response rate of about 20 percent. Further, the number of completed interviews was based on a quota for calling among the five customer segments, with weights applied to responses for the number of completed surveys and the number of loan recipients and declined applicants studied. Our overall data reliability assessment of the Customer Satisfaction survey was generally based on discussions with SBA officials and our knowledge of the Disaster Loan Program, publicly available information on ACSI, and our prior reports which included analyses of past years’ survey results. As a result, we determined that survey data were sufficiently reliable for purposes of this report. We conducted this performance audit from October 2008 through July 2009, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. 
Appendix II: Summary of the 2008 Small Business Disaster Response and Loan Improvements Act Requirements. The appendix categorizes each provision as: addressed (initial or ongoing) or deadline met; partially addressed or some deadline met; not addressed or missed deadline; or not applicable because no action is needed to be taken by SBA at this time, due to the provisions’ discretionary nature. William B. Shear, Director, (202) 512-8678, or [email protected]. In addition to the individual named above, Kay Kuhlman, Assistant Director; Michelle Bowsky; William Chatlos; Beth Ann Faraguna; Alexandra Martin-Arseneau; Marc Molino; Linda Rego; and Barbara Roesmann made significant contributions to this report.
|
After the Small Business Administration (SBA) was widely criticized for its performance following the 2005 Gulf Coast hurricanes, the agency took steps to reform the Disaster Loan Program and Congress enacted the Small Business Disaster Response and Loan Improvements Act of 2008 (Act). GAO was asked to determine (1) the extent to which SBA addressed the Act's requirements, and (2) how SBA's response to major disasters in 2008 aligned with key components of its June 2007 Disaster Recovery Plan (DRP). GAO reviewed the Act, as well as SBA information on requirements addressed and steps taken, including the DRP, various reports to Congress, and policy memoranda. GAO also conducted site visits to areas affected by major 2008 disasters, reviewed SBA's customer satisfaction survey, and obtained the opinions of relevant stakeholders. As of June 2009, SBA met 13 of 26 requirements of the Act, partially addressed 8, and did not take action on 5 which are not applicable at this time. SBA officials told GAO the agency has not yet completely addressed some provisions that require new regulations because to do so, the agency must make extensive changes to current programs or implement new programs. For two requirements that will involve private lenders, SBA plans to implement pilots before finalizing regulations. SBA has not yet addressed the Act's requirements for region-specific marketing and outreach and ensured that Disaster Loan Program information is readily available to regional entities, such as Small Business Development Centers (SBDC). By doing so, SBA could leverage the efforts and capacity of local resources and emergency management groups, and ensure that it and they will be better prepared for future disasters. Also, as of June 2009, SBA had not met deadlines to issue an annual report to Congress or an updated DRP. 
Failure to do so can lead to a lack of transparency on the agency's progress in reforming the program and limit its ability to adequately prepare for and respond to disasters. Furthermore, SBA did not have an implementation plan for addressing the remaining requirements. SBA's initial response after the 2008 Midwest floods and Hurricane Ike aligned with certain components of its initial DRP, such as using technology and outreach efforts to ensure timely assistance. The individuals GAO interviewed and results from SBA's 2008 Disaster Loan Program Customer Satisfaction Survey provided some positive feedback about SBA's performance following recent disasters. However, interviewees and survey results indicated areas for improvement; in particular, both indicated that application paperwork was burdensome and that the application process needed improvement. SBA officials told GAO that they have been taking steps to improve the application process, but did not provide documentation of such efforts. As a result, the agency did not appear to have any formal process for identifying problems in the application process and making needed improvements.
|
In accordance with section 1842 (42 U.S.C. 1395u) of the Social Security Act, the Health Care Financing Administration (HCFA) contracts with 32 insurance carriers to process and issue benefit payments on claims submitted under Medicare Part B coverage. Carriers are required to process claims in a timely, efficient, effective, and accurate manner. During fiscal year 1993, carriers processed about 576 million Part B claims submitted by about 780,000 physicians and 136,000 suppliers. Section 1842 of the Social Security Act provides that carriers pay only for services that are covered and that they reject a claim if they determine that the service was not medically necessary. In fiscal year 1993, carriers denied 112 million Part B claims in whole or in part (19 percent of all claims processed) for a total of $17 billion in denied claims (which represented 18 percent of all billed charges, a figure unchanged from the previous year). Services deemed not medically necessary constituted about 9 percent of the dollar amount denied by carriers. A claimant (provider or beneficiary) who is dissatisfied with a carrier’s claims decision has the right to appeal. Although most claim denials are the result of routine administrative checks made during claims processing (for example, denials for duplicate claim submissions or ineligible claimants), a significant portion of denials are the result of coverage determinations. Coverage under Medicare is determined by three criteria: Medicare law, national coverage standards developed by HCFA, and local coverage standards developed by individual carriers. According to section 1832 (42 U.S.C. 1395k) of the Social Security Act, Medicare Part B covers a wide range of health services, such as physician services, outpatient hospital services, the purchase of durable medical equipment, prosthetic devices, and laboratory tests. 
At the same time, the act limits or excludes certain services: It places limits on podiatric, chiropractic, and dental services and specifically excludes some categories of service, such as routine physical checkups and cosmetic surgery. Medicare law is best viewed as a framework for making coverage determinations: It is not, as HCFA has observed, “an all-inclusive list of specific items, services, treatments, procedures or technologies covered by Medicare.” The statute itself provides: “Notwithstanding any other provisions of this title, no payment may be made under part A or part B . . . for any expenses incurred for items or services . . . which . . . are not reasonable and necessary for the diagnosis or treatment of illness or injury or to improve the functioning of a malformed body member.” HCFA guidance describes the medical necessity determination as “a test of whether the service in question is ‘safe’ and ‘effective’ and not ‘experimental’; that is, whether the service has been proven safe and effective based on authoritative evidence, or alternatively, whether the service is generally accepted in the medical community as safe and effective for the condition for which it is used.” Although carriers make most coverage decisions, HCFA has set national coverage standards for some specific services, the guidelines of which are found in the Medicare Carriers Manual, the Medicare Coverage Issues Manual, and other program publications. Where HCFA has issued a national coverage decision, carriers are expected to enforce it. Although national coverage standards are for the most part straightforward, some standards may require clarification or interpretation. In such instances, carriers are advised to consult with a HCFA regional office, which may in turn ask the HCFA central office for guidance. In the absence of national coverage standards, HCFA has, consistent with Medicare law, given carriers the discretion to develop and apply their own medical policies based on local standards of medical practice. 
Since national coverage standards have been issued for only a small portion of all services, carriers often “must decide whether the service in question appears to be reasonable and necessary and therefore covered by Medicare.” HCFA has given carriers broad latitude in this area—that is, it has given them primary responsibility for defining the criteria that are used to assess the medical necessity of services. Such local medical policies allow carriers to target specific services that may need greater scrutiny. For example, local medical policies may be developed in response to excessive utilization of a service or inappropriate billing patterns. To implement medical policies, carriers develop prepayment screens that suspend a subset of claims for manual review. Screens are computer algorithms that use certain claim information (such as diagnostic code or frequency of services performed) to channel certain types of claims to examiners for further review. The criteria used to flag claims for medical review are less exhaustive than the criteria used in making the final determination. For example, a screen for chiropractic treatment may suspend claims of beneficiaries who have received more than 12 treatments within the past year. At this point, the suspended claims are reviewed by claims examiners, who make a determination based on medical policy. A carrier’s medical policy defines the conditions under which chiropractic treatments beyond the threshold are medically necessary. It is, however, important to note that the proportion of claims that carriers review for medical necessity is determined by the amount of money available to HCFA for allotment to carriers for the purpose of medical review. In fiscal year 1994, HCFA allotted enough funds for 5 percent of claims to be medically reviewed. 
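The screening mechanism described above, computer checks that suspend a subset of claims for manual review, can be sketched as a simple filter. This is a minimal illustration rather than an actual carrier system; the 12-treatments-per-year threshold mirrors the chiropractic example in the text, and all field names and identifiers are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Claim:
    provider_id: str
    procedure: str             # e.g., "chiropractic_manipulation"
    diagnosis_code: str
    services_this_year: int    # prior services of this type for the beneficiary

# Hypothetical utilization thresholds; the chiropractic value mirrors the
# 12-treatments-per-year example in the text.
UTILIZATION_THRESHOLDS = {"chiropractic_manipulation": 12}

def prepayment_screen(claim: Claim) -> str:
    """Return 'suspend' to route the claim to a claims examiner for review
    against local medical policy, or 'pass' for automated processing."""
    threshold = UTILIZATION_THRESHOLDS.get(claim.procedure)
    if threshold is not None and claim.services_this_year > threshold:
        return "suspend"
    return "pass"

print(prepayment_screen(Claim("P001", "chiropractic_manipulation", "739.1", 14)))  # suspend
print(prepayment_screen(Claim("P002", "chiropractic_manipulation", "739.1", 5)))   # pass
```

A suspended claim is not a denial: it simply moves from the automated pipeline to an examiner, who applies the carrier's medical policy to make the final determination.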
Despite the importance of carrier vigilance over Medicare claims, budgetary constraints have led to a decrease in program safeguard activities such as prepayment screening of claims for medical necessity. The proportion of claims that are reviewed for medical necessity has decreased from 20 percent of all claims in 1989 to 5 percent in 1994. Because carriers now have fewer resources to review the appropriateness of claims, it is essential that carriers use what resources they do have in the most effective way possible. Yet, we found that HCFA has not compiled information, nor does it have a systematic method that would allow it to assess the adequacy of current carrier safeguard controls. We conducted our study between April and November 1994 in accordance with generally accepted government auditing standards. See appendix I for a description of our analytical methodology. This section presents the results of our analysis of 1992-93 medical necessity denial rates for six carriers across 74 expensive or heavily utilized services. We examined the (1) magnitude, (2) variability across carriers, and (3) annual changes of denial rates for 2 consecutive years. Table 1 summarizes 1993 denial rate information from appendix III (appendix II gives 1992 data) and shows the frequency distribution of denial rates for the 74 services across six carriers. This table shows that within this group of 74 services, denial rates were generally low—a finding that was consistent across all carriers. For example, the Northern California carrier had 47 services with a denial rate of zero, 19 services with a denial rate of between 1 and 10, 6 services with a rate of between 11 and 100, and 2 services with a denial rate of over 100 per 1,000 services allowed. 
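The denial rates used throughout this analysis are expressed per 1,000 services allowed. Below is a minimal sketch of the computation and the frequency buckets used in table 1; the exact boundary treatment for fractional rates is an assumption.

```python
def denial_rate_per_1000(denied: int, allowed: int) -> float:
    """Medical-necessity denials per 1,000 services allowed."""
    return 1000.0 * denied / allowed if allowed else 0.0

def bucket(rate: float) -> str:
    """Frequency-distribution bucket mirroring the table's categories."""
    if rate == 0:
        return "0"
    if rate <= 10:
        return "1-10"
    if rate <= 100:
        return "11-100"
    return "over 100"

rate = denial_rate_per_1000(denied=139, allowed=1000)
print(rate, bucket(rate))  # 139.0 over 100
```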
Furthermore, the Southern California carrier, which had the largest number of services with denial rates over 10 per 1,000 allowed, still had a majority of services (46 of 74) with denial rates of less than 10 per 1,000 services allowed. The denial rates for 1992 and 1993 show notable variability across six carriers. Figure 1, which displays 1993 carrier denial rates for 5 different services, illustrates this point. For example, the range of denial rates across carriers for a chest x-ray varied between 0.1 and 90.2 (per 1,000 services allowed). The denial rates for at least two thirds of each carrier’s services did not significantly change between 1992 and 1993. In general, the magnitude of carrier denial rates was persistent for 2 consecutive years. Services that had high denial rates in 1992 also tended to have high rates in 1993. Conversely, services with low denial rates in 1992 also were generally low in 1993. (See table 2.) For two jurisdictions—South Carolina and Wisconsin—the number of services that had decreased denial rates in 1993 exceeded the number of services for which rates increased. Conversely, four carriers—Northern California, Southern California, North Carolina, and Illinois—had more services whose denial rates significantly increased than decreased. For Northern California, Southern California, and Illinois, the difference in the number of services with higher denial rates in 1993 was slight, from 7 to 11 services. However, denial rates for the North Carolina carrier significantly increased between 1992 and 1993 for 18 services; the denial rate was significantly decreased for only 1 service. The significant differences in denial rates for medical necessity across carriers give rise to the following question: What accounts for the variations in denial rates? To address this question, we met with carrier representatives and HCFA officials, who identified five factors that could help explain the variation in denial rates across carriers. 
The Medicare program has since its inception acknowledged the existence of regional variations in medical practice standards and has sought to accommodate these differences in adjudicating claims. One practical consequence of this policy is that HCFA has delegated to carriers the authority to determine whether a rendered service was medically necessary. Making such determinations requires that carriers first develop a local medical policy. Computer screens are used to suspend a subset of claims, which are then reviewed by claims examiners, who in turn follow local medical policy in making their determinations. Utilization and diagnostic screens are two of the more common types of screens. Utilization screens measure the number of times a service has been performed against a standard (for example, services per year), and diagnostic screens compare the diagnosis listed on a claim with a defined set of diagnoses that would usually warrant performance of that service. Differences in the way that carriers use screens can affect the variability of denial rates in two ways. First, in the absence of an applicable local medical policy or a coverage directive from HCFA to assess the validity of a claim, carriers usually assume that a claim is valid and thus should be approved. It follows that, given comparable billing patterns, a carrier with a screen in place for a specific medical service will deny more claims than a carrier without such a screen in place. Carriers differ in the number of services they screen; we reported earlier that the total number of local screens carriers used in 1988 ranged from 5 to 177. Second, different carriers screening the same service may use different criteria to suspend claims. Thus, although two carriers may screen the same service for medical necessity, their respective criteria may result in differing denial rates. 
To gauge the effect of medical necessity screens on carrier denial rates, for each of 5 selected services we asked the carrier with the highest medical necessity denial rate to identify the specific reason for denial for a small sample of 15 to 20 claims denied for lack of medical necessity. In this way, we were able to identify the key screens that most directly caused denial. We selected the 5 services because carrier denial rates for each one exhibited significant variation. For each service, we selected the carrier with the highest denial rate and determined the reason for the denial: x-ray and multichannel blood test (Illinois), myocardial perfusion imaging and echocardiography (Southern California), and ophthalmologic exam (Wisconsin). For example, for the automated multichannel blood test, the Illinois carrier had a denial rate of 138.9 per 1,000 services allowed in 1993, while the other carriers had negligible denial rates of 0, 0.1, 0.5, 1.4, and 1.7. After examining a sample of claims, the Illinois carrier concluded that the majority of its denials for reason of medical necessity resulted from a joint utilization and diagnostic screen. That is, a provider in the Illinois carrier’s jurisdiction could order this type of blood test for a patient up to two times per year with no condition attached. On the third and subsequent tests, however, the carrier checked the appropriateness of the test against a set of diagnostic codes specified by its local medical policy. If the diagnostic codes on the claim matched codes on this list, the service was approved. Conversely, if a diagnosis was not provided or did not match the accepted codes, the claim was denied and returned to the provider. The provider could then resubmit the claim with a different diagnostic code if appropriate. 
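The joint utilization and diagnostic check described for the Illinois multichannel blood test can be sketched as follows. The logic (two unconditional tests per year, then a diagnosis check) comes from the text; the diagnosis codes themselves are placeholders, not the carrier's actual list.

```python
from typing import Optional

# Placeholder codes standing in for the carrier's local medical policy list.
APPROVED_DIAGNOSES = {"250.0", "272.4", "585.9"}

def adjudicate_blood_test(tests_this_year: int, diagnosis_code: Optional[str]) -> str:
    # Utilization check: the first two tests per year pass unconditionally.
    if tests_this_year <= 2:
        return "approve"
    # Diagnostic check: third and later tests must carry an approved diagnosis.
    if diagnosis_code in APPROVED_DIAGNOSES:
        return "approve"
    # Denied claims are returned; the provider may resubmit with a corrected code.
    return "deny"

print(adjudicate_blood_test(1, None))     # approve
print(adjudicate_blood_test(3, "250.0"))  # approve
print(adjudicate_blood_test(3, None))     # deny
```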
We then asked the other carriers (Northern California, Southern California, North Carolina, South Carolina, and Wisconsin) if they had similar utilization and diagnostic checks to assess the medical necessity of multichannel blood tests. Their responses indicated that two carriers used only a diagnostic screen and the remaining three did not have either a utilization or a diagnostic screen for this service. The carriers’ responses for this service, as well as for the 4 other services selected for analysis, are summarized in table 3. We found that the types of services screened for medical necessity varied across carriers. For example, as shown in table 3, only one of the six carriers (Southern California) screened echocardiography and myocardial perfusion imaging services. Similarly, while four carriers screened multichannel blood test services, the types of screens they used varied. For example, the North Carolina carrier used a utilization screen, the Wisconsin carrier used a diagnostic screen, and the Illinois carrier used both. Table 3 also provides evidence that carrier denial rates were associated with the presence or absence of a screen. For two services, echocardiography and myocardial perfusion imaging, the only carrier (Southern California) that had screens in place had much higher denial rates. While denial rates greater than zero do not always imply the presence of a medical necessity screen (some medical necessity denials may stem from postpayment review activities), denial rates are higher when a carrier has a screen. For the 3 other services—chest x-ray, multichannel blood test, and ophthalmologic exam—the relationship between screening and carrier denial rates was less clear-cut. 
With respect to the multichannel blood test, it is possible that the reason the Illinois carrier had the highest denial rate stemmed from the fact that it used two types of screens, consisting of both a utilization and a diagnostic check, while the other carriers either had no screen (Northern California, Southern California, and South Carolina) or had only a diagnostic check (North Carolina and Wisconsin). This explanation, however, is less satisfactory when attempting to account for carrier variation in denial rates for chest x-rays and ophthalmologic exams. In sum, although the presence or absence of a screen was not sufficient to account for all variation in denial rates across carriers, it is important to note that the highest denial rates were invariably associated with screens. Beyond the simple presence or absence of a screen, the stringency of the screen criteria can also contribute to variation in denial rates across carriers by suspending a greater or lesser number of claims that are then subject to a medical review. We found that, even when screening the same service, carriers used different criteria for suspending claims. For example, the first 12 visits to a chiropractor for spinal manipulation to correct a subluxation must meet certain basic HCFA coverage criteria, such as the following: An x-ray demonstrating the spinal problem must be available, signs and symptoms must be stated, and the precise level of subluxation must be reported. The six carriers had all incorporated these criteria into their medical policies for chiropractic spinal manipulation. HCFA requires that carriers assess the necessity of visits in excess of 12 per year, but carriers diverged in how they assessed such treatments. One carrier stated that, after 12 visits, additional documentation on medical necessity would be required. Another carrier based the number of additional visits allowed on the injured area of the spine. 
When that number of additional visits was reached, this carrier required additional documentation from the provider. Still another carrier stated that, while it reviewed visits beyond 12, it usually did not require additional documentation until the 30-visit mark. While we anticipated variation in denial rates on account of differences in carriers’ implementation of screens, we expected less variation to result from carriers’ differing interpretations of national coverage standards. However, we learned that carriers interpreted and applied the same standards in different ways because some standards leave key elements of the policy undefined. In 1993, Transamerica Occidental Life, in coordination with HCFA, studied claims that it had processed for 17 different services for which Transamerica showed variation in denial rates in 1992 among the six carriers. The following discussion highlights some problem areas uncovered by the Transamerica study that relate to the implementation of national coverage standards. “There is a continued trend toward diagnostic screening for asymptomatic patients which we feel necessitates a formal policy. There is also wide variation among carriers as to the necessity for pre-operative diagnostic testing, and whether it falls within the ‘medical necessity’ coverage of the program. Review of various carriers’ policy indicates that some deny as ‘routine physical examination,’ and not as a medical necessity denial. HCFA needs to clarify their position on this issue so there is more consistency on a national basis.” “HCFA needs to re-evaluate its screening mammography billing and coverage requirements. Many screening services are being performed by nonscreening centers under the nonscreening procedure code. This may reflect a lack of, or inaccessibility to, screening mammography centers. There are also differences among carriers as to what constitutes a screening test. 
Some of the encounter codes used by HCFA as an indication for screening are also being used for diagnostic tests. Further clarification is needed.” Findings from the Transamerica study suggest that, at least with respect to chest x-rays and mammographies, carriers found it difficult to distinguish whether these procedures were performed for screening or diagnostic purposes. It is likely that this difficulty may extend to other types of test procedures. This example illustrates the fact that simply issuing a national coverage standard for a service is not sufficient to ensure consistency of application. While it is probably not feasible for HCFA to develop coverage standards that anticipate every conceivable circumstance under which a claim might be filed, we have identified a coverage issue for chest x-ray and mammography that appears to be in need of further clarification by HCFA. The manner in which carriers treated claims with billing errors or missing information affected denial rates. For example, if a carrier’s medical policy required that the provider indicate the diagnosis when submitting a claim for a particular type of service, and the claim lacked this information, the carrier had several options. The carrier could (1) return the claim to the provider, (2) “develop” the claim (that is, delay adjudication and try to obtain the required information by contacting the provider), or (3) deny the claim. If the first option was exercised and the claim was returned, it was as if the claim had never been submitted. If the second option was exercised and the carrier received the requisite claim information, then the claim was adjudicated. If the third option was selected and the carrier denied the claim, the provider had either to resubmit the claim or go through the appeal process to obtain payment for this service. The resubmitted claims might well be paid, but the carrier’s records would still show that the claim had been denied. (See table 4.) 
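The three options for handling an incomplete claim can be pictured as a routing table. This is a hypothetical sketch; which option a carrier chooses for a given service type is its own policy decision, and the example routings simply echo the surgical and chiropractic examples given in the text.

```python
from enum import Enum

class Action(Enum):
    RETURN_TO_PROVIDER = "treated as if the claim had never been submitted"
    DEVELOP = "delay adjudication and contact the provider for the missing information"
    DENY = "provider must resubmit or appeal; the denial stays in the carrier's records"

# Hypothetical routing policy: develop incomplete surgical claims, deny
# incomplete chiropractic claims, return everything else.
INCOMPLETE_CLAIM_POLICY = {
    "surgical": Action.DEVELOP,
    "chiropractic": Action.DENY,
}

def handle_incomplete_claim(service_type: str) -> Action:
    return INCOMPLETE_CLAIM_POLICY.get(service_type, Action.RETURN_TO_PROVIDER)

print(handle_incomplete_claim("surgical").name)  # DEVELOP
```

The key consequence for the statistics discussed here is that only the DENY branch registers in a carrier's denial counts, so two carriers facing identical incomplete claims can report very different denial rates.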
Although carriers had several ways of processing incomplete claims, the option they selected for any given claim depended on such factors as the cost incurred to develop the claim, the capability of their computer system, and special instructions from HCFA. For example, a carrier might have developed incomplete claims involving surgical procedures while denying incomplete claims involving chiropractic treatments, or the carrier might have rejected claims missing beneficiary health insurance numbers while developing claims with missing provider identification numbers. Because the preceding examples highlight only a handful of the numerous possible combinations that may have been used to process claims with incomplete information, it is difficult to characterize any one carrier’s approach, much less systematically compare differences. However, it is reasonable to infer that carriers that emphasized claim denial over claim development (or rejection) for incomplete claims had higher denial rates than carriers that did not. HCFA has examined this issue and has asked its Office of the General Counsel for advice that would bring consistency to the way that carriers process claims lacking basic information. In brief, HCFA recommends eliminating the denial option for incomplete claims. Claims that lack the requisite information would be returned or deleted and the provider or supplier would be notified. HCFA has noted that carriers have expressed concern over this proposal. Some carriers are against the elimination of the denial option because (1) it would negatively affect their administrative budgets (because deleted or returned claims do not count in their workload statistics), (2) the cost of returning claims can be high, and (3) physicians and suppliers learn how to bill correctly faster when a claim is denied rather than returned. 
HCFA has responded by asserting that “these costs will be more than offset by fewer denied claims, fewer beneficiary inquiries, and fewer unproductive and expensive appeals.” Standardizing the handling of incomplete claims would also improve the accuracy of carrier workload statistics by making them more comparable across carriers. Because carriers used different computer systems to process claims, their internal action codes—which indicate the reason for denying a service—were not identical. To facilitate comparisons, HCFA has required that each carrier translate its own set of internal action codes into 10 broad categories when transmitting data to HCFA’s central database. (See table I.2.) However, because HCFA has given carriers little guidance in performing this task, carriers are uncertain as to how denials should be classified for reporting purposes. This, in turn, has affected the reliability of estimated denial rates. “Changes were made to the reporting classification of messages as a result of our review of Medicare Carriers Manual (MCM) coverage criteria, shifting some of the denials from a medical necessity classification to a coverage classification. There is a great deal of variation among carriers as to whether certain types of ambulance denials are based on medical necessity or coverage. There needs to be more definitive information from HCFA as to how they want the denials to be classified.” We collected and analyzed reporting protocols for the six carriers in this study, and our analysis of these data corroborates Transamerica’s findings. (See appendix IV.) We found that while reported misclassifications of this type do not affect the actual outcome of claims, they can affect the reliability of estimated denial rates for certain services. 
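The translation of internal action codes into HCFA's broad reporting categories amounts to a per-carrier lookup table. In this sketch the internal codes are invented, and only the "medical necessity" and "noncovered" category names appear in the text; the point is that two carriers can map the same kind of denial to different categories, which is the inconsistency described above.

```python
# Two hypothetical carriers mapping their own internal action codes to HCFA
# reporting categories. The same kind of denial (an ambulance claim, say)
# lands in different categories depending on the carrier's mapping.
CARRIER_A_MAP = {"A-100": "medical necessity", "A-200": "noncovered"}
CARRIER_B_MAP = {"B-317": "noncovered"}  # classifies the equivalent denial differently

def to_hcfa_category(mapping: dict, internal_code: str) -> str:
    return mapping.get(internal_code, "unclassified")

# The same underlying denial, reported differently by two carriers:
print(to_hcfa_category(CARRIER_A_MAP, "A-100"))  # medical necessity
print(to_hcfa_category(CARRIER_B_MAP, "B-317"))  # noncovered
```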
For this reason, we calculated separate denial rates for “medical necessity” and “noncovered” care and the combined total (see appendixes II and III) and assessed the degree of intercarrier variability for each category of denial. We found significant intercarrier variability for all three types of denial categories. Reporting inconsistencies of this type affect HCFA’s ability to accurately monitor program operation activities and are thus an area where additional guidance from HCFA could improve the quality of the data it collects. HCFA officials advanced several hypotheses that might help explain variations in carrier denial rates. They focused on provider billing practices as they relate to (1) geographic differences in the level of fraud and abuse, (2) differences across carriers in provider education (that is, efforts aimed at increasing provider awareness of appropriate billing procedures), and (3) high denial rates caused by the aberrant billing practices of a minority of providers. HCFA has not systematically studied this issue and did not provide us with empirical evidence that would support any of these hypotheses. Using claims data, however, we were able to examine one of these hypotheses—whether the billing practices of a minority of providers were responsible for a disproportionate share of service denials. To test this hypothesis, we examined four services that exhibited wide variation in carrier denial rates for medical necessity. Although HCFA did not specify the criteria for identifying providers with aberrant billing practices, we assumed that providers that submit claims that are denied at a high rate have aberrant billing practices. However, such providers may not submit enough claims to substantially affect a carrier’s denial rate for that service. For this reason, we defined providers with aberrant billing practices in two ways: (1) those with the highest denial rates and (2) those with the largest number of denials.
We then calculated a carrier’s denial rate for a service excluding the contribution of the top 5 percent of providers (in terms of both rate and total) to determine whether variations in denial rates were still observable. Table 5 shows that the top 5 percent of providers, in terms of the highest denial rates and highest number of services denied, contributed substantially to carrier denial rates for each of the four services. However, excluding these providers did not eliminate the variation across carriers. For example, the actual range of carrier denial rates for echocardiography was 0 to 173.3 per 1,000 services allowed; excluding the Southern California providers with the highest denial rates, the range was 0 to 154.9; and excluding the Southern California providers with the largest number of services denied, the range was 0 to 63.1. Thus, under both definitions of aberrant billing practice, excluding aberrant practitioners reduced the variability in denial rates for a service but did not eliminate that variation. It is therefore likely that the billing practices of a few providers account for part of the intercarrier variation in denial rates. To further examine provider denial rates for medical necessity, we analyzed the distribution of provider denials for 16 services that had denial rates exceeding 90 per 1,000 allowed. For each service, we calculated the percentage of providers (within a carrier) that accounted for 50 percent of all denials for that service, as well as the percentage of providers with at least one denial. For example, only 6.9 percent of Northern California chiropractors accounted for 50 percent of all denials. Table 6 displays the result of these calculations. Our analysis suggests that a small minority of providers, between 1.5 and 10.6 percent, accounted for 50 percent of services denied for lack of medical necessity (and thus were responsible for the bulk of denials).
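The two calculations above (recomputing a carrier's denial rate with the highest-denial providers excluded, and finding the smallest share of providers that accounts for half of all denials) can be sketched as follows. The claim counts below are hypothetical, not GAO data:

```python
# Hypothetical provider-level counts for one service at one carrier:
# provider -> (services denied for medical necessity, services allowed).
providers = {
    "A": (90, 100),   # high-rate, high-volume provider
    "B": (5, 200),
    "C": (3, 150),
    "D": (2, 250),
}

def denial_rate_per_1000(denied, allowed):
    """Services denied for medical necessity per 1,000 services allowed."""
    return 1000.0 * denied / allowed

# Carrier-level rate including every provider.
total_denied = sum(d for d, _ in providers.values())
total_allowed = sum(a for _, a in providers.values())
overall = denial_rate_per_1000(total_denied, total_allowed)

# Rate excluding the provider with the most denials (the aberrant tail).
top = max(providers, key=lambda p: providers[p][0])
excluded = denial_rate_per_1000(total_denied - providers[top][0],
                                total_allowed - providers[top][1])

# Smallest share of providers accounting for 50 percent of all denials.
by_denials = sorted((d for d, _ in providers.values()), reverse=True)
running, count = 0, 0
for denied in by_denials:
    running += denied
    count += 1
    if running >= total_denied / 2:
        break
share = 100.0 * count / len(providers)
```

With these numbers, a single provider produces most of the denials: the overall rate is about 142.9 per 1,000, the rate excluding that provider falls to about 16.7, and 25 percent of providers account for half of all denials.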
Thus, the screens and medical policies these carriers used to determine the medical necessity of claims primarily affected a relatively small proportion of the provider community. Table 6 also shows that the proportion of providers that had at least one denial varied between 19.5 and 85.5 percent. The latter range suggests that some prepayment screens used to identify inappropriate billing patterns affected a smaller proportion of the provider population than did others. While we cannot explain differing patterns of provider denials—for example, they may stem from unnecessary services being disproportionately offered by a few providers, differences in patient characteristics, variations in billing practices, or a number of other factors—further examination of the reasons for them is warranted given their potential to explain substantial amounts of variation in denial rates. Carrier denial rates were generally low and, for most services, stable across the 2 consecutive years we examined, although rates for some services shifted between years. Medical necessity denial rates for 74 services across six carriers varied substantially. The primary reason for variation in carrier denial rates was that certain carriers used screens for specific services while others did not. Thus, carriers’ choices about which services to screen, and about how stringent to make the screen criteria, probably account for a significant proportion of the variability. Further, a small proportion of the providers accounted for 50 percent of the denied claims. To a lesser degree, the varying interpretation of certain national coverage standards across carriers, differences in the way carriers treated claims with missing information, and reporting inconsistencies helped explain variation in carrier denial rates. We did not attempt to assess whether low or high medical necessity denial rates for individual carriers were appropriate.
Low denial rates are desirable from the standpoint that they imply less annoyance and inconvenience for providers and beneficiaries. However, low denial rates are desirable only insofar as providers do not bill for medically unnecessary services. What is clear from our work is that further analysis of denial rates can provide useful insight into how effectively Medicare carriers are managing program dollars and serving beneficiaries and providers. Since funding constraints limit the number of claims carriers can examine on a prepayment basis, it is important that they use the most effective and appropriate screens. We believe that HCFA could improve its oversight capabilities by actively monitoring data on carrier denial rates and improving the reliability of the data that it collects. Data on denial rates are useful for identifying inconsistencies in the way that carriers assess claims for medical necessity. This information, in turn, could be used to identify the services that certain carriers have found to have billing problems. In addition, for services that are more uniformly screened by carriers, variation in denial rates could indicate that carriers are using different screen criteria, which raises issues of appropriateness and effectiveness. Finally, data on denial rates could be used to construct a profile of the subpopulation of providers that have a disproportionately large number of denials, which might suggest a solution to this problem. 
We recommend that, to improve its oversight of the Medicare Part B program, HCFA issue instructions to carriers on how to classify the reason for denial when reporting this information; analyze intercarrier screen usage (including the stringency of screen criteria), identify effective screens, and disseminate this information to all carriers; and direct carriers to profile the subpopulation of providers responsible for a disproportionate share of medical necessity denials in order to devise a strategy for addressing this problem. At your request, we did not obtain agency comments on a draft of this report. If you or your staff have any questions about this report or would like additional information, please call me at (202) 512-2900 or Kwai-Cheung Chan, Director for Program Evaluation in Physical Systems Areas, at (202) 512-3092. Major contributors to this report are listed in appendix V. We had two objectives in this report. Our first was to determine the extent of carrier variability in denial rates for lack of medical necessity. Our second was to identify and examine factors that contributed to intercarrier variation in denial rates. To develop the information on denial rates, we analyzed a 5-percent sample of 1992 and 1993 claims for the top 74 medical services processed by six carriers (based on their national ranking in terms of total allowed charges in 1992). We also interviewed HCFA officials and representatives of the following six carriers: California Blue Shield (jurisdiction: Northern California), Transamerica Occidental Life Insurance (jurisdiction: Southern California), Connecticut General Life Insurance Company (jurisdiction: North Carolina), Blue Shield of South Carolina, Illinois Blue Cross and Blue Shield, and Wisconsin Physicians’ Service. In selecting carriers for our analysis, we considered geographic location and the number of claims processed. Our sample included two carriers each from the Southeast, the Midwest, and the West.
We sought to maximize the geographic distance between regions while retaining the potential for examining intraregional variation in claims adjudication. With regard to the number of claims processed, we attempted to obtain a mix of large and small carriers. Table I.1 lists the carriers we visited and the number of claims they processed in fiscal year 1992. (Table I.1, which shows the number of claims processed, in millions, for each carrier, is not reproduced here.) Taken together, these six carriers processed about 19 percent of all Part B claims in fiscal year 1992. It should be noted, however, that the judgmental method used to select carriers for this report does not allow us to generalize our findings to the universe of carriers. We obtained data on denial rates from the National Claims History File, a database maintained by HCFA. It contains a wide variety of claim information, including type of medical service billed and type of action carriers take as a result of the claim adjudication process. On the Medicare claim form, each billed service, or line item, appears as a separate charge with a corresponding five-digit service code that describes the type of service provided. (See figure I.1.) For example, code 71020 refers to a chest x-ray. It is important to note that a Medicare claim can contain submitted charges for more than one service. A claim for a physician’s office visit, for example, may also include the charges for laboratory tests performed during the visit. The denial rates presented in this report are based on specific services, not on claims. Each service, or line item, listed on a claim is subject to the carrier’s approval or denial. For each service processed, the carrier must indicate whether the claim for service was approved or denied and, if denied, the specific reason for denial. Table I.2 shows the categories of denial that are reported to HCFA’s central database.
We analyzed services that were denied because they were “medically unnecessary.” We focused on this type of denial because it reflects, to a greater degree, the effect of carrier discretion in claims assessment. That is, determining medical necessity quite often entails the application of a complicated set of decision rules and may ultimately require the individual judgment of a claims reviewer. In contrast, the other types of denial involve more straightforward criteria that can be applied by means of computerized programs (such as whether charges for the same service appear twice on a claim). We calculated denial rates by summing the number of services denied for medical necessity and dividing the total by the number of services allowed for each of 74 services. We excluded from the analysis services denied for reasons other than medical necessity. Although Medicare covers more than 10,000 different medical services, relatively few services account for the bulk of Medicare costs. Our analysis was restricted to the top 74 services, based on their national ranking in terms of the total of allowed charges in 1992. In 1992, the top 74 services constituted approximately 50 percent of all Medicare Part B allowed charges. Services that rank high in allowed charges are either frequently performed (for example, office visits) or costly (for example, angioplasty treatments). (Appendix tables II and III, which show medical necessity (Med.) and coverage (Cov.) denial rates for each of the 74 services by carrier (N. Calif., S. Calif., N.C., S.C., Ill., and Wis.), are not reproduced here.)
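The rate calculation described in this appendix can be sketched as follows; the line items below are hypothetical, not actual claims data:

```python
# A denial rate counts only line items denied for lack of medical necessity
# and divides by the number of line items allowed for that service,
# expressed per 1,000 services allowed. Other denial reasons are excluded.
line_items = [
    {"service": "71020", "outcome": "allowed"},
    {"service": "71020", "outcome": "denied", "reason": "medical necessity"},
    {"service": "71020", "outcome": "denied", "reason": "duplicate"},  # excluded
    {"service": "71020", "outcome": "allowed"},
]

allowed = sum(1 for li in line_items if li["outcome"] == "allowed")
mn_denied = sum(1 for li in line_items
                if li["outcome"] == "denied"
                and li.get("reason") == "medical necessity")
rate_per_1000 = 1000.0 * mn_denied / allowed
```

With two allowed services and one medical necessity denial, the rate is 500 per 1,000 allowed; the duplicate-charge denial does not enter the calculation.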
A carrier might not pay for a particular service for numerous reasons. And, because carriers must explain denials in writing to providers and beneficiaries, carriers must track the specific reason for a denial when processing a claim. This is accomplished by assigning a unique “action code” to each billed service on a claim. For example, code “AB” might indicate that the carrier denied a B-12 injection because the diagnostic code listed on the claim was, based on HCFA coverage parameters, not medically necessary. Similarly, “BB” might indicate that an office visit was denied because the claimant was ineligible for Medicare. While the reasons for denials are generally comparable across all carriers, the “action codes” that carriers use to record the reasons are not; hence, the code “AB” might not be used by all carriers or, if used, might mean something different for each. Before transmitting information to the National Claims History (NCH) File, HCFA’s central database for claims, HCFA requires that each carrier translate its set of action codes into 10 broad denial categories (see table I.2). HCFA does not instruct carriers in how to make this classification. Thus, “AB” might be translated for NCH as “C” (for noncovered service) and “BB” as “O” (other denial). However, given that carriers have different sets of action codes to classify, the question naturally arises: Is the resulting NCH classification comparable across carriers? In other words, does “noncovered service” or “medically unnecessary” mean the same thing to different carriers?
To answer this question, we made use of the fact that carrier action codes are connected to HCFA denial messages (a common set of messages that carriers are required to use in their written communications with beneficiaries). That is, while North Carolina and Wisconsin may use different internal action codes to record the reason for denying a service, they use the same set of HCFA messages to describe that reason to the beneficiary. By comparing the HCFA messages, rather than action codes, with NCH categories, it is possible to gain a sense of how similar different carriers’ coding practices are. For illustrative purposes, table IV.1 displays a sample of carrier action codes, HCFA denial messages, and NCH categories for two carriers. Table IV.1 shows that when North Carolina uses “AA” and Wisconsin “30,” both carriers send the beneficiary the same message: “Medicare pays for transportation to the closest hospital or skilled nursing facility that can provide the necessary care” (HCFA message 1.01). Similarly, when they transmit this information to NCH, both carriers report the denial as relating to “noncovered care.” However, when North Carolina and Wisconsin send the beneficiary the message, “HCFA does not pay for routine foot care” (HCFA message 10.05), they report different reasons for denial to NCH. North Carolina reports this as a “noncovered” care denial while Wisconsin considers it a “medical necessity” denial. Reporting consistency among carriers varies by type of message. For example, table IV.1 shows that there is agreement for three actions and disagreement for one action and, in a third instance, one of the carriers uses a particular HCFA message that the other does not. We collected translation tables, similar to table IV.1, for all six carriers in this study and compared HCFA message numbers with corresponding NCH categories. 
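The comparison described above can be sketched as follows. The message 1.01 example (North Carolina code “AA,” Wisconsin code “30,” both reported as noncovered care) and the message 10.05 disagreement come from the text; the internal action codes shown for message 10.05 are hypothetical:

```python
# carrier -> {internal action code: (HCFA message number, NCH category)}.
# Action codes differ by carrier, but the HCFA message provides a common key.
north_carolina = {"AA": ("1.01", "noncovered"),
                  "AB": ("10.05", "noncovered")}          # "AB" is illustrative
wisconsin = {"30": ("1.01", "noncovered"),
             "31": ("10.05", "medical necessity")}        # "31" is illustrative

def by_message(table):
    """Re-key a carrier's translation table by HCFA message number."""
    return {msg: category for msg, category in table.values()}

nc, wi = by_message(north_carolina), by_message(wisconsin)
shared = sorted(set(nc) & set(wi))
agreement = {msg: nc[msg] == wi[msg] for msg in shared}
```

Here the carriers agree on message 1.01 but report message 10.05 (routine foot care) to NCH under different denial categories, which is exactly the kind of inconsistency that makes NCH denial classifications noncomparable across carriers.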
We restricted our comparison of HCFA messages to those that were (1) used for communicating denials, (2) used by at least three carriers, and (3) classified as a “medical necessity” denial by at least one carrier. Table IV.2 shows how carriers report the service denial reason to NCH when a particular HCFA message is sent to a beneficiary. Table IV.3 displays the actual messages that correspond to the HCFA message numbers. As table IV.2 demonstrates, carriers generally agree on how they classify HCFA messages for reporting purposes; instances of carrier disagreement center primarily on the distinction between “medically unnecessary” and “noncovered care” and, to a lesser extent, on “other.” For messages that HCFA has explicitly designated as pertaining to “medical necessity” (messages 15.01 through 15.33), we found the highest level of carrier agreement. (Table IV.2, CWF Categories by HCFA Message Number and Carrier, is not reproduced here.) The messages in table IV.3 include the following: Medicare pays for transportation to the closest hospital or skilled nursing facility that can provide the necessary care. Medicare does not pay for separate charges by the mile. Medicare does not pay for transportation in a wheelchair van. The information we have in your case does not support the need for this ambulance service. The information we have in your case does not support the need for this transportation. (NOTE: Use of transportation between places of medical care.) The information we have in your case does not support the need for extra help in the ambulance. Medicare pays for the services of a chiropractor only when “recent” x-rays support the need for the services. “Recent” means the x-rays were taken within the past 12 months. Medicare pays for chiropractic services only to correct a subluxation of the spine. Medicare does not pay for this because your x-ray does not support the need for the service. Medicare does not pay for this because the x-ray was not taken near enough to the time treatment began.
Medicare does not pay for this because it is part of the total charge at the place of treatment. Medicare does not pay for this because it is part of the monthly charge for dialysis. Medicare does not pay for immunosuppressive drugs that are not approved by the Food and Drug Administration. Medicare pays for this service up to 1 year after transplant and release from the hospital. Each prescription for immunosuppressive drugs is limited to a 30-day nonrefillable supply. Medicare can pay for this supply or equipment only if your supplier agrees to accept assignment. Medicare can pay only one supplier each month for these supplies and equipment. Medicare cannot pay more than $ — each month for these supplies. (NOTE: The limits for 1992 are $1,600 and $2,080 for CCPD. Update these figures when limits change.) Medicare does not pay for drugs that have not been approved by the Food and Drug Administration. Medicare pays for this drug only when Medicare pays for the transplant. Medicare cannot pay for this because we have not received the information we requested. (NOTE: If assigned claim, add: “The assignment agreement remains in effect and will apply to the new claim.”) Medicare cannot pay for this because your provider used an invalid or incorrect procedure code and/or modifier for the service you received. Please ask your provider to resubmit the claim with the valid procedure code and/or modifier. No certification of medical necessity was received for this equipment. Medicare does not pay for routine foot care. Another agency handles the bills for these services. We have sent the information to them. They will send you a notice. (Applies to RRB, United Mine Workers.) Medicare does not pay for this because the laboratory is not approved for this type of test. Medicare does not pay for laboratory procedures which have not been approved by the Food and Drug Administration.
The information we have in your case does not support the need for this many visits or treatments. The information we have in your case does not support the need for this equipment. The information we have in your case does not support the need for this service. (If the claim was reviewed by your Medical Staff, add: Your claim was reviewed by our Medical Staff.) The information we have in your case does not support the need for this number of home visits per month. The information we have in your case does not support the need for this injection. The information we have in your case does not support the need for this many injections. The information we have in your case does not support the need for similar services by more than one doctor during the same time period. The information we have in your case does not support the need for this many services within this period of time. The information we have in your case does not support the need for more than one visit a day. The information we have in your case does not support the need for the level of service shown on the claim. The information we have in your case does not support the need for similar services by more than one doctor of the same specialty. The information we have in your case does not support the need for this laboratory test. The information we have in your case does not support the need for the level of service shown on this claim. We have approved this service at a reduced level. The information we have in your case does not support the need for this foot care. The information we have in your case does not support the need for more than one screening PAP smear in three years. Medicare does not pay for a surgical assistant for this kind of surgery. The doctor should not bill you for this service. Medicare does not pay for two surgeons for this procedure. Medicare does not pay for team surgeons for this procedure. 
Medicare does not pay for this in the place or facility where you received it. Medicare does not pay for this because the claim does not show that it was prescribed by your doctor. Medicare cannot pay for this service because the claim did not show that the Peer Review Organization approved it. Medicare does not pay for this service separately since payment of it is included in our allowance for other services you received on the same day. Medicare does not pay for this service because it is part of another service that was performed at the same time. Medicare does not pay for this item or service. Medicare does not allow a separate charge for this because it is included as part of the primary service. The provider cannot bill you for this. Medicare does not pay for this because it is a treatment that has yet to be proved effective. Medicare does not pay for these services or supplies. Medicare does not pay for drugs you can give yourself. Medicare does not pay for discussions on the telephone with the doctor. Medicare does not pay separately for a hospital admission and a visit or consultation on the same day. You should not be billed separately for this service. You do not have to pay this amount. (NOTE: Assigned claim.) Medicare does not pay separately for a hospital admission and a visit or consultation on the same day. You do not have to pay this amount. (NOTE: Unassigned claim.) Medicare will pay for only the nursing facility service when performed on the same day as another visit in a different site. You should not be billed separately for this service. You do not have to pay this amount. (NOTE: Assigned claim.) Medicare will pay for only the nursing facility service when performed on the same day as another visit in a different site. You do not have to pay this amount. (NOTE: Unassigned claim.) Medicare does not pay separately for this service. You should not be billed separately for this service. You do not have to pay this amount.
(NOTE: Use for global denials for assigned claims.) Medicare does not pay separately for this service. You do not have to pay this amount. (NOTE: Use for global denials for unassigned claims.) Medicare does not pay for services performed by a private duty nurse. Medicare cannot pay for this service as billed. (NOTE: Use when nonphysician practitioners do not separate professional and technical services on the claim.) Medicare does not pay for routine examinations and related services. Medicare does not pay for this screening examination for women under 35 years of age. The place where you had this examination is not approved by Medicare. Medicare does not pay for this examination because less than one year (two/three years) has (have) passed since your last examination of this kind. Medicare will pay for this screening examination again in one year (two/three years). Medicare pays for this examination only once for women age 35-39. Medicare pays for screening pap smears only once every three years unless high risk factors are present. Medicare does not pay for services of a hospital specialist unless there is an agreement between the hospital and the specialist on how to charge for the services. Medicare will pay for only one hospital visit or consultation per physician per day. You do not have to pay this amount. Medicare will pay for one hospital visit per day. You do not have to pay this amount. Medicare does not pay for this service when performed, referred or ordered by this provider of care. Medicare does not pay for these charges because the cost of the care before and after surgery is part of the approved amount for the surgery. (NOTE: Use for global denials.) Medicare does not pay for cosmetic surgery and related services. Medicare does not pay for a surgical assistant for this kind of surgery. Medicare does not pay a doctor for assisting at this kind of surgery. The doctor cannot bill you for this service.
Medicare does not pay for routine eye examinations or eye refractions. Medicare does not pay for eyeglasses or contact lenses except after cataract surgery or if the natural lens of your eye is missing. Medicare pays for only one pair of glasses after cataract surgery with lens insertion. Medicare does not pay the extra charge for deluxe frames. Medicare does not pay for this service when it is performed in an ambulatory surgical center.

Sushil K. Sharma, Assistant Director
Richard M. Lipinski, Project Manager
Patrick C. Seeley, Communications Analyst
Penny Pickett, Communications Analyst
Venkareddy Chennareddy, Referencer

The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent.

U.S. General Accounting Office
P.O. Box 6015
Gaithersburg, MD 20884-6015

Room 1100
700 4th St. NW (corner of 4th and G Sts. NW)
U.S. General Accounting Office
Washington, DC

Orders may also be placed by calling (202) 512-6000 or by using fax number (301) 258-4066, or TDD (301) 413-0006. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (301) 258-4097 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.
Pursuant to a congressional request, GAO reviewed Medicare Part B claims processing, focusing on the: (1) differences in carriers' denial rates for lack of medical necessity; and (2) factors that contribute to intercarrier variations in denial rates. GAO found that: (1) in 1992 and 1993, denial rates for lack of medical necessity for 74 expensive or heavily utilized services were generally low, but the six carriers reviewed varied significantly in their denial rates; (2) denial rates for the 74 services varied from zero to over 100 per 1,000 services allowed; (3) in general, the carriers' denial rates remained stable for two-thirds of their services in 1992 and 1993; (4) the Medicare program has traditionally allowed carriers to include regional variations in medical practice standards in their criteria for determining allowable claims; (5) the Health Care Financing Administration (HCFA) has developed initiatives to promote consistency in medical policy across carriers; and (6) variations in carrier denial rates stemmed from carriers' differing prepayment screens, varying interpretations of certain national coverage standards, carriers' differing treatment of incomplete claims, and reporting inconsistencies.
The United States consumes energy from three major categories of sources: fossil, nuclear, and renewable. Fossil energy comes from coal, natural gas, crude oil, and petroleum products. Nuclear energy comes from uranium, which is mined and processed into nuclear fuel. This fuel undergoes nuclear fission in a nuclear reactor to produce heat, which is converted into electricity using steam turbine technology. Renewable energy comes from a variety of sources, including biomass, which is organic material from plants and animals and includes liquid biofuels (such as ethanol and biodiesel), wood, and waste (such as municipal solid waste and agricultural byproducts); hydroelectric power; geothermal; wind; and solar. Many sources of renewable energy are converted into electricity before being consumed. The United States domestically produces most of the energy it consumes. However, it also imports a portion, mostly in the form of crude oil and petroleum products. In 2013, the United States consumed over 97 quadrillion British thermal units (Btu) of energy, including over 12 quadrillion Btus of imported energy, according to EIA data. As shown in figure 1, most of this energy (or about 82 percent) came from fossil energy sources. The rest came from renewable and nuclear energy sources. There are four major sectors of the U.S. economy that consume energy at the point of end use: the industrial sector, which includes facilities and equipment used for manufacturing, agriculture, mining, and construction; the transportation sector, which generally comprises vehicles (such as cars, trucks, buses, trains, aircraft, and boats, among others) that transport people or goods; the residential sector, which consists of homes and apartments; and the commercial sector, which includes buildings such as offices, malls, stores, schools, hospitals, hotels, warehouses, restaurants, and places of worship, among others, as well as federal, state, and local facilities and equipment. 
End-use sectors obtain energy from different combinations of sources. The industrial sector mainly consumes natural gas and electricity but also uses some petroleum products as feedstock. The transportation sector mainly consumes gasoline, diesel, and jet fuel; it also consumes biofuels and natural gas, as well as small amounts of electricity. The residential and commercial sectors mainly consume energy from electricity and natural gas but also use some petroleum products. As described above, every sector consumes electricity produced by the electric power sector, which takes electricity generated from fossil, nuclear, or renewable energy and delivers it to the end-use sectors through transmission and distribution lines. As shown in figure 2, the industrial sector consumed the largest share of energy (32 percent or 31.3 quadrillion Btus) in 2013, followed by the transportation, residential, and commercial sectors. Not all of the energy produced is available for consumption at the point of end use, mainly because energy losses occur whenever energy is converted from one form to another. For example, coal-fueled power plants produce electricity by burning coal in a boiler to heat water and produce steam. The steam, at tremendous pressure and temperature, flows into a turbine, which spins a generator to produce electricity. During this process, the burning of coal produces heat energy, some of which converts water into steam. In turn, some of the energy in the steam is converted into electrical energy. At each point in this process, some of the original energy contained in the coal is lost. According to EIA, about two-thirds of the energy consumed to generate electricity is lost in conversion, and most of these losses occur in fossil-fueled and nuclear power plants that generate steam to turn turbines. According to general economic principles, a number of factors may affect the production and consumption of energy.
These factors include the following: Changes in the supply of energy relative to changes in demand may affect the price that consumers pay. For example, if the supply of gasoline increases faster than the demand for it, the price of gasoline will most likely decrease. In contrast, if the demand for gasoline increases faster than the ability to supply it, the price of gasoline will most likely increase. Prices of energy provide signals to producers and consumers and may affect their behavior. For example, lower gasoline prices provide an incentive for consumers to consume more gasoline, while higher gasoline prices provide an incentive for consumers to consume less gasoline. (However, incentives or disincentives may not actually change behavior if they are insufficient to outweigh other factors or considerations.) Costs associated with producing energy provide signals to producers and may affect their behavior. For example, lower production costs provide an incentive for oil companies to produce more gasoline, while higher production costs provide an incentive for oil companies to produce less gasoline. Three important factors—energy efficiency, energy conservation, and the global economic recession of 2007 to 2009—had a major influence on U.S. energy production and consumption over the past decade but generally affected all energy producers and consumers rather than affecting only a specific source of energy. The first two factors are related but have distinct meanings. Energy efficiency is the use of technology that requires less energy to perform the same function, such as using a compact fluorescent light bulb in place of an incandescent bulb. Energy conservation is any behavior that results in the use of less energy, such as turning the lights off when you leave the room. Both energy efficiency and conservation reduce our consumption of energy and thereby influence the total amount of energy produced and consumed in the United States. 
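The conversion losses described earlier, in which roughly two-thirds of the energy consumed to generate electricity is lost, can be sketched with simple arithmetic. The 33 percent thermal efficiency and the fuel input below are illustrative assumptions chosen to be consistent with the EIA figure, not data for any particular plant.

```python
# Conversion-loss sketch for a steam-electric power plant. The 33
# percent thermal efficiency is an illustrative assumption consistent
# with the EIA figure that about two-thirds of input energy is lost.
BTU_PER_KWH = 3412             # Btu delivered per kilowatt-hour of electricity

fuel_input_btu = 10_000        # energy in the coal burned (illustrative)
thermal_efficiency = 0.33      # fraction delivered as electricity

electricity_btu = fuel_input_btu * thermal_efficiency
losses_btu = fuel_input_btu - electricity_btu

print(f"delivered as electricity: {electricity_btu:.0f} Btu "
      f"({electricity_btu / BTU_PER_KWH:.2f} kWh)")
print(f"lost in conversion: {losses_btu / fuel_input_btu:.0%}")
```

Under these assumptions, 10,000 Btu of coal yields about 3,300 Btu of electricity (just under one kilowatt-hour), with the remaining two-thirds lost as heat.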
Some data suggest that the United States increased the efficiency with which it used energy from 2000 to 2013. For example, according to Federal Highway Administration data, the fuel economy of U.S. motor vehicles increased from an average of 16.9 miles per gallon in 2000 to 17.6 miles per gallon in 2012 (the latest year for which data are available). Another measure of the efficiency with which the United States uses its energy is to compare U.S. energy consumption with the U.S. gross domestic product, which is a measure of the value of all final goods and services produced within the United States in a given period. By making this comparison, we can measure whether it takes less energy over time to produce the same value of goods and services. According to EIA data, the consumption of energy per dollar value of gross domestic product decreased from about 7,900 Btus per dollar in 2000 to about 6,200 Btus per dollar in 2013, indicating that the United States became more energy efficient. The third factor, the global economic recession of 2007 to 2009, resulted in reduced economic activity in general, which reduced energy demand and had long-term effects on global energy markets. For example, according to a 2009 International Energy Agency report, the recession’s effects on both energy producers and consumers included the following: Energy companies drilled fewer oil and gas wells and reduced spending on refineries, pipelines, and power stations. Businesses and households spent less on energy-using appliances, equipment, and vehicles. Tighter credit and lower prices made investment in energy savings less attractive financially, while the economic crisis encouraged energy consumers to reduce overall spending. As a result, the deployment of more energy-efficient equipment was delayed. Equipment manufacturers were expected to reduce investment in research, development, and commercialization of more energy-efficient models.
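The energy-intensity comparison above can be reproduced directly from the cited figures. The implied GDP below is back-calculated from those figures and is approximate, not an official statistic.

```python
# Energy intensity (Btu per dollar of real GDP), using the figures
# cited above. The implied GDP is back-calculated from those figures
# and is approximate, not an official statistic.
consumption_2013_btu = 97.1e15   # ~97 quadrillion Btu consumed in 2013
intensity_2000 = 7_900           # Btu per dollar of GDP, 2000
intensity_2013 = 6_200           # Btu per dollar of GDP, 2013

implied_gdp_2013 = consumption_2013_btu / intensity_2013
intensity_change = (intensity_2013 - intensity_2000) / intensity_2000

print(f"implied 2013 real GDP: ${implied_gdp_2013 / 1e12:.1f} trillion")
print(f"change in energy intensity, 2000-2013: {intensity_change:.0%}")
```

The calculation shows energy intensity falling by roughly 22 percent over the period, the decline that the text interprets as improved energy efficiency.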
The federal government supports or intervenes in U.S. energy production and consumption through a number of key methods, including (1) setting standards and requirements, (2) directly providing goods and services, (3) assuming risk, (4) providing funds, and (5) collecting or forgoing revenue from taxes or fees. The federal government also conducts and provides funding for energy-related R&D. Through laws and regulations, the federal government sets standards and requirements for (or prohibitions against) certain activities. Some laws and regulations focus on affecting economic activity by controlling prices, output, or the entry and exit of firms in a market. For example, under the Atomic Energy Act, as amended, the Nuclear Regulatory Commission is responsible for issuing licenses to commercial nuclear reactors and conducting oversight of activities under such licenses to protect the health and safety of the public, among other things. Without such licenses, firms cannot operate nuclear reactors and generate electricity from nuclear energy. Other laws and regulations focus on the effects of economic activity on the health and welfare of citizens. For example, under the Clean Air Act, EPA is responsible for regulating emissions of a variety of air pollutants from coal-fueled power plants and other energy producers. While laws and regulations vary widely in how they are designed, they tend to function by influencing the decisions of producers and consumers in the market. In addition, laws and regulations may impose a variety of costs, such as costs on regulated entities to comply with the laws and regulations (“compliance costs”), and costs on government agencies to administer and enforce them. To the extent that the compliance costs affect the costs of engaging in particular energy-related activities, laws and regulations may change the behavior of energy producers and consumers. 
The federal government provides some goods or services directly—that is, through a government agency—rather than providing funds to another entity to provide these goods or services. For example, the federal government may produce and sell electricity generated at federally-owned facilities and produce reports and information on energy markets, among other things. Government provision of goods or services may be deemed necessary to address certain circumstances, such as economic inequalities among segments of the public or a need for a good or service considered unlikely to be met by the private sector. Such activities may affect energy producers and consumers in different ways. For example, production and sales of electricity generated at federally-owned facilities may involve energy sources and prices that differ from those of electricity produced and sold by private market participants. The federal government assumes risk (and potential costs associated with risk) in a number of ways, such as making direct loans—disbursing funds to nonfederal borrowers under contracts requiring the repayment of such funds either with or without interest; guaranteeing loans—providing a guarantee, insurance, or other pledge regarding the payment of all or a part of the principal or interest on any debt obligation of a nonfederal borrower to a nonfederal lender; limiting liability; and providing or subsidizing insurance. By assuming some or all of the costs associated with risks for certain energy activities, the government may make those activities relatively less expensive, thus providing an incentive to pursue those activities. For example, if the federal government assumes the risk of default on a loan to a manufacturer of turbines (that generate electricity from wind energy), nonfederal lenders may offer a lower interest rate to the manufacturer than they would in the absence of the federal guarantee.
Lowering the costs of capital for developers could result in certain projects being financed that would otherwise not be built. The federal government directly provides (or outlays) funds for different purposes. In some cases, an agency may provide funds in the form of a grant. For example, USDA may provide grants to help farmers, ranchers, and rural small businesses purchase and install renewable energy systems. In other cases, an agency may need to purchase goods or services. For example, federal agencies purchase energy for their buildings, as well as vehicles and fuel for these vehicles. To the extent that federal outlays lower the cost associated with a particular activity, federal outlays may lead to changes in the behavior of energy producers and consumers. The federal government collects revenues using different methods. One prominent method is through the tax system, which includes personal income taxes, corporate income taxes, and excise taxes based on the value of goods and services sold, among other types of taxes. The primary purpose of the federal tax system is to collect the revenue needed to fund the operations of the federal government. Taxes may alter taxpayers’ behavior by inducing them to shift resources from higher-taxed uses to lower-taxed uses in an effort to reduce tax liability. The federal government also collects revenues associated with its management of federal lands. The federal government owns and manages roughly 30 percent of the nation’s total surface area (or about 700 million acres onshore). It also has jurisdiction and control over the outer continental shelf, which includes about 1.8 billion acres of submerged lands in federal waters off the coast of Alaska, in the Gulf of Mexico, and off the Atlantic and Pacific coasts. The federal government leases federal lands for the production of oil, gas, minerals such as coal, or other resources. 
In exchange, the government generally collects revenues, including payments in the form of rents and bonuses, which are required to secure and maintain a lease, and royalties, which are based on the value of the minerals that are extracted. However, the federal government may choose to forgo certain revenues. Tax expenditures are tax provisions that are exceptions to the “normal structure” of individual and corporate income tax necessary to collect federal revenue. These preferences can have the same effects as government spending programs; hence the name tax expenditures. The Congressional Budget and Impoundment Control Act of 1974 identified six types of tax provisions that are considered tax expenditures when they are exceptions to the normal tax, as described in table 1. Tax expenditures may affect the behavior of energy producers and consumers by providing an incentive to engage in certain types of activities. For some tax expenditures, forgone revenues can be of the same magnitude or larger than related federal spending for some mission areas. In addition to forgoing tax revenues, the federal government may choose to forgo revenues associated with its leases of federal lands and waters. “Royalty relief” is a waiver or reduction of royalties that companies would otherwise be obligated to pay for their leases of federal lands or waters. For example, the Outer Continental Shelf Deep Water Royalty Relief Act of 1995 mandated royalty relief for oil and gas leases issued in the deep waters of the Gulf of Mexico from 1996 to 2000. Energy-related R&D takes place across a spectrum of activities, including basic research, applied research, and demonstration. Basic research includes efforts to explore and define scientific or engineering concepts or to investigate the nature of a subject without targeting any specific technology. Applied research includes efforts to develop new scientific or engineering knowledge to create new and improved technologies. 
Demonstration includes efforts to operate new or improved technologies to collect information on their performance and assess readiness for commercialization and deployment for widespread use. The federal government plays a critical role in supporting energy-related R&D, which may involve conducting R&D at government-owned laboratories or funding another entity to conduct R&D. For example, as one of the largest research agencies in the federal government, DOE spends billions of dollars every year on R&D to support its diverse missions, including advancing scientific research and technology development and ensuring efficient and secure energy, among other things. However, because long time lags may occur between basic research activities and activities related to commercialization and deployment, it is often difficult to link government-funded R&D to specific effects on energy production, consumption, and prices in the future. DOE’s R&D covers a broad range of activities, and DOE program offices manage 17 national laboratories. The following DOE program offices and laboratories primarily support energy-related R&D: The Office of Science oversees six national laboratories with research areas focusing on energy: Ames Laboratory in Iowa, Argonne National Laboratory in Illinois, Brookhaven National Laboratory in New York, Oak Ridge National Laboratory in Tennessee, Pacific Northwest National Laboratory in Washington, and Princeton Plasma Physics Laboratory in New Jersey. The Office of Science is the nation’s single largest funding source for supporting research in energy sciences. The Office of Nuclear Energy oversees the Idaho National Laboratory in Idaho. The office’s primary mission is to advance nuclear power as a resource capable of meeting the nation’s energy, environmental, and national security needs by resolving technical, cost, safety, proliferation resistance, and security barriers. 
The Office of Fossil Energy oversees the National Energy Technology Laboratory in Pennsylvania. The office’s primary mission is to ensure reliable fossil energy resources for clean, secure, and affordable energy while enhancing environmental protection. The Office of Energy Efficiency and Renewable Energy oversees the National Renewable Energy Laboratory in Colorado. The office’s mission is to develop solutions for energy-saving homes, buildings, and manufacturing; sustainable transportation; and renewable electricity generation. Federal agencies other than DOE also provide funding for energy-related R&D. For example, as we found in February 2012, the Department of Defense and USDA implemented numerous initiatives to help develop renewable energy technologies. In addition, as we found in August 2012, the National Aeronautics and Space Administration, National Science Foundation, EPA, and National Institute of Standards and Technology implemented a number of energy initiatives related to batteries and energy storage. The following three sections provide information on U.S. production and consumption of fossil, nuclear, and renewable energy from 2000 through 2013 and major factors, including federal activities, that influenced energy production and consumption levels. The fourth section provides information on other federal activities that may have influenced aspects of U.S. energy production and consumption from 2000 through 2013 but were not targeted at a specific energy source, as well as information on federal support for R&D. According to the studies and reports we reviewed, several major factors influenced U.S. production and consumption of fossil energy from 2000 through 2013: Advances in drilling technologies enabled economic production of natural gas from shale and other tight formations. 
These advances led to increases in domestic production of natural gas starting around 2008 and contributed to declines in domestic prices of natural gas starting around 2009. As domestic production rose and prices declined, domestic consumption increased, imports of natural gas decreased, and companies began taking steps to gain approval to export liquefied natural gas. The same advances in drilling technologies also enabled the economic production of crude oil from shale formations. These advances led to increases in the domestic production of crude oil beginning around 2009, reversing a decades-long trend of decreasing production. Global crude oil prices generally increased between 2000 and 2013, marking the largest sustained price increase since comparable data became available. Increased domestic production contributed to lower prices for some regions of the country; however, the impact of increased domestic crude oil production on global crude oil prices was likely small. Imports of crude oil decreased beginning around 2008 as domestic production displaced imported crude oil to U.S. petroleum refiners. Around 2010, U.S. refiners began consuming greater quantities of crude oil to produce more petroleum products. As domestic consumption of petroleum products generally decreased beginning around 2008, exports of petroleum products (mostly diesel fuel) increased. Due in part to lower prices of natural gas, the use of coal for electricity generation decreased in recent years as utilities switched to natural gas. Domestic coal production decreased in recent years; however, coal exports increased as domestic consumption declined faster than domestic production. The studies and reports we reviewed also indicated that federal activities may have influenced fossil energy markets, generally by providing incentives or disincentives for the production and consumption of fossil energy.
These activities included setting standards and requirements related to fossil energy emissions, assuming risks associated with oil spills and the Strategic Petroleum Reserve, collecting revenues associated with excise taxes on transportation fuels and royalty payments for oil and gas leases, and forgoing revenues associated with tax expenditures for fossil energy producers and royalty relief for oil and gas production. (See app. III for more information on the major factors that influenced U.S. production and consumption of fossil energy from 2000 through 2013.) According to the studies and reports we reviewed, several major factors may have influenced U.S. production and consumption of nuclear energy from 2000 through 2013. Specifically, declining natural gas prices, along with the 2011 accident at Japan’s Fukushima Daiichi commercial nuclear power plant, may have led to decreases in the production and consumption of nuclear energy in recent years. Federal activities also may have influenced this trend, generally by providing incentives or disincentives for the production and consumption of nuclear energy. These activities included setting standards and requirements related to the operation of nuclear power plants, providing services related to the storage of nuclear waste, assuming risks associated with nuclear power plant operations, and forgoing revenues associated with tax expenditures for nuclear energy producers. (See app. IV for more information on the major factors that influenced U.S. production and consumption of nuclear energy from 2000 through 2013.) According to the studies and reports we reviewed, several major factors influenced U.S. 
production and consumption of renewable energy—particularly from ethanol, wind energy, and solar energy—from 2000 through 2013: Federal tax credits for ethanol and federal policies requiring the use of ethanol in transportation fuels were major factors influencing an 8-fold increase in the production and consumption of ethanol from 2000 through 2013. As domestic production of ethanol outpaced consumption in recent years, U.S. exports of ethanol increased. State policies requiring the use of renewable energy in electricity production, as well as federal activities such as outlays and tax credits for renewable energy producers, were major factors influencing production and consumption of electricity from wind and solar energy. Technological advances also played an important role. These factors supported a 30-fold increase in production and consumption of wind energy from 2000 through 2013 and a 19-fold increase in the production and consumption of solar energy. See appendix V and appendix VI for more information on the major factors that influenced U.S. production and consumption of renewable energy from 2000 through 2013. According to the studies and reports we reviewed, other federal activities were not targeted specifically at fossil, nuclear, or renewable energy production and consumption but may have influenced aspects of U.S. energy production and consumption from 2000 through 2013. Relevant federal activities included setting standards and requirements for energy efficiency, selling electricity, providing loans and loan guarantees related to energy efficiency, making outlays for energy consumption and energy efficiency, and forgoing revenues through tax expenditures for electricity transmission and energy efficiency, among other things.
Many of these federal efforts, particularly activities related to energy efficiency, provided disincentives for energy production and use; in contrast, other federal efforts, such as selling electricity, provided incentives for energy production and consumption. In addition, the federal government, and DOE in particular, supported energy-related R&D, which typically is directed at the early stages of technological advances and therefore not generally linked to actual production or consumption of energy. Some of this R&D related to specific energy sources, while other R&D was more general. (See app. VII for more information on federal activities that may have influenced other aspects of U.S. energy production and consumption from 2000 through 2013.) We provided a draft of this report to DOE, Interior, Treasury, and USDA for review and comment. DOE, Treasury, and USDA provided technical or clarifying comments, which we incorporated as appropriate. Interior indicated it had no comments on the report. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the appropriate congressional committees; the Secretaries of Agriculture, Energy, the Interior, and the Treasury; the Administrator of EIA; the Director of OMB; and other interested parties. In addition, this report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff members who made major contributions to this report are listed in appendix VIII. This report provides information on U.S.
production and consumption of fossil, nuclear, and renewable energy from 2000 through 2013 and major factors, including federal activities, that influenced energy production and consumption levels. It also provides information on other federal activities that may have influenced aspects of U.S. energy production and consumption from 2000 through 2013 but were not targeted at a specific energy source, as well as information on federal support for research and development (R&D). To provide information on U.S. production and consumption of fossil, nuclear, and renewable energy from 2000 through 2013, we reviewed and analyzed Department of Energy (DOE) Energy Information Administration (EIA) historical data, as well as EIA articles and monthly and annual reports. To assess the reliability of EIA data, we took several steps including reviewing available documentation on the collection of the data. We determined the EIA data to be sufficiently reliable for our purposes. To provide information on the major factors, including federal activities, that influenced energy production and consumption levels from 2000 through 2013; to provide information on other federal activities that may have influenced U.S. energy production and consumption from 2000 through 2013 but were not targeted at a specific energy source; and to provide information on federal R&D, we performed the following steps: We reviewed information from our prior work related to the production and consumption of crude oil, petroleum products, natural gas, coal, nuclear energy, ethanol, wind energy, solar energy, energy efficiency, electricity transmission, and federal R&D. We relied on our prior work to identify the major factors that influenced U.S. energy production and consumption levels. However, this work may not have identified all of the relevant factors that influenced energy production and consumption from 2000 through 2013. 
In addition, we did not examine permitting issues on federal lands, including issues related to infrastructure and oil and gas development. We reviewed information related to U.S. energy production and consumption from federal agencies and government organizations, including reports and studies by the Congressional Budget Office (CBO), Congressional Research Service (CRS), EIA, congressional Joint Committee on Taxation (JCT), and U.S. Department of Agriculture (USDA). To identify reports and studies, we conducted searches of various databases, such as ProQuest and PolicyFile, for studies published since 2008. We also asked agency officials and other stakeholders we contacted to recommend reports and studies. We relied on these reports and studies to identify the major factors that influenced U.S. energy production and consumption levels. However, these reports and studies may not have identified all of the relevant factors that influenced energy production and consumption from 2000 through 2013. We reviewed and analyzed data on outlays, royalties collected, and excise taxes collected from DOE, Department of the Interior (Interior), Department of the Treasury (Treasury), and Office of Management and Budget (OMB). To assess the reliability of these data sets, we interviewed individuals with knowledge of them and reviewed available documentation on the collection of the data and on any methods that were used in calculating the data. From this review, we determined that the data sets were sufficiently reliable for our purposes. We reviewed and analyzed data on estimates of tax expenditures, forgone royalties, and federal credit programs collected from DOE, JCT, and Treasury. We also relied on lists of tax expenditures and estimates of their cost compiled annually by Treasury and JCT under the energy budget function. In general, we used Treasury revenue loss estimates for each tax expenditure except in cases where only JCT reported a tax expenditure. 
Regarding data on tax expenditures, changes in economic conditions and estimation techniques can affect revenue loss estimates for tax expenditures, making them differ from year to year. Also, legislation affecting tax rates or the tax structure affects tax expenditure estimates. When statutory rates increase, a taxpayer’s ability to reduce tax on a portion of income is worth more; consequently, tax expenditures are worth more. Likewise, when rates decrease, tax expenditures are worth relatively less. To assess the reliability of these data sets, we interviewed individuals with knowledge of them and reviewed available documentation on the collection of the data and on any methods that were used in calculating the data. From this review, we determined that the data sets were sufficiently reliable for our purposes. We conducted this performance audit from August 2013 to September 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The federal budget provides data on energy-related outlays. The federal budget is divided into different functional areas, which is a way of grouping budgetary resources so that all budget authority and outlays of on-budget and off-budget federal entities and tax expenditures can be presented according to the national needs being addressed. National needs are grouped in 17 broad areas, one of which is “energy.” The energy area includes (1) promoting an adequate supply and appropriate use of energy to serve the needs of the economy and (2) energy programs of the Department of Energy (DOE) and its predecessor agencies. 
It also excludes atomic energy defense activities and general science research not closely related to energy. The Office of Management and Budget’s (OMB) public budget database contains historical data on outlays associated with each functional area, including the energy area. However, the federal budget does not provide a comprehensive source for all federal outlays that might be related to energy production and consumption. Instead, the 17 functional areas in the federal budget relate to the primary area of a given account, even though the programs within the account may serve a variety of purposes. As a result, the public budget database may not identify all energy-related activities in the energy functional area. For example, the federal budget identifies seven agencies, including DOE and the U.S. Department of Agriculture (USDA), with outlays in the energy supply subfunction in fiscal year 2010. However, as we found in February 2012, 17 federal agencies besides DOE and USDA—such as the Departments of Defense and the Interior—implemented renewable energy initiatives for fiscal year 2010. The Department of the Treasury (Treasury) and the congressional Joint Committee on Taxation (JCT) report revenue loss estimates for energy-related tax expenditures. Both Treasury and JCT estimate the revenue loss associated with each tax provision they have identified as a tax expenditure. Treasury’s list is included in the President’s annual budget submission; and JCT issues annual tax expenditure estimates as a stand-alone product. Both organizations calculate a tax expenditure as the difference between tax liability under current law and what the tax liability would be if the provision were eliminated and the item were treated as it would be under a “normal” income tax.
In general, the tax expenditure lists that Treasury and JCT publish are similar, although these lists differ somewhat in the number of tax expenditures reported and the estimated revenue losses for particular expenditures. In addition, as with the federal budget, both lists of tax expenditures are divided into different functional areas, including one related to energy. However, Treasury’s list and JCT’s list of tax expenditures may not include all tax expenditures that provide a benefit to energy producers. For example, both Treasury and JCT list the deduction for income attributable to domestic production activities as a tax expenditure. This tax expenditure allows a deduction of 6 percent from taxable income for oil extraction, among other things. According to Treasury estimates, repealing this provision would result in more than $17 billion in revenue related to oil and natural gas production for fiscal years 2014 through 2023. However, because this tax expenditure is available to other industries besides oil production, Treasury and JCT do not list this tax expenditure under the energy functional area. This appendix provides more detailed information on U.S. production and consumption of fossil energy from 2000 through 2013 and major factors, including federal activities, that influenced fossil energy production and consumption levels. The year-to-year pattern of domestic production of natural gas fluctuated from 2000 through 2006 and then began to increase around 2007, according to Energy Information Administration (EIA) data, and as shown in figure 3. Specifically, the United States produced about 19.2 trillion cubic feet of natural gas in 2000; by 2005 and 2006, production had fallen below 19 trillion cubic feet but then began to increase, reaching over 24 trillion cubic feet in 2012 and 2013. Domestic consumption of natural gas exceeded domestic production throughout the period, with the difference coming from imports, primarily from Canada.
However, as shown in figure 3, the difference between the domestic consumption and production of natural gas generally decreased between 2007 and 2013, leading to a reduction in natural gas imports. Natural gas is used by a number of sectors in the economy, most notably for electricity generation; for industrial use as a source of heat or as a feedstock for petrochemical production, among other things; for residential heating and other home uses; and for commercial heating and other uses. Figure 4 shows the share of natural gas consumption by sector in 2000 and 2013. Specifically, according to EIA data, natural gas consumption for electricity generation (as well as other energy needs of the electric power sector) increased from about 5.2 trillion cubic feet in 2000 to about 8.2 trillion cubic feet in 2013. Natural gas consumption for commercial use also increased, from about 3.2 trillion cubic feet in 2000 to about 3.3 trillion cubic feet in 2013. Industrial and residential uses declined over the same period. The studies and reports we reviewed indicated that increases in the domestic production of natural gas were due primarily to increases in the extraction of natural gas from shale formations. As shown in figure 5, the production of natural gas from shale formations caused an increase in total domestic natural gas production starting around 2008 and continuing through 2012 (the latest year for which annual data were available). According to EIA data, natural gas withdrawals from shale formations increased from about 2 trillion cubic feet in 2007 to over 10 trillion cubic feet in 2012. These increases were largely due to technological advances in horizontal drilling and hydraulic fracturing—a process that injects a combination of water, sand, and chemical additives under high pressure to create and maintain fractures in underground rock formations that allow oil and natural gas to flow. 
For example, as we reported in January 2012, improvements in horizontal drilling and hydraulic fracturing led to a boom in the production of natural gas from shale formations. In addition, according to the Congressional Research Service (CRS), in recent years the oil and gas industry improved its extraction rate of natural gas from shale formations from about 5 percent to about 15 percent of total estimated gas resources in the ground (thereby tripling the amount of recoverable natural gas). Increases in domestic natural gas production contributed to decreases in domestic natural gas prices, according to CRS. The price of natural gas, as with other commodities, is driven by supply and demand. The Henry Hub spot market in Louisiana is the best known spot market for natural gas. As shown in figure 6, annual prices for natural gas in the Henry Hub spot market generally increased between 2000 and 2008 (although some fluctuations occurred) before decreasing between 2008 and 2013. Specifically, in 2000, the annual spot price was $4.31 per million British thermal units (Btu) of natural gas. This price generally increased to $8.69 per million Btus in 2005 and $8.86 per million Btus in 2008. Since 2008, the annual price generally decreased to $3.73 per million Btus in 2013. In recent years, natural gas prices in the United States were much lower than in other parts of the world, according to CRS. This price difference encouraged some American companies to apply for authorization to export domestically produced liquefied natural gas from the contiguous 48 states. Specifically, since 2010, the Department of Energy (DOE) has received more than 30 applications for permission to export liquefied natural gas to countries that do not have a free trade agreement with the United States.
DOE has fully approved 3 applications and approved 6 others on the condition that the Federal Energy Regulatory Commission issues a satisfactory environmental review of the associated liquefied natural gas export facility. Moreover, EIA has projected that the United States could be a net exporter of liquefied natural gas by 2016. The United States produces crude oil, which is refined along with imported crude oil into petroleum products such as gasoline, diesel, and jet fuel. U.S. refineries use both domestically produced crude oil and imported crude oil to produce petroleum products. According to EIA data, there were 143 petroleum refineries in the United States as of January 2013, with a capacity to process 17.8 million barrels of crude oil per day. The United States also both exports and imports petroleum products. Domestic production of crude oil declined from 2000 through 2008, continuing a downward trend beginning in the 1970s, but increased beginning in 2009, according to EIA data. Specifically, as shown in figure 7, between 2000 and 2008, domestic crude oil production decreased from an average of about 5.8 million barrels of oil per day to 5.0 million barrels per day. However, by 2013, domestic crude oil production had increased to an average of almost 7.5 million barrels per day, the highest level of oil production since 1989. In 2013, an average of about 15.3 million barrels per day of crude oil was refined into petroleum products at U.S. refineries. As domestic crude oil production increased, imports of foreign crude oil decreased. Specifically, crude oil imports decreased from an average of about 9.8 million barrels per day in 2008 to 7.7 million barrels per day in 2013. Overall domestic consumption of petroleum products—including gasoline, diesel, and jet fuel—peaked in 2005 and then generally declined through 2013, according to EIA data. 
Specifically, as shown in figure 8, consumption of petroleum products increased from an average of 19.7 million barrels per day in 2000 to 20.8 million barrels per day in 2005 before declining to between 18 million and 19 million barrels per day in recent years. Amid declining domestic consumption and somewhat increasing U.S. refining capacity, refiners have increasingly exported petroleum products. Specifically, U.S. exports of petroleum products grew from an average of about 1.1 million barrels per day in 2005 to about 3.5 million barrels per day in 2013. The United States had been a net importer of petroleum products since 1949, but in 2011 it became a net exporter, primarily of diesel. The primary users of petroleum products in the United States are the transportation and industrial sectors, according to EIA data. As shown in figure 9, the transportation sector consumed the largest share of petroleum products in 2000 and 2013 (at about 4.8 billion barrels). The industrial sector consumed the next largest share of petroleum products (at about 1.8 billion barrels in 2000 and about 1.7 billion barrels in 2013), while the remaining sectors (commercial, residential, and electric power) consumed the smallest share (at about 0.7 billion barrels in 2000 and about 0.3 billion barrels in 2013). Overall, total consumption of petroleum products increased from about 7.2 billion barrels in 2000 to about 7.6 billion barrels in 2007 before decreasing to about 6.9 billion barrels in 2013. The studies and reports we reviewed indicated that increases in domestic crude oil production came primarily from increased production from shale formations. As we found in March 2014, the technological advances in horizontal drilling and hydraulic fracturing that contributed to increasing U.S.
production of natural gas have also allowed companies that develop petroleum resources to extract crude oil from shale and other formations that were previously considered to be inaccessible because traditional techniques did not yield sufficient amounts for economically viable production. According to EIA data, much of the increase in crude oil production has been from shale and other tight formations, such as the Bakken formation in North Dakota and the Eagle Ford formation in Texas. Production from these two states accounted for 87 percent of the increase in U.S. crude oil production from 2008 through 2013. Prices of crude oil increased considerably between 2000 and 2013 but show a distinctive pattern, according to EIA data. First, crude oil prices generally increased from 2000 to 2008. Crude oil is a widely traded global commodity, and prices of crude oil are generally determined by global supply and demand rather than exclusively by events in a single oil-producing country such as the United States. Since the mid-1980s, benchmark crude oil prices such as West Texas Intermediate in the United States and Brent in Europe have served as reference points that the global oil market uses for pricing other crude oils. As shown in figure 10, crude oil prices in these two markets increased from about $30 per barrel in 2000 to almost $100 per barrel in 2008—one of the largest and most sustained crude oil price increases since comparable data were available. In 2009, crude oil prices in both markets decreased to about $62 per barrel, reflecting the effects of the global economic recession of 2007 to 2009 and the consequent falling demand for petroleum products. In 2010, crude oil prices in both markets increased. Starting in 2010, increases in U.S. production of crude oil contributed to lower prices for some domestic crude oils and may have had a small effect on some global oil prices, according to EIA.
Specifically, as shown in figure 10, West Texas Intermediate crude oil began to sell at a large discount relative to Brent crude oil starting around 2011. This price divergence was due to increases in domestic crude oil production—existing crude oil pipelines were constrained in their ability to ship additional quantities of oil from the midcontinental United States and Canada to refineries, which limited the ability of this oil to reach global oil markets. In terms of global effects, in 2013 U.S. crude oil production grew more than the combined increase in the rest of the world, which contributed to relatively stable global crude oil prices in 2013, according to EIA. The United States has the largest recoverable coal reserves in the world, according to EIA, and most domestically produced coal comes from five states: Wyoming, West Virginia, Kentucky, Pennsylvania, and Illinois. Domestic production of coal fluctuated somewhat during the past decade but declined by about 16 percent from 2008 through 2013, according to EIA data. Specifically, as shown in figure 11, the United States produced 1.07 billion short tons of coal in 2000; by 2008, production had risen to 1.17 billion short tons. However, coal production fell below 1 billion short tons by 2013—the lowest level in almost 2 decades, according to EIA data. Domestic consumption of coal generally increased from 2000 (at 1.08 billion short tons) to 2007 (at 1.13 billion short tons). However, starting in 2008, coal consumption generally decreased and reached 0.93 billion short tons in 2013. As domestic consumption of coal fell, the United States generally exported more coal to Europe and Asia. The vast majority of coal consumed in the United States is used for generating electricity. The studies and reports we reviewed indicated that the recent decrease in domestic coal production and consumption came partly from declines in the use of coal to generate electricity.
In the past, the fuel cost of generating one kilowatt-hour of electricity from natural gas had typically been higher than that of coal, according to EIA. However, coal began losing its price advantage over natural gas for electricity generation in some parts of the country in 2009, particularly in the eastern United States, according to EIA. In addition, new natural gas-fueled generating units generally are able to convert fuel into electricity more efficiently than existing coal-fueled generating units, meaning they can convert a unit of fuel energy into more electricity than coal-fueled units. Newer designs of coal-fueled units exist that can operate at higher efficiencies, but few have been built in the United States. In addition, recently proposed or finalized EPA regulations affecting coal-fueled electricity generating units may also have played a role in recent decreases in domestic coal production and consumption. The price of coal depends on its type, in part because different types of coal produce differing amounts of energy when burned. According to EIA, two of the most common types in the United States are bituminous and subbituminous coal. Bituminous coal is the oldest and most abundant coal type found in the United States. West Virginia, Kentucky, and Pennsylvania are the primary producers of bituminous coal. Subbituminous coal contains less energy than bituminous coal; however, large quantities are found in thick beds near the surface, resulting in low mining cost and, correspondingly, lower prices. Wyoming produces the vast majority of subbituminous coal. As shown in figure 12, annual U.S. prices for bituminous and subbituminous coal generally increased from 2000 to 2012 (the latest year for which data are available). For example, for bituminous coal, prices increased from $24.15 per short ton in 2000 to $66.04 per short ton in 2012, or an increase of over 170 percent. 
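The fuel-cost comparison between gas-fueled and coal-fueled units described above turns on two inputs: the delivered fuel price and the unit's heat rate (Btu of fuel burned per kWh of electricity generated). A minimal sketch, with illustrative prices and heat rates rather than EIA figures:

```python
def fuel_cost_per_mwh(price_per_mmbtu, heat_rate_btu_per_kwh):
    """Fuel cost of one MWh: a unit burns (heat_rate / 1000) million Btu
    of fuel per MWh, priced per million Btu."""
    return price_per_mmbtu * heat_rate_btu_per_kwh / 1_000

# Assumed heat rates for illustration: ~7,000 Btu/kWh for a newer
# combined-cycle gas unit, ~10,000 Btu/kWh for an older coal unit.
# A more efficient gas unit can undercut coal on fuel cost even when
# gas is the more expensive fuel per million Btu.
gas_cost  = fuel_cost_per_mwh(4.00, 7_000)   # $28 per MWh
coal_cost = fuel_cost_per_mwh(2.50, 10_000)  # $25 per MWh
```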
Some of these cost increases may be due to increases in coal transportation costs and declines in mine productivity during this period, according to EIA. As the price of coal increased, it reduced coal’s price advantage relative to other energy sources, such as natural gas, which decreased in price over this period. In addition to the major factors we identified above from the studies and reports we reviewed, we identified a number of federal activities that may have also played a role in influencing U.S. production and consumption of fossil energy from 2000 through 2013. These activities included setting standards and requirements for emissions from electricity generating units, assuming risks associated with oil production, collecting excise taxes and royalty payments, and providing tax expenditures and royalty relief. Some of these activities provided an incentive to produce and consume fossil energy, while others provided a disincentive for its production and consumption. The federal government established or strengthened a number of standards and requirements related to fossil energy production from 2000 through 2013. For example, under the Clean Air Act, the Environmental Protection Agency (EPA) establishes national ambient air quality standards for six pollutants, which states are primarily responsible for attaining. States attain these standards, in part, by regulating emissions of these pollutants from certain stationary sources, such as electricity generating units. In particular, according to EPA, fossil fuel-fired electricity generating units are among the largest emitters of sulfur dioxide and nitrogen oxides, which have been linked to respiratory illnesses and acid rain, as well as of carbon dioxide, the primary greenhouse gas contributing to climate change. Numerous Clean Air Act requirements apply to electricity generating units, including New Source Review, a permitting process established in 1977.
Under New Source Review, owners of generating units must obtain a preconstruction permit that establishes emission limits and requires the use of certain emissions control technologies. New Source Review applies to (1) generating units built after August 7, 1977, and (2) existing generating units—regardless of the date built—that seek to undertake a “major modification,” a physical or operational change that would result in a significant net increase in emissions of a regulated pollutant. In general, the cost of complying with New Source Review requirements provided a disincentive for producing electricity from fossil energy sources. As we found in June 2012, EPA has investigated most coal-fired generating units at least once for compliance with New Source Review requirements since 1999, and has alleged noncompliance at more than half of the units it investigated. Specifically, of the 831 units EPA investigated, 467 units were ultimately issued notices of violation, had complaints filed in court, or were included in settlement agreements. In total, EPA reached 22 settlements covering 263 units, which will require affected unit owners to, among other things, install around $12.8 billion in emissions controls. According to our analysis of EPA data, these settlements will reduce emissions of sulfur dioxide by an estimated 1.8 million tons annually, and nitrogen oxides by an estimated 596,000 tons annually. The federal government assumed some risks related to fossil energy production and consumption from 2000 through 2013. For example, the federal government assumed financial risks associated with potential cleanup costs for some oil spills, and the federal government acquired billions of dollars worth of crude oil to hold in reserve in case of supply disruptions, as discussed below: Cleanup costs for oil spills. 
Under the Oil Pollution Act of 1990, as amended, which was enacted after the Exxon Valdez oil spill in 1989, the federal government established a “polluter pays” system that places the primary burden of liability for costs of spills on the responsible parties, up to a specified limit of liability. In general, the level of potential financial liability under the act depends on the kind of vessel or facility from which a spill originates and is limited in amount. However, if the oil discharge is the result of gross negligence or willful misconduct, or a violation of federal operation, safety, and construction regulations, then liability under the act is unlimited. In addition, the act provides the Oil Spill Liability Trust Fund to pay for oil spill costs when the responsible party cannot or does not pay. The fund’s primary revenue source is an 8-cent-per-barrel tax on petroleum products—a small fraction of the price of a barrel in 2013—either produced in the United States or imported from other countries. The fund is subject to a $1 billion cap on the amount of expenditures from the fund per incident. Stockpiling crude oil. Congress created the Strategic Petroleum Reserve in 1975, following the Arab oil embargo of 1973 to 1974, to help protect the U.S. economy from damage caused by oil supply disruptions. The reserve is owned by the federal government and operated by DOE. It can store up to 727 million barrels of crude oil in salt caverns. The President has discretion to authorize release of oil in the Strategic Petroleum Reserve to minimize significant supply disruptions. In the event of such a disruption, the reserve can supply oil to the market by either selling stored crude oil or trading this oil in exchange for a larger amount of oil to be returned later.
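The observation that the 8-cent-per-barrel tax is a small fraction of the price of a barrel can be made concrete. The $100 price below is an assumed round figure for 2013 crude, not a quoted market price:

```python
OIL_SPILL_TAX_PER_BARREL = 0.08   # dollars per barrel, per the act
assumed_price_per_barrel = 100.0  # illustrative round figure for 2013

# The tax works out to well under a tenth of one percent of the barrel
# price, so its disincentive effect on production is correspondingly small.
tax_share = OIL_SPILL_TAX_PER_BARREL / assumed_price_per_barrel  # 0.0008
```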
From fiscal year 2000 through 2013, the federal government received almost $3.9 billion from the sale of crude oil from the reserve, spent about $0.5 billion to purchase crude oil, and spent $2.5 billion for operations and maintenance of the reserve. The assumption of liability by the federal government for some oil spills may have provided an incentive for oil production and consumption by potentially decreasing the overall cost associated with certain production-related activities. For example, the liability limitations established under the Oil Pollution Act may have lowered costs for liability insurance or other insurance paid for by oil producers. However, the extent to which this federal intervention influenced changes in petroleum or natural gas production or consumption is difficult to precisely measure. Moreover, the fund—which is paid by oil producers—raises the cost of producing oil by a small fraction, which may have a negative impact on oil production. DOE’s operation of the Strategic Petroleum Reserve may have had little to no impact on fossil energy production and consumption between 2000 and 2013. For example, we found in August 2006 that filling the Strategic Petroleum Reserve from late 2001 through 2005 during a time of tight supply and demand conditions had minimal impact on oil prices because the volume was so small compared with world oil demand, according to most of the experts with whom we spoke. However, some experts believed the existence of the Strategic Petroleum Reserve may have a stabilizing effect on oil prices, particularly during extreme supply or demand events, whether or not it is actually used. If true, the reserve could have had positive effects on oil producers or consumers by reducing the risks associated with unstable prices.
From 2000 through 2013, the federal government collected revenues through excise taxes and royalty payments related to fossil energy production and consumption while forgoing other related revenues through tax expenditures and royalty relief. Regarding excise taxes, the federal government collected about $637 billion through excise taxes targeting or related to fossil energy—primarily motor fuels (gasoline, diesel, and others)—from fiscal year 2000 through 2012. The federal excise tax rate on gasoline is 18.4 cents per gallon (the same amount as in 1993). Most revenues from these taxes are dedicated to the Highway Trust Fund, which was established by Congress in 1956 and is a major source of funding for various surface transportation programs. As shown in figure 13, revenues from excise taxes targeting or related to fossil energy were about $45 billion a year from fiscal year 2000 through 2004 and increased to about $50 billion a year for the rest of the period. Regarding royalty payments, the federal government collected more than $124 billion in revenues from royalty and other payments for federal oil, gas, and coal leases from fiscal year 2003 through 2013. As shown in figure 14, revenues from royalty and other payments increased from almost $8 billion in fiscal year 2003 to $23.4 billion in fiscal year 2008, then decreased to about $9 billion in fiscal years 2009 and 2010 before increasing to about $13.2 billion in fiscal year 2013. Regarding tax expenditures, the federal government incurred revenue losses of almost $50 billion from fiscal year 2000 through 2013 due to 16 tax expenditures we identified as targeting or related to fossil energy, according to Department of the Treasury (Treasury) and Joint Committee on Taxation (JCT) estimates. As shown in figure 15, revenue losses associated with these 16 tax expenditures increased from less than $2 billion in fiscal year 2000 to over $4.6 billion in both fiscal years 2006 and 2007.
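The order of magnitude of the excise tax revenues above can be checked with a back-of-the-envelope sketch. The gasoline consumption figure below is an assumed round number for illustration and is not drawn from this report:

```python
def excise_revenue(gallons, rate_per_gallon):
    """Annual excise revenue from a motor fuel at a flat per-gallon rate."""
    return gallons * rate_per_gallon

GASOLINE_RATE = 0.184     # the 18.4-cent federal rate cited above
assumed_gallons = 135e9   # ~135 billion gallons a year (illustrative)

# Gasoline alone would yield roughly $25 billion a year at this rate;
# diesel and other taxed fuels would account for much of the remainder
# of the roughly $45-50 billion annual totals cited above.
gasoline_revenue = excise_revenue(assumed_gallons, GASOLINE_RATE)
```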
They decreased from fiscal year 2007 through 2010 before increasing to about $4.6 billion in fiscal year 2012 and declining to $4.1 billion in fiscal year 2013. Table 2 provides descriptions of these 16 federal tax expenditures. The table also provides information from Treasury on tax expenditures that will or have expired, in full or in part, due to an expiration of legislative authority or some other expiration under the law as of the fall of 2014, as well as on tax expenditures that currently have no expiration. In addition, the table provides information on revenue loss estimates from Treasury (unless otherwise specified). The federal government may have incurred additional revenue losses associated with other tax provisions related to fossil energy; however, we were unable to identify data on the level of federal revenue losses. For example, JCT and Treasury identified three tax provisions as being related to the oil and gas industry but also broadly available to taxpayers engaged in energy-related and non-energy-related activities, such as manufacturing or trade. These three tax provisions are described in table 3. For these provisions, JCT and Treasury did not estimate annual revenue losses attributable to fossil energy production or consumption for fiscal year 2000 through 2013, and therefore this information was not readily available. Regarding royalty relief, the federal government provided nearly $12 billion in royalty relief for oil and gas production from 2000 through 2012, according to Interior estimates. As shown in figure 16, revenue losses associated with royalty relief increased from $40 million in 2000 to more than $2 billion in 2011, before declining to about $1.9 billion in 2012 (the most recent estimate available). These federal activities—collecting revenues from excise taxes and royalties and forgoing revenues from tax expenditures and royalty relief— may have influenced U.S. 
fossil energy production and consumption in different ways, as described below: Excise taxes. Because excise taxes raised prices on motor fuels, they provided a disincentive for consuming such fuels. However, because much of the revenue from these excise taxes was used to improve roads and other transportation infrastructure, these taxes could also have provided an incentive for motor vehicle use and thereby increased consumption of motor fuels. Royalties. Because royalty payments raised costs associated with the development and sales of fossil energy, they provided a disincentive to produce and consume fossil energy. However, we cannot say to what extent the federal royalties provided a disincentive for oil and gas development on federal lands relative to other places because oil and gas companies that lease federal lands look for the best economic terms across a wide range of land owners (such as state, private, federal, and international owners). We found in 2008 that studies of many resource owners indicated that the federal government collected less in total revenues than most other resource owners, but we do not have more recent comparisons of revenues collected. Tax expenditures and royalty relief. In general, tax expenditures and royalty relief provided incentives for fossil energy production by lowering the costs associated with the exploration and development of oil and gas resources. This appendix provides more detailed information on U.S. production and consumption of nuclear energy from 2000 through 2013 and on major factors, including federal activities, that may have influenced nuclear energy production and consumption levels. The United States has 100 operating commercial nuclear reactors, all of which are used to generate electricity. The year-to-year pattern of domestic production and consumption of nuclear energy fluctuated from 2000 through 2013, as shown in figure 17. 
In general, nuclear energy production and consumption increased from 754 million megawatt-hours in 2000 to 806 million megawatt-hours in 2007, according to Energy Information Administration (EIA) data. From 2007 through 2010, nuclear energy production and consumption remained steady between about 800 million megawatt-hours and 807 million megawatt-hours. From 2011 through 2013, nuclear energy production and consumption decreased below 800 million megawatt-hours. The proportion of electricity generated by nuclear power in the United States changed little during the period we reviewed, with nuclear reactors accounting for about 20 percent of total U.S. electricity generation in 2000 and about 19 percent in 2013. The studies and reports we reviewed indicated that recent decreases in U.S. nuclear energy production and consumption may be due to a number of major factors, including reductions in natural gas prices and the Fukushima Daiichi nuclear accident. Regarding natural gas prices, some nuclear plant operators cited price reductions as an important factor in their decisions regarding nuclear power reactor operations. For example, the Vermont Yankee Nuclear Power Station in Vernon, Vermont, began operations in 1972, and the owners obtained a renewed license in 2011 to operate the plant for an additional 20 years. However, in August 2013, the owners announced plans to permanently close the plant in 2014. According to the owners, their decision to close the plant was driven in part by lower natural gas prices, which had reduced the comparative profitability of the plant. In March 2011, a 9.0-magnitude earthquake and subsequent tsunami devastated northeastern Japan and severely damaged the Fukushima Daiichi nuclear power plant. The resulting radiological emergency involved the most extensive release of radioactive material at a nuclear power plant since the 1986 Chernobyl disaster. 
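The roughly 20 percent share of electricity generation cited above follows directly from the generation figures. The nuclear output below is from the text; the total U.S. generation figure is an assumed round number (about 3,800 million megawatt-hours in 2000), not an EIA value quoted in this report:

```python
def generation_share(source_mwh, total_mwh):
    """Share of total electricity generation supplied by one source."""
    return source_mwh / total_mwh

nuclear_mwh_2000 = 754e6    # from the text: 754 million MWh in 2000
total_mwh_2000   = 3_800e6  # assumed round figure for total U.S. generation

# Works out to just under 20 percent, matching the share cited above.
share_2000 = generation_share(nuclear_mwh_2000, total_mwh_2000)
```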
Following this release, the Japanese government evacuated people within 12 miles of the plant, and later extended the evacuation zone to 19 miles. In total, almost 150,000 people were evacuated. In response to the incident, Japan shut down all of its nuclear power reactors, and concerns heightened about the safety of commercial nuclear power plants worldwide. For example, Germany closed 8 of the country’s 17 reactors and decided to shut down the remainder by 2022. In the United States, the Fukushima incident affected some plans to build new nuclear power plants. For example, in 2011, a company planning to construct two nuclear reactors in Texas cited uncertainties related to the Fukushima incident as a reason for abandoning the project. U.S. nuclear energy production and consumption trends may also have been affected, to a more limited extent, by increases in the price of uranium oxide, which is processed into fuel used by nuclear power reactors. As shown in figure 18, uranium oxide prices have increased considerably from 2000 to 2013, according to EIA data. Specifically, the average domestic price of uranium oxide increased from $11.45 per pound in 2000 to $52.51 per pound in 2013, an increase of more than 300 percent. In addition to the major factors we identified above from the studies and reports we reviewed, we identified a number of federal activities that may have also played a role in influencing U.S. production and consumption of nuclear energy from 2000 through 2013. These activities included setting standards and requirements related to the operation of nuclear power reactors, directly providing goods and services related to the storage of spent nuclear fuel, assuming risk associated with the operation of nuclear power reactors, and forgoing revenues associated with tax expenditures for nuclear energy producers. 
Some of these activities provided an incentive to produce and consume nuclear energy, while others provided a disincentive for its production and consumption. The federal government established or strengthened a number of standards and requirements related to nuclear energy from 2000 through 2013. For example, after the Fukushima incident, the Nuclear Regulatory Commission (NRC) accepted 12 recommendations from a task force that NRC had convened in 2011 to review its processes and regulations and determine whether lessons learned from the accident could inform its oversight processes. The task force recommended that NRC require licensees to reevaluate and upgrade seismic and flooding protection of reactors and related equipment, strengthen capabilities at all reactors to withstand loss of electrical power, and take other actions to better protect their plants against a low-probability, high-impact event. NRC’s activities to strengthen the safety and security of nuclear power plants after the Fukushima incident may have increased the costs associated with operating commercial nuclear power reactors, thereby providing a disincentive for nuclear power production. The federal government engaged in activities related to providing for the management of spent nuclear fuel—i.e., fuel that has been used and removed from the reactor core of a nuclear plant—from 2000 through 2013. The nation currently has about 70,000 metric tons of commercial spent nuclear fuel stored at 75 sites in 33 states. This fuel is extremely hazardous: without protective shielding and proper handling, its intense radioactivity can kill a person directly exposed to it or cause long-term health hazards, such as cancer, as well as contaminate the environment. The Nuclear Waste Policy Act of 1982 directed the Department of Energy (DOE) to investigate sites for a federal deep geological repository for spent nuclear fuel and high-level radioactive waste.
In 1987, Congress amended the Nuclear Waste Policy Act to direct DOE to focus its efforts only on Yucca Mountain in Nevada for a repository. The act, as amended, authorized DOE to contract with commercial nuclear reactor operators to take custody of spent fuel for disposal at the repository beginning in 1998. DOE spent billions of dollars related to this effort and in 2008 submitted a license application for the construction of a permanent repository at Yucca Mountain to NRC, which has regulatory authority over the construction and operation of a repository. However, as we reported in August 2012: In 2009, DOE announced that it planned to terminate its work related to the Yucca Mountain repository and in 2010 filed a motion to withdraw the license application. NRC’s licensing board denied the motion, but DOE continued to take steps to dismantle the repository project. In September 2011, the NRC commissioners considered whether to overturn or uphold the licensing board’s decision, but they were evenly divided and unable to take final action on the matter. Instead, the NRC commissioners directed the licensing board to suspend work by September 30, 2011. NRC’s failure to consider the application, among other things, was contested in federal court. Several parties filed a petition against NRC asking the federal court to compel NRC to provide a proposed schedule with milestones and a date for approving or disapproving the license application, among other things. Federal activities related to the Yucca Mountain repository may have provided a disincentive for nuclear energy production and consumption. For example, DOE’s actions regarding its license application for the construction of the repository may have caused uncertainty about the federal government’s long-term strategy for storing nuclear waste because Congress has not agreed upon a path forward. 
This uncertainty may have provided a disincentive for some nuclear plant operators to stay in the market or expand capacity because storing nuclear waste is expensive. The federal government assumed certain risks related to nuclear energy production and consumption from 2000 through 2013. For example, under the Price-Anderson Act, the federal government limited the liability of nuclear plant operators in the case of a nuclear accident. The act requires each licensee of a nuclear plant to have primary insurance coverage equal to the maximum amount of liability insurance available from private sources—currently $375 million—to settle any such claims against it. In the event of an accident at any plant where liability claims exceed the $375 million primary insurance coverage, the act also requires licensees to pay retrospective premiums (also referred to as secondary insurance). The act places a limit on the total liability per incident, which is currently about $13 billion. In addition, the federal government assumed risks related to nuclear energy production and consumption by establishing a loan guarantee program. Specifically, Section 1703 of the Energy Policy Act of 2005 authorized DOE to issue loan guarantees for projects that avoid, reduce, or sequester greenhouse gases using new or significantly improved technologies. In 2010, DOE made conditional commitments under Section 1703 to provide $8.3 billion in loan guarantees for the construction of two advanced nuclear reactors at the Vogtle Electric Generating Plant in Georgia. These federal activities provided an incentive for nuclear energy production and consumption by decreasing the overall cost associated with certain production-related activities. For example, according to the Congressional Budget Office (CBO), the Price-Anderson Act provides a benefit to nuclear plant operators by reducing their cost of carrying liability insurance. 
CBO estimated that the potential level of support was about $600,000 annually per reactor, which would be about $62 million annually for all reactors in the United States. Without the liability limitations provided by the Price-Anderson Act, the cost of obtaining insurance for nuclear power plant operators might have been higher. Consequently, the act may have supported higher levels of nuclear power production in the United States between 2000 and 2013 than would have otherwise occurred because the lower cost provided an incentive for increased production and consumption. The federal government incurred revenue losses related to nuclear energy production and consumption from 2000 through 2013. Specifically, we identified one tax expenditure targeting nuclear energy that resulted in $7.9 billion in revenue losses from fiscal year 2000 through 2013. This tax expenditure—the special tax rate for nuclear decommissioning reserve funds—increased from $100 million in fiscal year 2000 to $1.1 billion in fiscal year 2013, as shown in figure 19. Under the special tax rate for nuclear decommissioning reserve funds, taxpayers (e.g., utilities) who are responsible for the costs of decommissioning nuclear power plants can elect to create reserve funds to be used to pay for decommissioning. The funds receive special tax treatment: amounts contributed are deductible in the year the contributions are made and are not included in the taxpayer’s gross income until the year they are distributed, thus effectively postponing tax on the contributions. Amounts actually spent on decommissioning are deductible in the year the expenditures are made. Gains from the funds’ investments are subject to a 20 percent tax rate—a lower rate than that which applies to most other corporate income. In general, this tax expenditure supported nuclear energy production and consumption by lowering the costs of nuclear energy production and providing an incentive to engage in nuclear power production.
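The value of the special 20 percent rate on decommissioning fund earnings can be illustrated with a rough sketch. The fund size, annual rate of return, and the 35 percent rate used as a stand-in for ordinary corporate income tax are our assumptions for illustration, not figures from this report:

```python
# Illustrative sketch (assumed values, not report figures): after-tax growth
# of investment gains in a decommissioning reserve fund taxed at the special
# 20 percent rate, versus the same gains taxed at an assumed 35 percent
# ordinary corporate rate.
def after_tax_value(principal, annual_return, tax_rate, years):
    """Grow `principal` for `years`, taxing each year's gain at `tax_rate`."""
    value = principal
    for _ in range(years):
        gain = value * annual_return
        value += gain * (1 - tax_rate)
    return value

fund = 100_000_000  # hypothetical $100 million contribution
special = after_tax_value(fund, 0.05, 0.20, 20)   # special 20% rate
ordinary = after_tax_value(fund, 0.05, 0.35, 20)  # assumed 35% ordinary rate
print(f"Special rate:  ${special:,.0f}")
print(f"Ordinary rate: ${ordinary:,.0f}")
print(f"Benefit of special rate: ${special - ordinary:,.0f}")
```

The deferral of tax on contributions adds further value beyond what this sketch of the rate differential shows.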
This appendix provides more detailed information on U.S. production and consumption of renewable energy from 2000 through 2013 and on major factors, including federal activities, that influenced renewable energy production and consumption levels. U.S. production and consumption of renewable energy generally increased from 2000 through 2013, as shown in figure 20, according to Energy Information Administration (EIA) data. Specifically, in terms of energy content (i.e., British thermal units, or Btus), production and consumption of all sources of renewable energy—including liquid biofuels (such as ethanol and biodiesel), other forms of biomass (i.e., wood and waste), hydroelectric power, geothermal, wind, and solar—increased from 6.1 quadrillion Btus in 2000 to 9.2 quadrillion Btus in 2013. As a proportion of total energy consumption, consumption of renewable energy increased from about 6 percent in 2000 to 9 percent in 2013. Hydroelectric power and wood were the two largest sources of renewable energy during this time period and accounted for about 50 percent of all renewable energy consumed in 2013. However, the overall increase in domestic production and consumption of renewable energy from 2000 through 2013 can be attributed primarily to increases in the production and consumption of ethanol, wind energy, and solar energy. Domestically produced ethanol primarily comes from corn grown in the Midwest; the cornstarch is converted into sugar and then fermented and distilled into ethanol. Ethanol is used as a transportation fuel; almost all ethanol is blended into gasoline as an additive to make fuel containing up to 10 percent ethanol by volume. As shown in figure 21, ethanol production increased from 1.6 billion gallons in 2000 to almost 14 billion gallons in 2011 and about 13 billion gallons in both 2012 and 2013.
Consumption of ethanol followed a similar pattern until 2010, when domestic consumption of ethanol remained relatively flat at around 13 billion gallons per year. The United States increased its exports of ethanol beginning in 2010, mostly to Brazil and Canada. The studies and reports we reviewed indicated that several federal activities had a major impact on the increase in ethanol production and consumption—most notably federal tax expenditures and requirements for the use of ethanol in transportation fuel. Regarding federal tax expenditures, alcohol fuel credits provided a 45-cent-per-gallon tax credit to gasoline suppliers who blend ethanol with gasoline. According to Department of the Treasury (Treasury) data, the alcohol fuel credits resulted in more than $39 billion in revenue losses from fiscal year 2000 through 2013. As shown in figure 22, revenue losses associated with alcohol fuel credits increased from about $0.9 billion in fiscal year 2000 to $7 billion in fiscal year 2011, before decreasing to $3.7 billion in fiscal year 2012 and about $50 million in fiscal year 2013. Regarding federal requirements, the Energy Policy Act of 2005 (EPAct) created a federal renewable fuel standard that generally required gasoline and diesel sold in the United States to contain 4 billion gallons of renewable fuels, such as ethanol and biodiesel, in 2006 and 7.5 billion gallons in 2012. The Energy Independence and Security Act of 2007 (EISA) expanded the renewable fuel standard by requiring that U.S. transportation fuel contain 9 billion gallons of renewable fuels in 2008, with the amount required increasing annually to 36 billion gallons in 2022. The 36-billion-gallon total must include at least 21 billion gallons of advanced biofuels (defined as renewable fuels other than ethanol derived from cornstarch that meet certain criteria) and can include up to 15 billion gallons of conventional biofuels (defined as ethanol derived from cornstarch).
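As an order-of-magnitude cross-check on the revenue losses attributed to the alcohol fuel credits above, the credit's cost scales directly with the volume of ethanol blended. The blended-gallon figure below is an assumption (and fiscal years do not align with calendar years), so the result is illustrative only:

```python
# Rough cross-check (assumed volume, not a report figure): the alcohol fuel
# credits paid 45 cents per gallon of ethanol blended into gasoline, so
# annual revenue losses scale with blended volume.
credit_per_gallon = 0.45   # dollars per gallon
gallons_blended = 13.5e9   # assumed annual blended volume near the 2011 peak

revenue_loss = credit_per_gallon * gallons_blended
print(f"Implied revenue loss: ${revenue_loss / 1e9:.1f} billion")
```

The result is of the same magnitude as the $7 billion fiscal year 2011 peak shown in figure 22.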
In our previous work, we found that the alcohol fuel credits were important in establishing and expanding the domestic ethanol industry. However, we also found that the alcohol fuel credits became less important over time for sustaining the ethanol industry because (1) most of the capital investment had already been made and (2) the credits were duplicative with the renewable fuel standard. We recommended in 2009 that Congress consider modifying or phasing out the alcohol fuel credits. Congress allowed the alcohol fuel credits to expire at the end of 2011. In addition to the alcohol fuel credits and renewable fuel standard, the federal government may have influenced ethanol production and consumption from 2000 through 2013 in a number of other ways, according to the studies and reports we reviewed, including the activities described below. While the precise effects of these activities on changes in ethanol production and consumption are difficult to measure, some of these activities provided incentives for the production and consumption of ethanol. Requirements for federal fleets to use ethanol and other alternative fuels. The Energy Policy Act of 1992 requires that 75 percent of all vehicles acquired by the federal fleet in fiscal year 1999 and afterward be “alternative fuel vehicles,” which can use ethanol and blends of 85 percent or more of ethanol with gasoline, among other fuels. EPAct generally requires that all such vehicles be fueled with alternative fuel. In addition, EISA requires that no later than October 2015 and each year thereafter, agencies must achieve a 10 percent increase in vehicle alternative fuel consumption relative to a baseline established by the Energy Secretary for fiscal year 2005. Excise taxes. The federal excise tax rate on ethanol in motor fuels is 18.4 cents per gallon. We did not separately analyze the portion of revenues from excise taxes on ethanol from the portion on gasoline. 
However, we believe the effects of these excise taxes may be similar to the effects of excise taxes on gasoline. Another factor likely affecting ethanol production and consumption from 2000 through 2013 was the price of ethanol relative to the prices of corn and gasoline, according to U.S. Department of Agriculture (USDA) research. Ethanol prices generally increased from 2000 through 2013, according to USDA data, as shown in figure 23. Specifically, ethanol prices increased from an annual average of $1.35 per gallon in 2000 to $2.47 per gallon in 2013. Because ethanol is used as a gasoline substitute, and because nearly all ethanol produced in the United States comes from corn, the relationship between prices of ethanol, gasoline, and corn is complex. As gasoline prices rise, ethanol’s appeal as a substitute increases, as does the profitability of ethanol production and the demand for corn. As a result, according to USDA’s Economic Research Service, prices of corn, ethanol, and gasoline have become more interrelated in recent years. Specifically, from March 2008 to March 2011, ethanol supply and demand accounted for about 23 percent of the variation in the price of corn, while corn market conditions accounted for about 27 percent of ethanol’s price variation. At the same time, about 16 and 17 percent of gasoline price variation could be attributed to ethanol and corn market conditions, respectively. Wind energy is converted into electricity using wind turbines. In terms of the electricity generated from wind turbines, domestic production and consumption of wind energy increased from 5.6 million megawatt-hours in 2000 to 167.7 million megawatt-hours in 2013 (an increase of almost 3,000 percent), as shown in figure 24. According to the Department of Energy (DOE), wind energy comprised 43 percent of all additions to U.S. generating capacity in 2012, overtaking natural gas-fired electricity generation as the leading source of new capacity for that year.
Solar energy is used to heat, cool, and power homes and businesses using a variety of technologies that convert sunlight into usable energy. The most widely used solar technology is the photovoltaic cell, which uses semiconducting materials to convert sunlight into electricity. Sunlight can also be used to heat water (for later use) or to boil water, which produces steam that can be used in a turbine to generate electricity. In terms of the electricity generated from solar energy, domestic production and consumption of solar energy increased from about 0.5 million megawatt-hours in 2000 to 9.3 million megawatt-hours in 2013 (an increase of almost 1,900 percent), as shown in figure 25. According to the Solar Energy Industries Association, photovoltaic solar installations grew 76 percent from 2011 to 2012, and 8 of the 10 largest photovoltaic installations in the United States were built in 2012. The studies and reports we reviewed indicated that the increase in wind and solar energy production and consumption resulted from a number of major factors—most notably state policies and federal activities, as well as technological advances. Regarding state activities, many states have created policies known as renewable portfolio standards that encouraged the production and use of renewable energy. These state policies generally require a percentage of electricity sold or generated in the state to come from eligible renewable resources, including wind and solar energy. According to EIA, 29 states and the District of Columbia had enforceable renewable portfolio standards or similar laws as of October 2013. According to the Congressional Research Service (CRS), state policies have been the primary creator of demand for wind projects. Regarding federal activities, the studies and reports we reviewed indicated that the federal government influenced increases in the production and consumption of wind and solar energy primarily through tax incentives. 
Specifically, the production tax credit and the investment tax credit, along with a related program that provided grants in lieu of these tax credits, resulted in almost $14 billion in revenue losses and almost $20 billion in outlays from fiscal year 2000 through 2013. These tax credits and grants, which are described below, supported wind and solar energy production by lowering the costs associated with production and providing an incentive to those firms engaged in the construction and operation of wind and solar energy projects. Production tax credit. This credit provided a 10-year, inflation-adjusted income tax credit based on the amount of renewable energy produced at wind and other qualified facilities. The amount of the credit varied depending upon the source. The value of the credit was 2.2 cents per kilowatt-hour in 2012 for certain resources (e.g., wind, geothermal, and certain biomass electricity production) and was raised to 2.3 cents per kilowatt-hour in 2013. This credit resulted in about $9.6 billion in revenue losses from fiscal year 2000 through 2013. Specifically, as shown in figure 26, revenue losses associated with this tax credit increased from $40 million in fiscal year 2000 to $1.5 billion or more annually from fiscal year 2010 through 2013. This credit, which has periodically expired and then been extended, is available to facilities for which construction began before January 1, 2014. As we reported in March 2013, new additions of wind energy capacity fell dramatically in years following the credit’s expiration. Investment tax credit. This credit, which has not expired, provides an income tax credit for business investments in solar systems and small wind turbines, among other things. Investments in solar and small wind turbine systems qualify for a 30 percent tax credit.
In addition, temporary provisions enacted under the American Recovery and Reinvestment Act of 2009 (Recovery Act) allow taxpayers to claim this credit for property that otherwise would have qualified for the production tax credit. This credit resulted in over $4 billion in revenue losses from fiscal year 2000 through 2013. As shown in figure 26, no revenue loss estimates were reported for this tax credit from fiscal year 2000 through 2005; revenue losses then generally increased from $80 million in fiscal year 2006 to almost $2 billion in fiscal year 2013. Section 1603 program. Section 1603 of the Recovery Act, as amended, allows taxpayers eligible for the production or investment tax credit to receive a payment from the Treasury in lieu of a tax credit. This Treasury program provided almost $20 billion in outlays from fiscal year 2009 through 2013, as shown in figure 26, of which about $13 billion were related to wind energy projects, and about $4 billion were associated with solar energy projects. This program, which is still available in some cases, applies to projects placed in service during 2009, 2010, or 2011, or afterward if construction began on the property during the specific years and the property is placed in service by a credit termination date (e.g., January 1, 2017 for certain energy property). In addition to these tax credits, the studies and reports we reviewed indicated that the federal government provided incentives for the production and consumption of wind and solar energy in other important ways, including through the following activities: Requirements for purchasing electricity. Under EPAct, federal agencies’ consumption of electricity from renewable sources has generally been required—to the extent economically feasible and technologically practicable—to meet or exceed 5 percent of total consumption in fiscal years 2010 through 2012, and 7.5 percent in fiscal year 2013 and thereafter.
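The relative scale of the production and investment tax credits described earlier in this appendix can be illustrated with a sketch for a hypothetical wind project. The project size, capacity factor, and capital cost are assumed values, and the production tax credit is shown nominal and undiscounted:

```python
# Illustrative comparison (hypothetical project, not report figures) of the
# production tax credit (2.3 cents per kilowatt-hour of output for 10 years,
# ignoring inflation adjustment) and the 30 percent investment tax credit.
capacity_mw = 100            # assumed project size
capacity_factor = 0.35       # assumed fraction of the year at full output
capital_cost = 200_000_000   # assumed installed cost, dollars

annual_kwh = capacity_mw * 1_000 * capacity_factor * 8_760  # kWh per year
ptc_value = annual_kwh * 0.023 * 10  # nominal, undiscounted, over 10 years
itc_value = capital_cost * 0.30

print(f"PTC (10 yr, nominal): ${ptc_value / 1e6:.0f} million")
print(f"ITC (30% of cost):    ${itc_value / 1e6:.0f} million")
```

Which credit is more valuable for a given project depends on its output relative to its capital cost, which is one reason the Recovery Act let production-tax-credit-eligible projects elect the investment credit or a Section 1603 grant instead.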
According to DOE’s most recent data, federal agencies spent about $57 million on electricity purchases from renewable sources in fiscal year 2012. Loan guarantees. DOE’s Title 17 Innovative Technology Loan Guarantee Program included a temporary program for the rapid deployment of renewable energy projects, among other things. As shown in table 4, DOE guaranteed 23 loans totaling more than $14 billion for wind and solar energy projects. Most of these loans (15 of 23) and most of the amount guaranteed went to projects to produce and sell electricity generated from solar energy. There have been two defaults on guaranteed loans, both for projects involving the manufacture of solar energy equipment. However, most of the long-term total estimated cost to the government is associated with solar generation projects. The authority to enter into loan guarantees under DOE’s temporary program expired on September 30, 2011. As a result of required federal purchases of electricity from renewable sources, the federal government provided incentives to produce wind and solar energy. In addition, through the loan guarantee program described above, the federal government assumed risks of defaults on loans to firms engaged in developing wind and solar energy projects. These federal actions had the potential to lower the costs for some of these projects. Such lower costs could have led to certain projects being financed that otherwise may not have been developed. Along with federal activities, the studies and reports we reviewed indicated that technological advances played a role in influencing increases in production and consumption of wind and solar energy. For example, according to DOE’s National Renewable Energy Laboratory (NREL), wind turbine manufacturers increased turbine performance by steadily increasing the height and rotor diameter of their turbines from 2000 through 2010.
In addition, the average capacity of wind turbines installed in the United States has more than doubled since 2000—increasing from 0.88 megawatts in 2000 to 1.95 megawatts in 2012, according to NREL. Regarding solar technology, technological innovation—along with improved manufacturing processes and growing markets—resulted in declining costs associated with the manufacture of photovoltaic technologies, according to a 2012 DOE study. Table 5 provides descriptions of the three federal tax expenditures we identified in appendix V as targeting or related to ethanol, wind energy, and solar energy, as well as four additional federal tax expenditures we identified as more broadly targeting or related to renewable energy. The table also provides information from the Department of the Treasury (Treasury) on tax expenditures that will or have expired, in full or in part, due to an expiration of legislative authority or some other expiration under the law as of the fall of 2014, as well as on tax expenditures that currently have no expiration. In addition, the table provides information on revenue loss estimates from Treasury (unless otherwise specified). Table 6 provides a description of the federal program we identified in appendix V as targeting or related to wind and solar energy, as well as two additional federal programs we identified as more broadly targeting or related to renewable energy. The table also provides information reported by the Office of Management and Budget (OMB) on outlays. Table 7 provides a description of the federal loan guarantee program we identified in appendix V as targeting or related to wind and solar energy, as well as an additional federal loan guarantee program we identified as more broadly targeting or related to renewable energy. The table also provides information on disbursements and estimated costs reported by OMB and provided by DOE. 
This appendix provides more detailed information on other federal activities that were not targeted specifically at fossil, nuclear, or renewable energy production and consumption but may have influenced aspects of U.S. energy production and consumption from 2000 through 2013. It also provides information on federal energy-related research and development (R&D). The studies and reports we reviewed indicated that a number of federal activities may have influenced U.S. energy production and consumption from 2000 through 2013 but were not targeted at a specific energy source. These activities included setting standards and requirements for energy efficiency, selling electricity, providing loans and loan guarantees related to energy efficiency, making outlays for energy consumption and energy efficiency, and forgoing revenues through tax expenditures for electricity transmission and energy efficiency, among other things. The federal government established or strengthened a number of standards and requirements generally related to energy production and consumption from 2000 through 2013. For example, since the 1970s, the federal government has regulated vehicle fuel economy through corporate average fuel economy (CAFE) standards, which originally required manufacturers to meet a single fleetwide standard for all cars and either a single standard or class standards for light trucks. The Energy Independence and Security Act of 2007 (EISA) instituted several changes to CAFE standards in 2007, such as moving from a single fleet standard to an attribute-based standard. In 2009, the U.S. administration announced a new policy to increase vehicle fuel economy by strengthening CAFE standards and aligning them with the first greenhouse gas emissions standards for vehicles, which would be administered by the Environmental Protection Agency (EPA). 
The Department of Transportation and EPA established a national program for these two sets of standards by issuing coordinated regulations covering vehicle model years 2012 to 2025 in May 2010 and October 2012. Vehicle manufacturers will have to meet more stringent fuel economy standards, which are projected to be equivalent to over 50 miles per gallon by 2025. In addition, since the 1970s, the federal government has established minimum efficiency standards requiring that certain products, such as residential appliances, commercial equipment, and lighting products, meet specified energy efficiency standards before they can be sold in the United States. The Energy Policy and Conservation Act of 1975 required the Department of Energy (DOE) to set minimum energy-efficiency standards for manufacturers of specified categories of consumer products such as refrigerators, dishwashers, furnaces, and hot water heaters. The statute was later amended (e.g., by the Energy Policy Act of 2005 and EISA) to include additional categories of consumer products. Manufacturers’ compliance with the standards is mandatory. The statute further requires DOE to set and periodically review and revise standards for these product categories to achieve the maximum level of energy efficiency that is technically feasible and economically justified. The federal standards for vehicle fuel economy and for energy efficiency standards in products provided disincentives for overall energy production and consumption. Specifically, as we concluded in August 2007, the CAFE program reduced oil consumption by cars and light trucks from what it would have otherwise been, and the evidence suggested that increasing CAFE standards would save additional oil in the future. In addition, as we found in March 2013, DOE estimated that, from the inception of the federal minimum efficiency standards program in 1975 through 2005, consumer benefits from these standards amounted to about $64 billion. 
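The disincentive mechanism behind the fuel economy and minimum efficiency standards discussed above reduces to simple arithmetic: for a fixed level of use, fuel consumption falls in inverse proportion to efficiency. A minimal sketch with an assumed annual mileage:

```python
# Illustrative arithmetic (assumed mileage, not a report figure): for a fixed
# number of miles driven, fuel use falls in inverse proportion to fuel
# economy, which is how higher CAFE standards reduce oil consumption.
miles_per_year = 12_000  # assumed annual driving
usage = {mpg: miles_per_year / mpg for mpg in (25, 50)}  # gallons per year
for mpg, gallons in usage.items():
    print(f"{mpg} mpg -> {gallons:.0f} gallons per year")
```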
DOE projected that the standards will save consumers $241 billion by 2030 and $269 billion by 2045. The federal government directly provided goods and services generally related to energy production and consumption from 2000 through 2013. For example, the federal government is the largest owner of electricity-generating capacity in the country and owns significant electricity transmission assets. Development of these resources was initially pursued as part of efforts to provide electricity to rural areas, control flooding, and provide irrigation. Five federal utilities—four power marketing administrations and the Tennessee Valley Authority (TVA)—provide electricity and transmission services to customers in their regions. The power marketing administrations sell power produced primarily at federal hydroelectric dams and projects that are owned and operated by the Department of the Interior’s Bureau of Reclamation, the U.S. Army Corps of Engineers, or the International Boundary and Water Commission. TVA markets electricity produced at its own fossil, nuclear, and hydroelectric energy facilities. In October 2011, we reported on several features of TVA’s operations as a federally owned electric utility that provided incentives for energy production and consumption in its power service area—covering about 80,000 square miles in the southeastern United States, including almost all of Tennessee and parts of Mississippi, Kentucky, Alabama, Georgia, North Carolina, and Virginia, with a population of more than nine million people. In fiscal year 2010, TVA sold more than 173 million megawatt-hours of electricity to customers. To meet this customer demand, TVA generates electricity at 11 coal-fired plants, 11 natural gas-fired plants, 3 nuclear plants, and 29 hydroelectric dams, among other things. Under the TVA Act of 1933, as amended, TVA has not been subject to many of the regulatory oversight requirements that commercial utilities must satisfy.
TVA is also exempt from paying federal and state taxes and can borrow funds for investment in its power system at very competitive interest rates as a result of its triple-A credit rating—which, as we have found, is based partly on its status as a federal entity. Additionally, unlike many utilities, TVA charges rates for its electric power that are not subject to review and approval by state public utility commissions. However, in setting TVA’s rates, TVA’s Board must comply with the primary objectives of the TVA Act, including the objective that power shall be sold at rates as low as are feasible. Through loan and loan guarantee programs, the federal government assumed risks generally related to energy production and consumption from 2000 through 2013. Specifically, we identified three federal programs that provided loans and loan guarantees targeting energy efficiency and other activities from fiscal year 2000 through 2013. These programs are described below: Direct and Guaranteed Electric Loan Program. This U.S. Department of Agriculture (USDA) program, authorized under the Rural Electrification Act of 1936, provides loans and loan guarantees to establish and improve electric service in rural areas, and to assist electric borrowers in implementing demand side management, energy efficiency and conservation programs, and on-grid and off-grid renewable energy systems. These loans and loan guarantees provide financing under favorable terms to eligible nonprofit utility organizations, such as electric co-ops and public utility districts, as well as to for-profit entities. According to USDA, the program supports approximately 700 electric system borrowers in 46 states. This program disbursed about $45 billion in loans from fiscal year 2000 through 2013 with a total cost of about $0.7 billion. To the extent that some of these loans supported energy efficiency and conservation programs, this program provided a disincentive for energy production and consumption. 
However, because some loans may have supported renewable energy systems, this program also provided incentives for energy production and consumption. Advanced Technology Vehicles Manufacturing loan program. This DOE program, authorized under EISA, provides loans to support development of advanced technology vehicles and associated components in the United States that would increase the fuel economy of U.S. passenger vehicles. EISA authorized DOE to make $25 billion in loans under this program. As we reported in March 2013, DOE has used the program to make five loans worth $8.4 billion. Two of the loans in this program defaulted, and one has been paid back in full. The current estimated cost of the loans is about $0.3 billion. Because these loans supported improvements in the fuel economy of passenger vehicles, this program provided disincentives for energy production and consumption. Green Retrofit Program for Multifamily Housing. This Department of Housing and Urban Development program, established by the American Recovery and Reinvestment Act of 2009 (Recovery Act), funds energy and green retrofits to selected affordable multifamily properties through grants and loans. Eligible projects include the installation of efficient heating and cooling systems and appliances, and the upgrade of units to reduce water usage, improve indoor air quality, and provide various other environmental benefits. This program disbursed about $83 million in loans from fiscal year 2000 through 2013 with a total cost of about $66 million. In addition, we identified other federal loan programs generally related to energy production and consumption, but for which we were unable to identify data on the level of federal support.
For example, the Farm Security and Rural Investment Act of 2002, as amended, established the Rural Energy for America Program within USDA to provide loan guarantees to agricultural producers and small businesses in rural areas to assist with purchasing and installing energy efficiency improvements and renewable energy systems. Costs associated with loans from this program totaled about $17 million from fiscal year 2000 through 2013; however, USDA did not report costs specifically related to energy efficiency separately from costs related to renewable energy. The federal government provided funds generally related to energy production and consumption from 2000 through 2013. Specifically, we identified five federal programs and activities with almost $51 billion in outlays from fiscal year 2000 through 2013. Of these total outlays, about $40 billion were associated with the Department of Health and Human Services’ Low Income Home Energy Assistance Program, which provides funds to low-income households to help cover home heating and cooling costs. As shown in figure 27, outlays associated with this program generally increased from fiscal year 2000 through 2010, then decreased through fiscal year 2013. Specifically, program outlays rose from $1.5 billion in fiscal year 2000 to $4.6 billion in fiscal year 2010 before decreasing to $3.5 billion in fiscal year 2013. Because this program made outlays to low-income households to purchase energy, it provided an incentive for energy production and consumption. However, this spending cannot easily be attributed to specific energy sources or fuel types because funds supporting energy purchases in different regions would encourage consumption of different mixes of these fuels, reflecting regional differences in how energy is produced. Table 8 provides a description of these five federal programs and activities, as well as information on outlays reported by the Office of Management and Budget (OMB) and provided by DOE.
We also identified other outlays that generally related to energy production and consumption, but for which we were unable to identify data on the level of federal support. For example, USDA’s High Energy Cost Grants program, authorized under the Rural Electrification Act of 1936, provides grants for energy generation, transmission, and distribution facilities serving rural communities with annual average home energy costs that exceed 275 percent of the national average. Applicants may receive grants for on-grid and off-grid renewable energy systems, as well as energy conservation and efficiency projects. This program provided $202 million in outlays from fiscal year 2000 through fiscal year 2013; however, USDA did not report on program outlays specifically related to electricity and energy efficiency separately from outlays related to renewable energy. In general, this program provided incentives for energy production and consumption through its support for energy generation and transmission to rural communities, while providing disincentives for energy consumption through energy conservation and efficiency projects. The federal government may have influenced general aspects of energy production and consumption from 2000 through 2013 through the tax code. Specifically, we identified 13 tax expenditures that resulted in more than $65 billion in federal revenue losses from fiscal year 2000 through fiscal year 2013. As shown in figure 28, most of these revenue losses ($42 billion or 64 percent) were associated with a tax expenditure excluding employer-paid transportation benefits from taxation, including a number of benefits related to parking, transit passes, and vanpool transportation, among other things. This tax expenditure provided an incentive for the production and consumption of transportation fuels by reducing costs associated with parking. 
However, it also provided a disincentive for production and consumption of transportation fuels by reducing costs associated with the use of public transportation, which may rely on electricity or other forms of energy. Table 9 provides descriptions of these 13 federal tax expenditures. It also provides information from the Department of the Treasury (Treasury) on tax expenditures that will expire or have expired, in full or in part, due to an expiration of legislative authority or some other expiration under the law as of the summer of 2014, as well as on tax expenditures that currently have no expiration. In addition, the table provides information from Treasury on revenue loss estimates (unless otherwise specified). In addition, we identified other provisions of the tax code that are generally related to energy production and consumption, but for which data on the level of federal support were not readily available. For example, tax-exempt municipal bonds allow publicly-owned utilities to obtain lower interest rates than those available from either private borrowers or the U.S. Treasury. Lower interest rates reduce borrowing costs for such utilities and provide incentives for producing electricity. While tax-exempt municipal bonds are used by energy industries such as electric utilities, the group of eligible borrowers also includes water utilities, telecommunication facilities, waste treatment plants, and other publicly-owned entities. OMB and the Joint Committee on Taxation did not provide estimates of annual revenue losses related to electric utilities for fiscal year 2000 through 2013 for this provision. Consequently, we cannot report the amount of forgone revenue related to this provision.
The vast majority of these outlays were made by three DOE program offices—the Office of Fossil Energy, Office of Nuclear Energy, and Office of Energy Efficiency and Renewable Energy. As shown in figure 29, federal outlays related to R&D for fossil, nuclear, and renewable energy generally increased from fiscal year 2000 to 2013, although some variation occurred. Specifically, federal outlays for R&D related to fossil energy increased from about $412 million in fiscal year 2000 to about $1 billion in fiscal year 2012 and $0.9 billion in 2013. Outlays for R&D related to nuclear energy increased over the time period to almost $1 billion in fiscal year 2009, and then decreased through fiscal year 2013. In addition, outlays for R&D related to renewable energy increased to about $1.3 billion in fiscal year 2012 and just over $1 billion in 2013. According to the Energy Information Administration (EIA), new or expanded programs associated with the Recovery Act had a significant impact on energy-related R&D spending. Table 10 provides a description of these four federal R&D programs and activities; it also provides information reported by OMB and provided by DOE on outlays. These outlays provided funding for a variety of energy-specific R&D at federally owned laboratories, as illustrated in the following examples: Researchers at DOE’s National Energy Technology Laboratory conducted R&D to resolve the environmental, supply, and reliability constraints of producing and using fossil resources. One area of R&D involved combustion science, which provides the basis for a new generation of advanced fossil fuel conversion technologies to meet future demands for efficient, clean, and cost-effective energy production. Combustion science researchers at the laboratory conducted exploratory and applied research in the areas of combustion science and technology and the dynamics of engines and other energy conversion devices.
This research included modeling, simulation, and laboratory-scale studies of advanced combustion turbines, among other things. Researchers at DOE’s Idaho National Laboratory conducted R&D on advanced nuclear reactor designs, including the Next Generation Nuclear Power Plant project, which are intended to offer safety and other improvements over the current generation of nuclear power plants. EPAct formally established the Next Generation Nuclear Plant as a DOE project, designated the Idaho National Laboratory as the lead laboratory and construction site for the plant, and directed the laboratory to carry out cost-shared R&D, design, and construction activities with industrial partners. Researchers at DOE’s National Renewable Energy Laboratory (NREL) conducted R&D that led to technological advances for wind and solar energy technologies. For example, laboratory researchers collaborated with wind turbine manufacturers to develop variable-speed turbines to take advantage of lower wind conditions. This innovation allowed a turbine manufacturer to develop and refine its 1.5-megawatt turbines. In addition, laboratory researchers collaborated with solar cell manufacturers to refine their manufacturing techniques before going into full production. This collaboration helped a company to become the world’s largest manufacturer of thin-film solar modules. We identified four DOE programs that provided funding for R&D that was not linked to a specific energy source but related to more general aspects of energy production and consumption. These programs made about $47 billion in outlays from fiscal year 2000 through 2013 for R&D related to basic energy sciences, energy efficiency and energy conservation, and electricity grid reliability, among other things. As shown in figure 30, federal outlays for these programs generally increased from fiscal year 2000 to fiscal year 2013, although some variation occurred.
Specifically, federal outlays for R&D related to energy efficiency and conservation increased from about $670 million in fiscal year 2000 to over $6 billion in fiscal year 2011 before declining to about $2.0 billion in fiscal year 2013. Outlays for R&D related to basic energy science—funded by DOE’s Office of Science—increased from more than $700 million in fiscal year 2000 to about $1.5 billion in fiscal year 2013. In addition, outlays for R&D related to other activities, such as electricity delivery and energy reliability, increased from about $10 million in fiscal year 2003 to about $1.6 billion in fiscal year 2011, before declining to about $1.1 billion in fiscal year 2013. As mentioned above, according to EIA, new or expanded programs associated with the Recovery Act had a significant impact on energy-related R&D spending. Table 11 provides a description of these four federal R&D programs and activities; it also provides information reported by OMB on outlays. These outlays provided funding for a variety of R&D programs at federally owned laboratories and by nongovernmental entities. For example, researchers at NREL developed a new smart occupancy sensor in 2013 to control lighting and reduce energy costs that could lead to significant energy savings in commercial buildings. In addition, as we found in January 2012, since first receiving an appropriation in the Recovery Act, the Advanced Research Projects Agency-Energy program awarded more than $500 million to universities, public and private companies, and national laboratories to fund 181 projects that attempt to make transformational advances to a variety of energy technologies related to energy efficiency and renewable fuels, among other things. In addition to the individual named above, Chris Murray (Assistant Director), Quindi Franco, Jason Holliday, and David Messman made key contributions to this report.
Also contributing to this report were Nicole Dery, Cindy Gilbert, Carol Henn, Jon Ludwigson, Cynthia Norris, MaryLynn Sergent, Anne Stevens, and Barbara Timmerman. Oil and Gas Transportation: Department of Transportation Is Taking Actions to Address Rail Safety, but Additional Actions Are Needed to Improve Pipeline Safety. GAO-14-667. Washington, D.C.: August 21, 2014. Petroleum Refining: Industry’s Outlook Depends on Market Changes and Key Environmental Regulations. GAO-14-249. Washington, D.C.: March 14, 2014. Coal Leasing: BLM Could Enhance Appraisal Process, More Explicitly Consider Coal Exports, and Provide More Public Information. GAO-14-140. Washington, D.C.: December 18, 2013. Oil and Gas Resources: Actions Needed for Interior to Better Ensure a Fair Return. GAO-14-50. Washington, D.C.: December 6, 2013. Oil and Gas Development: BLM Needs Better Data to Track Permit Processing Times and Prioritize Inspections. GAO-13-572. Washington, D.C.: August 23, 2013. Pipeline Permitting: Interstate and Intrastate Natural Gas Permitting Processes Include Multiple Steps, and Time Frames Vary. GAO-13-221. Washington, D.C.: February 15, 2013. Mineral Resources: Mineral Volume, Value, and Revenue. GAO-13-45R. Washington, D.C.: November 15, 2012. Electricity: Significant Changes Are Expected in Coal-Fueled Generation, but Coal is Likely to Remain a Key Fuel Source. GAO-13-72. Washington, D.C.: October 29, 2012. Oil and Gas: Information on Shale Resources, Development, and Environmental and Public Health Risks. GAO-12-732. Washington, D.C.: September 5, 2012. Oil and Gas Management: Interior’s Reorganization Complete, but Challenges Remain in Implementing New Requirements. GAO-12-423. Washington, D.C.: July 30, 2012. EPA Regulations and Electricity: Better Monitoring by Agencies Could Strengthen Efforts to Address Potential Challenges. GAO-12-635. Washington, D.C.: July 7, 2012. 
Oil and Gas: Interior Has Strengthened Its Oversight of Subsea Well Containment, but Should Improve Its Documentation. GAO-12-244. Washington, D.C.: February 29, 2012. Energy-Water Nexus: Information on the Quantity, Quality, and Management of Water Produced during Oil and Gas Production. GAO-12-156. Washington, D.C.: January 9, 2012. Deepwater Horizon Oil Spill: Actions Needed to Reduce Evolving but Uncertain Federal Financial Risks. GAO-12-86. Washington, D.C.: October 24, 2011. Oil and Gas Bonds: BLM Needs a Comprehensive Strategy to Better Manage Potential Oil and Gas Well Liability. GAO-11-292. Washington, D.C.: February 25, 2011. Energy-Water Nexus: A Better and Coordinated Understanding of Water Resources Could Help Mitigate the Impacts of Potential Oil Shale Development. GAO-11-35. Washington, D.C.: October 29, 2010. Federal Oil and Gas Leases: Opportunities Exist to Capture Vented and Flared Natural Gas, Which Would Increase Royalty Payments and Reduce Greenhouse Gases. GAO-11-34. Washington, D.C.: October 29, 2010. Coal Power Plants: Opportunities Exist for DOE to Provide Better Information on the Maturity of Key Technologies to Reduce Carbon Dioxide Emissions. GAO-10-675. Washington, D.C.: June 16, 2010. Oil and Gas Management: Interior’s Oil and Gas Production Verification Efforts Do Not Provide Reasonable Assurance of Accurate Measurement of Production Volumes. GAO-10-313. Washington, D.C.: March 15, 2010. Nuclear Power: Analysis of Regional Differences and Improved Access to Information Could Strengthen NRC Oversight. GAO-13-743. Washington, D.C.: September 27, 2013. Commercial Spent Nuclear Fuel: Observations on the Key Attributes and Challenges of Storage and Disposal Options. GAO-13-532T. Washington, D.C.: April 11, 2013. Emergency Preparedness: NRC Needs to Better Understand Likely Public Response to Radiological Incidents at Nuclear Power Plants. GAO-13-243. Washington, D.C.: March 11, 2013. 
Spent Nuclear Fuel: Accumulating Quantities at Commercial Reactors Present Storage and Other Challenges. GAO-12-797. Washington, D.C.: August 15, 2012. Nuclear Regulation: NRC’s Oversight of Nuclear Power Reactors’ Decommissioning Funds Could Be Further Strengthened. GAO-12-258. Washington, D.C.: April 5, 2012. Nuclear Fuel Cycle Options: DOE Needs to Enhance Planning for Technology Assessment and Collaboration with Industry and Other Countries. GAO-12-70. Washington, D.C.: October 17, 2011. Commercial Nuclear Waste: Effects of a Termination of the Yucca Mountain Repository Program and Lessons Learned. GAO-11-229. Washington, D.C.: April 8, 2011. Wind Energy: Additional Actions Could Help Ensure Effective Use of Federal Financial Support. GAO-13-136. Washington, D.C.: March 11, 2013. Renewable Energy: Agencies Have Taken Steps Aimed at Improving the Permitting Process for Development on Federal Lands. GAO-13-189. Washington, D.C.: January 18, 2013. Solar Energy: Federal Initiatives Overlap but Take Measures to Avoid Duplication. GAO-12-843. Washington, D.C.: August 30, 2012. Renewable Energy Project Financing: Improved Guidance and Information Sharing Needed for DOD Project-Level Officials. GAO-12-401. Washington, D.C.: April 4, 2012. Renewable Energy: Federal Agencies Implement Hundreds of Initiatives. GAO-12-260. Washington, D.C.: February 27, 2012. Biofuels: Challenges to the Transportation, Sale, and Use of Intermediate Ethanol Blends. GAO-11-513. Washington, D.C.: June 3, 2011. Defense Infrastructure: Department of Defense Renewable Energy Initiatives. GAO-10-681R. Washington, D.C.: April 26, 2010. Defense Infrastructure: DOD Needs to Take Actions to Address Challenges in Meeting Federal Renewable Energy Goals. GAO-10-104. Washington, D.C.: December 18, 2009. DOE Loan Programs: DOE Should Fully Develop Its Loan Monitoring Function and Evaluate Its Effectiveness. GAO-14-367. Washington, D.C.: May 1, 2014. 
National Laboratories: DOE Needs to Improve Oversight of Work Performed for Non-DOE Entities. GAO-14-78. Washington, D.C.: October 25, 2013. Energy Efficiency: Better Coordination among Federal Programs Needed to Allocate Testing Resources. GAO-13-135. Washington, D.C.: March 28, 2013. Department of Energy: Status of Loan Programs. GAO-13-331R. Washington, D.C.: March 15, 2013. Tax Expenditures: Background and Evaluation Criteria and Questions. GAO-13-167SP. Washington, D.C.: November 29, 2012. Batteries and Energy Storage: Federal Initiatives Supported Similar Technologies and Goals but Had Key Differences. GAO-12-842. Washington, D.C.: August 30, 2012. Home Energy Assistance for Low-Income Occupants of Manufactured Homes. GAO-12-848R. Washington, D.C.: August 24, 2012. Energy Conservation and Climate Change: Factors to Consider in the Design of the Nonbusiness Energy Property Credit. GAO-12-318. Washington, D.C.: April 2, 2012. DOE Loan Guarantees: Further Actions Are Needed to Improve Tracking and Review of Applications. GAO-12-157. Washington, D.C.: March 12, 2012. Energy: The Department of Energy’s Office of Science Uses a Multilayered Process for Prioritizing Research. GAO-12-410R. Washington, D.C.: February 24, 2012. Department of Energy: Advanced Research Projects Agency-Energy Could Benefit from Information on Applicants’ Prior Funding. GAO-12-112. Washington, D.C.: January 13, 2012. Green Building: Federal Initiatives for the Nonfederal Sector Could Benefit from More Interagency Collaboration. GAO-12-79. Washington, D.C.: November 2, 2011. Tennessee Valley Authority: Full Consideration of Energy Efficiency and Better Capital Expenditures Planning Are Needed. GAO-12-107. Washington, D.C.: October 31, 2011. Recovery Act: Energy Efficiency and Conservation Block Grant Recipients Face Challenges Meeting Legislative and Program Goals and Requirements. GAO-11-379. Washington, D.C.: April 7, 2011. 
Department of Energy: Advanced Technology Vehicle Loan Program Implementation Is Under Way, but Enhanced Technical Oversight and Performance Measures Are Needed. GAO-11-145. Washington, D.C.: February 28, 2011. Electricity Grid Modernization: Progress Being Made on Cybersecurity Guidelines, but Key Challenges Remain to be Addressed. GAO-11-117. Washington, D.C.: January 12, 2011. Department of Energy: Further Actions Are Needed to Improve DOE’s Ability to Evaluate and Implement the Loan Guarantee Program. GAO-10-627. Washington, D.C.: July 12, 2010. Low-Income Home Energy Assistance Program: Greater Fraud Prevention Controls Are Needed. GAO-10-621. Washington, D.C.: June 18, 2010. Federal Energy Management: GSA’s Recovery Act Program Is on Track, but Opportunities Exist to Improve Transparency, Performance Criteria, and Risk Management. GAO-10-630. Washington, D.C.: June 16, 2010. Vehicle Fuel Economy: NHTSA and EPA’s Partnership for Setting Fuel Economy and Greenhouse Gas Emissions Standards Improved Analysis and Should Be Maintained. GAO-10-336. Washington, D.C.: February 25, 2010.
Federal energy policy since the 1970s has focused primarily on ensuring a secure supply of energy while protecting the environment. The federal government supports and intervenes in U.S. energy production and consumption in various ways, such as providing tax incentives, grants, and other support to promote domestic production of energy, as well as setting standards and requirements. GAO was asked to provide information on federal activities and their influence on U.S. energy production and consumption over the past decade. This report provides information on U.S. production and consumption of fossil, nuclear, and renewable energy from 2000 through 2013 and major factors, including federal activities, that influenced energy production and consumption levels. It also provides information on other federal activities that may have influenced aspects of U.S. energy production and consumption from 2000 through 2013 but were not targeted at a specific energy source, as well as information on federal research and development. GAO analyzed DOE historical data on energy production and consumption, reviewed studies and reports from federal agencies and governmental organizations on federal energy-related activities, and analyzed data on federal spending programs and tax incentives, among other things. GAO is not making recommendations in this report. DOE, the Department of the Treasury, and the U.S. Department of Agriculture reviewed a draft of this report and provided technical comments that GAO incorporated as appropriate. According to the studies and reports GAO reviewed, several major factors, including federal activities, influenced U.S. production and consumption of fossil, nuclear, and renewable energy from 2000 through 2013. Examples of these factors include the following: Fossil energy. Advances in drilling technologies enabled economic production of natural gas and crude oil from shale and similar geological formations. 
These advances led to increases in domestic production of natural gas and crude oil beginning around 2008 and contributed to declines in domestic prices of natural gas, as well as lower prices for crude oil in some regions of the United States. Some federal activities also may have influenced these trends. For example, the federal government limited oil producers' liability associated with some oil spills, lowering the producers' costs for liability insurance. In addition, the federal government provided oil and gas producers with tax incentives encouraging production, resulting in billions of dollars in estimated federal revenue losses. Moreover, partly because of lower natural gas prices, domestic coal production decreased in recent years as utilities switched from coal to natural gas for electricity generation. Nuclear energy. Declining prices for a competing energy source—natural gas—may have led to decreases in the production and consumption of nuclear energy in recent years. Federal activities may have also influenced this trend. For example, the Department of Energy (DOE) announced plans to terminate its work to license a disposal facility for certain nuclear power plant waste in 2009, creating uncertainty about how this waste would be managed. This uncertainty may have provided a disincentive for some nuclear power operators to stay in the market or expand capacity because of the cost of storing nuclear waste. Renewable energy. Federal tax credits for ethanol and federal policies requiring the use of ethanol in transportation fuels were major factors influencing an 8-fold increase in the production and consumption of ethanol from 2000 to 2013.
In addition, state policies requiring the use of renewable energy in electricity production, as well as federal outlays and tax credits for renewable energy producers, were major factors influencing a 30-fold increase and a 19-fold increase in production and consumption of electricity from wind and solar energy, respectively, from 2000 to 2013. According to the studies and reports GAO reviewed, other federal activities may have influenced aspects of U.S. energy production and consumption from 2000 through 2013 but were not targeted at a specific energy source. For example, the federal government strengthened energy efficiency standards for vehicle fuel economy and consumer products such as appliances and lighting, provided electricity and transmission services to customers through its power marketing administrations and the Tennessee Valley Authority, and spent billions of dollars helping low-income households cover heating and cooling costs. In addition, the federal government supported research and development targeting a wide range of energy-related technologies at government-owned laboratories and through funding to universities and other research entities.
DHS has some of the most extensive acquisition needs within the federal government. In fiscal year 2007, DHS obligated about $12 billion to acquire goods and services ranging from the basic goods and services federal agencies purchase, such as information technology equipment and support, to more complex and unique acquisitions, such as airport security systems and Coast Guard ships. DHS and its component agencies have faced a number of challenges related to procuring services and major system acquisitions. When DHS was formed in 2003, it was responsible for integrating 22 agencies with disparate missions. Of these, only seven came with their own procurement offices, only some of which had also managed complex acquisitions such as the Coast Guard’s Deepwater program or TSA’s airport screening programs. While the Homeland Security Acquisition Manual and the Federal Acquisition Regulation (FAR) do not distinguish between the terms acquisition and procurement, DHS officials have noted that procurement—the actual transaction to acquire goods and services—is only one element of acquisition. The term acquisition can include the development of operational and life-cycle requirements, such as formulating concepts of operations, developing sound business strategies, exercising prudent financial management, assessing trade-offs, and managing program risks. We have identified three key performance areas for acquisition management: assessing and organizing acquisition functions to meet agency needs; developing clear and transparent policies and processes for all acquisitions; developing an acquisition workforce to implement and monitor acquisitions. Our prior work has shown that these are among the key elements of an efficient, effective, and accountable acquisition function. We testified in April 2008 that, despite its initial positive acquisition management efforts, several challenges remained. 
The following summarizes each of these three areas: Assessing and organizing the acquisition function: Since it was created in 2003, DHS has recognized the need to improve acquisition outcomes, and has taken some steps to organize and assess the acquisition function. DHS has worked to integrate the disparate acquisition processes and systems that the component organizations brought with them when the department was created. To help assess acquisition management, in 2005 the department developed an oversight program. This program incorporates DHS policy, internal controls, and elements of an effective acquisition function. This program has been partially implemented and monitors component-level performance through four recurring reviews: self-assessments; operational status; on-site; and acquisition planning. However, DHS has not yet accomplished its goal of integrating the acquisition function across the department. For example, the structure of DHS’s acquisition function creates ambiguity about who is accountable for acquisition decisions because it depends on a system of dual accountability and cooperation and collaboration between the CPO and the component heads. DHS officials stated in June 2007 that they were in the process of modifying the lines of business management directive to clarify the CPO’s authority; however, this directive has yet to be approved. Developing clear and transparent policies and processes: DHS had made some progress in this area but has generally not developed clear and transparent policies and processes for all acquisitions. Specifically, DHS put into place an investment review process in 2003 that adopts many acquisition best practices to help the department reduce risk and increase the chances for successful investment in terms of cost, schedule, and performance. However, in 2005, we found that the process did not include critical management reviews.
Further, our work has identified concerns with the implementation of the investment review process. In 2007, we reported that DHS had not fully implemented key practices of its investment review process to control projects. For example, we reported that DHS executives may not have the information they need to determine whether information technology investments are meeting expectations, which may increase the risk that underperforming projects are not identified and corrected in a timely manner. We have ongoing work on the implementation of DHS’s investment review process scheduled to be released later this year. Developing an acquisition workforce to implement and monitor acquisitions: DHS has taken initial steps needed to develop an acquisition workforce. In 2006, DHS reported significant progress in providing staff for the component contracting offices, though much work remained to fill the positions with qualified, trained acquisition professionals. DHS has also taken a positive step by authorizing additional staff for the CPO to provide staff for procurement oversight, program management and cost analysis functions. We have ongoing work on DHS’s acquisition workforce scheduled to be released later this year. Our work on both services contracting and major investments has consistently identified the need for improved acquisition planning to better ensure taxpayer dollars are spent prudently. Acquisitions must be appropriately planned and structured to minimize the risk of the government receiving services that are over cost estimates, delivered late, and of unacceptable quality. Specifically, we have emphasized the importance of clearly defined requirements to achieving desired results, and measurable performance standards to ensuring control and accountability. Too often, our work on federal acquisitions has reported that unrealistic, inadequate, or frequently changing requirements have left the government vulnerable to wasted taxpayer dollars. 
For services closely supporting inherently governmental functions, we found that DHS did not use risk assessment in its plans to hire contractors to provide these services. For services procured through methods such as interagency and performance-based contracting, we found acquisition planning was lacking. For major systems, acquisition planning includes establishing well-defined requirements and ensuring appropriate resources, such as adequate staffing and expertise, are in place to manage the investments; yet we have consistently found that these key elements are not in place. While there are benefits to using contractors to perform services for the government—such as increased flexibility in fulfilling immediate needs— we and others have raised concerns about the federal government’s increased reliance on contractor services. Of key concern is the risk associated with a contractor providing services that closely support inherently governmental functions: the loss of government control over and accountability for mission-related policy and program decisions. Professional and management support services, including program management and support services such as acquisition support, budget preparation, intelligence services, and policy development, closely support inherently governmental functions. To help ensure that the government does not lose control over and accountability for such decisions, longstanding federal procurement policy requires attention to the risk that government decisions may be influenced by, rather than independent from, contractor judgments when contracting for services that closely support inherently governmental functions. This type of risk assessment is also part of the acquisition planning process. While DHS program officials generally acknowledged that their professional and management support services contracts closely supported inherently governmental functions, they did not assess the risk of contractors providing these services. 
The nine cases we reviewed in detail provided examples in which contractors provided services integral to and comparable to those provided by government employees; contractors provided ongoing support; and contract requirements were broadly defined. These conditions need to be carefully monitored to help ensure the government does not lose control over and accountability for mission-related decisions. To improve DHS’s ability to manage the risk of selected services that closely support inherently governmental functions, as well as government control over and accountability for decisions, we recommended that DHS establish strategic-level guidance on and routinely assess the risk of using contractors for selected services and more clearly define contract requirements. DHS’s use of interagency contracting—a process by which one agency uses another agency’s contracts and contracting services—is another area in which we identified that acquisition planning was lacking. While interagency contracting offers the benefits of efficiency and convenience, in January 2005, we noted shortcomings and designated the management of interagency contracting as a governmentwide high-risk area. Our work on DHS’s use of interagency contracting showed that the department did not always select interagency contracts based on planning and analysis and instead made decisions based on the benefits of speed and convenience. We found that DHS conducted limited evaluation of contracting alternatives to ensure good value when selecting among interagency contracts. While interagency contracting is often chosen because it requires less planning than establishing a new contract, evaluating the selection of an interagency contract is important because not all interagency contracts provide good value when considering both timeliness and total cost. 
Although DHS guidance has required planning and analysis of alternatives for all acquisitions since July 2005, we found that it was not conducted for the four cases in our review for which it was required. To improve the management of interagency contracting, we recommended that DHS develop consistent, comprehensive guidance and training and establish criteria to consider in selecting an interagency contract. To help improve service acquisition outcomes, federal procurement policy calls for agencies to use a performance-based approach to the maximum extent practicable. This approach includes a performance work statement that describes outcome-oriented requirements, measurable performance standards, and quality assurance surveillance. In using a performance-based approach, the FAR requires contract outcomes or requirements to be well-defined, that is, providing clear descriptions of results to be achieved. Our prior reviews of complex DHS investments using a performance-based approach point to a number of shortcomings. For example, in June 2007, we reported that a performance-based contract for a DHS financial management system, eMerge2, lacked clear and complete requirements, which led to schedule delays and unacceptable contractor performance. Ultimately, the program was terminated after a $52 million investment. The DHS Inspector General has also identified numerous opportunities for DHS to make better use of sound practices, such as well-defined requirements. Consistent with these findings, our 2008 report on performance-based acquisitions, for which we reviewed contracts for eight major investments at Coast Guard, CBP, and TSA, found that contracts for investments that did not have well-defined requirements, a complete set of measurable performance standards, or both, at the time of contract award or the start of work experienced cost overruns, schedule delays, or did not otherwise meet performance expectations. 
In contrast, service contracts for investments that had well-defined requirements linked to measurable standards performed within budget, meeting the standards in all cases where contractors had begun work. For example, TSA’s Screening Partnership Program improved its contracted services at the San Francisco International Airport to incorporate well-defined requirements linked to clearly measurable performance standards and delivered services within budget. To improve the outcomes of performance-based acquisitions, we recommended that DHS improve acquisition planning for requirements for major complex investments to ensure they are well-defined, and develop consistently measurable performance standards linked to those requirements. Following are examples of complex investments with contracts that did not have well-defined requirements or complete measurable performance standards and did not meet cost, schedule, or performance expectations. Contracts for systems development for two CBP major investments—Automated Commercial Environment (ACE) and Secure Border Initiative (SBInet)—lacked both well-defined requirements and measurable performance standards prior to the start of work, and both experienced poor outcomes. The first, for DHS’s ACE Task Order 23 project—a trade software modernization effort—was originally estimated to cost $52.7 million over a period of approximately 17 months. However, the program lacked stable requirements at contract award and, therefore, could not establish measurable performance standards and valid cost or schedule baselines for assessing contractor performance. Software requirements were added after contract award, contributing to a project cost increase of approximately $21.1 million, or 40 percent, over the original estimate. Because some portions of the work were delayed to better define requirements, the project is not expected to be completed until January 2011—over three years later than originally planned. 
The second, Project 28 for systems development for CBP’s SBInet—a project to help secure a section of the United States-Mexico border using a surveillance system—did not meet expected outcomes due to a lack of both well-defined requirements and measurable performance standards. CBP awarded the Project 28 contract planned as SBInet’s proof of concept and the first increment of the fielded SBInet system before the overall SBInet operational requirements and system specifications were finalized. More than 3 months after Project 28 was awarded, DHS’s Inspector General reported that CBP had not properly defined SBInet’s operational requirements and needed to do so quickly to avoid rework of the contractor’s systems engineering. We found that several performance standards were not clearly defined to isolate the contractor’s performance from that of CBP employees, making it difficult to determine whether any problems were due to the contractor’s system design, CBP employees, or both. As a result, it was not clear how CBP intended to measure compliance with the Project 28 standard for probability of detecting persons attempting to illegally cross the border. Although it did not fully meet user needs and its design will not be used as a basis for future SBInet development, DHS fully accepted the project after an 8-month delay. In addition, DHS officials have stated that much of the Project 28 system will be replaced by new equipment and software. However, Project 28 is just one part of the entire Secure Border Initiative, and our recent work has noted that requirements and testing processes for the initiative have not been effectively managed, and important aspects of the program remain in flux. Additionally, our work has found that the Coast Guard’s Deepwater Program, ongoing since the late 1990s, is intended to replace or modernize 15 major classes of Coast Guard assets. 
In March 2007, we reported that the Coast Guard’s Deepwater contract had requirements that were set at unrealistic levels and were frequently changed. For some of the Deepwater assets, this resulted in cost escalation, schedule delays, and reduced contractor accountability over many years, producing poor results such as ships with serious structural defects. In light of these serious performance and management problems, Coast Guard leadership has changed its approach to this acquisition. It has taken over the lead role in systems integration, which was formerly held by a contractor. Formerly, the contractor had significant program management responsibilities, such as contractual responsibility for drafting task orders and managing the system integration of Deepwater as a whole. Coast Guard project managers and technical experts now hold the greater balance of management responsibility and accountability for program outcomes. Coast Guard officials have begun to hold competitions for Deepwater assets outside of the lead system integrator contract, and cost and schedule information is now captured at a level that has resulted in improved visibility, such as the ability to track and report cost breaches for assets. The Coast Guard has also begun to follow a disciplined project management framework, requiring documentation and approval of decisions at key points in a program’s life cycle. However, like other federal agencies, the Coast Guard has faced challenges in building an adequate government workforce and is relying on support contractors in key positions, such as cost estimators and contract specialists. Our work on contractors performing services closely supporting inherently governmental functions found that DHS program officials and contracting officers were not aware of federal requirements for enhanced oversight for these types of services. 
Both the FAR and the Office of Management and Budget’s Office of Federal Procurement Policy (OFPP) policy state that when contracting for these types of services, a sufficient number of qualified government employees must be assigned to plan and oversee contractor activities to maintain control and accountability. For the nine cases we reviewed, the level of oversight provided did not always help ensure accountability for decisions or the ability to judge whether contractors were performing as required. We found cases in which the DHS components lacked the capacity to oversee contractor performance due to limited expertise and workload demands. DHS components were also limited in their ability to assess contractor performance in a way that addressed the risk of contracting for services that closely support inherently governmental functions. Assessing contractor performance requires a plan that outlines how services will be delivered and establishes measurable outcomes. However, none of the oversight plans and contract documents we reviewed contained specific measures for assessing contractor performance of selected services. To address this deficiency, we recommended that DHS assess the ability of its workforce to provide sufficient oversight when using these types of contracted services. Limited oversight also is due in part to insufficient data to monitor acquisitions. Our work on procurement methods, such as interagency contracting and performance-based acquisition, has found that DHS does not systematically monitor its use of these contracts to assess whether these methods are being properly managed, or to assess costs, benefits, or other outcomes of these acquisition methods. With regard to interagency contracting, we found that DHS was not able to readily provide data on the amounts spent through different types of contracts or on the fees paid to other agencies for the use of their contracting services or vehicles. 
This lack of information means that DHS cannot assess whether the department could achieve savings through using another type of contracting vehicle. We similarly found that DHS did not have reliable data on performance-based acquisitions to facilitate required reporting, informed decisions, and analysis of acquisition outcomes. For example, our review of contracts at the Coast Guard, CBP, Immigration and Customs Enforcement (ICE), and TSA showed that about 51 percent of the 138 contracts we identified in FPDS-NG as performance-based had none of the required performance-based elements: a performance work statement, measurable performance standards, and a method of assessing contractor performance against performance standards. The unreliability of these data makes it difficult for DHS to be able to accurately report on governmentwide performance targets for performance-based acquisitions. We have recommended that DHS work to improve the quality of FPDS-NG data so that DHS can more accurately identify and assess the quality of the use and outcomes of various procurement methods. Inaccurate federal procurement data is not unique to DHS and is a long-standing governmentwide concern. Our prior work and the work of the General Services Administration’s Inspector General have identified issues with the accuracy and completeness of FPDS and FPDS-NG data, and OMB has stressed the importance of submitting timely and accurate procurement data to FPDS-NG. The Acquisition Advisory Panel has also raised concerns about the accuracy of FPDS-NG data. These circumstances illustrate the magnitude of the challenge DHS faces in developing timely and accurate data to monitor acquisitions. To improve procurement oversight, the CPO established and has implemented a departmentwide program to provide comprehensive insight into each component’s programs and disseminate successful management techniques throughout DHS. 
This program, which is based on a series of component-level reviews, was designed with the flexibility to address specific procurement issues. As such, it could be used to address areas such as performance-based acquisitions, interagency contracting, and the appropriate use of contractors providing services closely supporting inherently governmental functions. Some of the four key oversight reviews have begun under this program, but management assessments, or evaluation of the outcomes of acquisition methods and contracted services, have not been conducted. Our work has found that the CPO continues to face challenges in maintaining the staffing levels needed to fully implement the oversight program, and CPO authority to ensure that components comply with the procurement oversight plan remains unclear. Improving acquisition outcomes has been an ongoing challenge since DHS was established in 2003. Our work has consistently noted that sound acquisition planning, including clearly defining requirements, and ensuring adequate oversight are hallmarks of successful service acquisitions. A sufficient acquisition workforce is also key to properly managing acquisitions. Our body of work has also included many recommendations to the Secretary of Homeland Security to take actions aimed at improving acquisition management, planning, and oversight. While DHS has generally concurred with our recommendations, the department has not always stated how the underlying causes of the deficiencies we have identified will be addressed. Until the department takes needed action to address these causes, it will continue to be challenged to make the best use of its acquisition dollars. Mr. Chairman, this concludes my prepared statement. I would be happy to respond to any questions that you or other members of the subcommittee may have at this time. For further information about this statement, please contact me at (202) 512-4841 or [email protected]. 
Contact points for GAO’s Offices of Congressional Relations and Public Affairs are listed on the last page of this product. Key contributors to this statement were Amelia Shachoy, Assistant Director; Ann Marie Udale; Karen Sloan; and Kenneth Patton. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Since it was created in 2003, the Department of Homeland Security (DHS) has obligated billions of dollars annually to meet its expansive homeland security mission. The department's acquisitions support complex and critical trade, transportation, border security, and information technology investments. In fiscal year 2007, DHS spent over $12 billion on procurements to meet this mission, including spending for complex services and major investments. Prior GAO work has found that while DHS has made some initial progress in developing its acquisition function since 2003, acquisition planning and oversight for procurement and major acquisitions need improvement. This testimony discusses GAO's findings in these areas and is based on GAO's body of work on acquisition management issues. Recognizing the need to improve its acquisition outcomes, DHS has taken some steps to integrate disparate acquisition processes and systems that the component organizations brought with them when the department was formed. However, we have reported that more needs to be done to develop clear and transparent policies and processes for all acquisitions, and to develop an acquisition workforce to implement and monitor acquisitions. With regard to acquisition planning, DHS did not assess the risk of hiring contractors to perform management and professional support services, a practice that has the potential to increase the risk that government decisions may be influenced by, rather than independent from, contractor judgments. Planning for services procured through interagency and performance-based contracting methods was also lacking. For example, DHS did not always consider alternatives to ensure good value when selecting among interagency contracts. Shortcomings in DHS's use of a performance-based approach for complex acquisitions included a lack of well-defined requirements, a complete set of measurable performance standards, or both, at the time of contract award or the start of work. 
Contracts for several investments we reviewed experienced cost overruns, schedule delays, or less than expected performance. Acquisition oversight also has consistently been identified as needing improvement. While the Chief Procurement Officer (CPO) has recently implemented a departmentwide oversight program, evaluations of the outcomes of acquisition methods and contracted services have not yet been conducted. Further, the CPO continues to face challenges in maintaining the staffing levels needed to fully implement the oversight program, and CPO authority to ensure that components comply with the procurement oversight plan remains unclear.
A DOD directive assigns the Secretary of the Army the mission of the single manager for conventional ammunition within DOD. The Under Secretary of Defense for Acquisition, Technology, and Logistics is responsible for providing policy and guidance for the single manager for conventional ammunition’s mission, and ensuring compliance with the single manager’s responsibilities. Section 806 of the Strom Thurmond National Defense Authorization Act for Fiscal Year 1999 (Public Law 105-261) vests the single manager with the authority to restrict the procurement of conventional ammunition to sources within the national technology and industrial base. The Secretary of the Army was authorized to delegate, within the Army, this authority. A January 28, 2003, memorandum from the Secretary of the Army delegated authority to make section 806 determinations to the Assistant Secretary of the Army for Acquisition, Logistics, and Technology. In an April 16, 2003, memorandum, that office re-delegated limited section 806 authority to the Program Executive Officer for Ammunition in his capacity as the single manager’s executor. High-level planning guidance establishes general guidelines for the services to determine how much ammunition is needed to conduct military operations. As the single manager for conventional ammunition, the Army is responsible for coordinating with all the military services to meet conventional ammunition requirements. Through the PEO for Ammunition, the Army manages small and medium caliber ammunition, including small arms, mortar, automatic cannon, and ship gun ammunition (see table 1). All ammunition cartridges are composed of several components that must be assembled at different stages of production. See figure 1 for an example of a 5.56mm cartridge. It takes, on average, 23 months from the time a production order is placed until final delivery. Since World War II, DOD has relied primarily on a government-owned base to meet its conventional ammunition needs. 
During the Cold War, there were as many as 34 government-owned plants producing conventional ammunition. The end of the Cold War and subsequent changes to defense missions resulted in declining requirements. At its peak in 1985, funding for conventional ammunition was $4.3 billion; by 1999, funding had dropped by more than half to about $2 billion. Currently, there are 14 government-owned ammunition plants, 11 of which are contractor-operated. Three of these 11 facilities—Lake City (Missouri), Milan (Tennessee), and Radford (Virginia)—are the government-owned, contractor-operated producers of DOD’s small and medium caliber ammunition. Lake City is the primary producer of small caliber ammunition. Lake City is operated by a commercial ammunition producer under a contract that runs from fiscal year 1999 through fiscal year 2008. The contract initially called for a minimum production capacity amount of 350 million rounds and a maximum of 800 million rounds of 5.56mm, 7.62mm, and .50-caliber ammunition. The PEO increased the upper capacity requirement to 1.5 billion rounds per year to be accomplished by early 2006 through modifications made to the original contract. The PEO relies on annual contracts with three commercial producers within the national technology and industrial base for most of DOD’s supply of 20mm, 25mm, and 30mm medium caliber ammunition. Currently, one of these producers is manufacturing medium caliber ammunition at Radford which specializes in the production of propellants and explosives; the other two commercial producers manufacture medium caliber ammunition at their own facilities. Radford’s current contract was awarded in fiscal year 2003 and has been renewed on an annual basis through fiscal year 2005. Milan is the government’s primary producer of 40mm ammunition. The contract with the commercial operator at Milan runs from fiscal year 1998 through fiscal year 2006. 
DOD’s increased requirements for small and medium caliber ammunition have largely been driven by increased weapons training requirements, dictated by the Army’s transformation to a more self-sustaining and lethal force—which was accelerated after the attacks of September 11, 2001— and by the deployment of forces to conduct recent U.S. military actions in Afghanistan and Iraq. Since 2000, requirements for small caliber ammunition have more than doubled, and requirements for medium caliber ammunition have almost doubled. Over the last decade, the Army began transforming its warfighting capabilities to respond more effectively to the growing number of peacekeeping operations, small-scale contingencies, and nontraditional threats, such as terrorism. According to Army officials, the transformation is the most comprehensive change in the Army in over a century and will affect all aspects of its organizations, training, doctrine, leadership, and strategic plans as well as its acquisitions. As part of its transformation, the Army is planning for its forces to be self-sustaining and capable of generating combat power and contributing decisively to combat operations. Following the September 11, 2001, attacks, the Army accelerated its force transformation to mobilize and deploy soldiers in support of various missions, most notably warfighting operations in Afghanistan and Iraq. To meet its force transformation objectives, the Army began requiring all soldiers to gain additional weapons qualifications training after they complete initial basic training. The Army also began requiring that personnel in all deployed elements, including combat support and combat service support units, achieve and maintain greater proficiency in the use of specified weapons. For example, beginning in late 2001, the Army established a policy requiring each soldier to qualify twice a year on small caliber firearms instead of once a year as previously required. 
According to Army officials, in addition to the increased annual training requirements, small caliber ammunition needs have increased by an additional 66 percent due to a combination of the mobilization of units and contingency training, for example, training to react to defend against attacks on truck convoys; and, to a lesser extent, due to operations in Iraq and Afghanistan. The increased requirements are likely to continue to a significant extent beyond current operational deployments due to the increased training requirements. Between fiscal years 2000 and 2005, total requirements for small caliber ammunition increased from about 730 million to nearly 1.8 billion rounds (see figure 2). The 5.56mm rounds—used in the M16 rifle, the standard weapon used by soldiers—accounted for much of the small caliber increase (see table 2). Medium caliber requirements have also increased over the past few years. Between fiscal years 2000 and 2005, medium caliber requirements almost doubled, from 11.7 million rounds to almost 22 million rounds (see figure 3). The 40mm rounds represent the bulk of the increases between fiscal years 2000 and 2005 (see table 3). In an effort to help meet the increased need for small and medium caliber ammunition in the near term, the PEO upgraded the equipment at the Lake City, Milan, and Radford Army Ammunition plants. While these upgrades enabled Milan and Radford—the government-owned, contractor-operated producers of medium caliber ammunition—to meet DOD’s requirements, Lake City—the small caliber ammunition producer—was unable to meet DOD’s fiscal year 2004 requirement of about 1.6 billion rounds of ammunition. As a result, the PEO made additional procurements from the commercial market to make up for fiscal year 2004 shortfalls. The three government-owned, contractor-operated plants that produce small and medium caliber ammunition were built in 1941. 
Between fiscal years 2001 and 2005, DOD funded a total of about $93.3 million to upgrade these facilities. This included replacement or refurbishment of ammunition cartridge production equipment and other facility improvements. According to a PEO official, ongoing modernization is needed for the Army ammunition plants to continue to operate into the future, and in the case of the Lake City Army Ammunition Plant, additional equipment and facility upgrades will be needed to increase capacity to address future needs. According to a PEO official, the Army plans to replace and refurbish ammunition production equipment through fiscal year 2011. See table 4 for examples of funded modifications. According to PEO officials, the national technology and industrial base has been able to meet the increased requirements for medium caliber ammunition. In an effort to meet DOD’s small ammunition requirements, the PEO initiated additional modernization efforts at Lake City to increase production from a maximum capacity of 800 million rounds in fiscal year 2001 to approximately 1.2 billion rounds per year in July 2004. Despite this increased production capacity, Lake City was unable to meet fiscal year 2004 requirements for small caliber ammunition. Consequently, the PEO was forced to rely on other ammunition sources. While many commercial ammunition producers responded to the PEO’s sources sought announcements, few were able to satisfy DOD’s ammunition specifications. For example, seven of nine commercial producers responding to the PEO’s announcement for a specific type of 5.56mm ammunition were unable to meet the specifications, such as producing metal cartridge cases. For an announcement for different types of .50-caliber ammunition, none of the 10 respondents were able to meet all of the specifications. Several respondents were foreign ammunition producers. According to officials from U.S. 
commercial ammunition producers, the recent surge in DOD’s small caliber ammunition requirements could only be met by accessing available worldwide capacity. The PEO was eventually able to find commercial producers qualified to fill DOD’s small caliber ammunition shortfall in fiscal year 2004. These included Israel Military Industries and Olin-Winchester—a U.S. ammunition producer. According to data provided by the PEO, almost 313 million rounds of 5.56mm, 7.62mm, and .50-caliber ammunition were purchased from commercial ammunition producers in fiscal year 2004. According to a PEO official, DOD paid about $10 million more than a similar amount of small caliber ammunition would cost from Lake City. However, Lake City could not meet the 2004 requirement. Although DOD paid a premium as a result of the need to procure ammunition outside the government-owned base, we did not analyze whether maintaining a more robust base would have been cost-effective. According to DOD officials, the increased buys for small caliber ammunition are being funded through supplemental appropriations. Tables 5 and 6 illustrate how much funding for small and medium caliber ammunition acquisitions has been proposed by the President’s Budget, and the final funding including supplemental funds for fiscal years 2001 to 2005. The PEO has taken certain steps to ensure that the national technology and industrial base can meet future small and medium caliber ammunition needs. As part of these efforts, the PEO is attempting to build flexibility into its acquisition system to address near-term fluctuations in the requirement for small caliber ammunition. In addition, the PEO has initiated a longer-term planning process to better manage the national technology and industrial base for conventional ammunition. However, the PEO lacks access to some information needed to effectively implement certain planning initiatives, and other initiatives require actions that are beyond the purview of the PEO. 
Furthermore, the PEO has not established the performance metrics necessary to ensure accountability. The PEO is taking several steps to increase flexibility in the small caliber ammunition procurement process. First, the PEO plans to increase Lake City’s production capacity to 1.5 billion rounds per year by March 2006 through additional modernization. Moreover, the PEO is in the process of selecting a commercial contractor that will provide an additional 300 million small caliber ammunition rounds per year. This commercial producer will serve as a second source, in addition to Lake City, to meet small caliber ammunition needs. The contract is to be awarded in mid-2005, with initial deliveries to start in January 2007. Also, the PEO is requiring that this commercial source be able to supply an additional 200 million rounds of small caliber ammunition if requirements continue to increase. In the event that future small caliber ammunition requirements were to decrease, which would likely happen if warfighting operations were scaled back, the PEO plans to reduce the amount of ammunition produced at Lake City while maintaining the 300 million rounds of ammunition production provided by the commercial producer. According to a PEO official, the reduction at Lake City will be accomplished by reducing the number of work shifts rather than by storing or mothballing equipment. Therefore, a future need for increased production could be met by adding shifts. By building this flexibility into production at Lake City, which the PEO can expand or contract under a new contract starting in fiscal year 2008, the PEO hopes to avoid future additional buys while retaining the capacity to expand production when needed.
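As a rough sketch of the arithmetic behind this flexibility, the capacity figures cited above can be tallied; the figures come from the report, while the code itself is only an illustration:

```python
# Tally of the planned small caliber production capacity described above.
# Capacity figures come from the report; this sketch is illustrative only.

lake_city = 1_500_000_000      # planned Lake City capacity by March 2006 (rounds/year)
second_source = 300_000_000    # commercial second-source contract (rounds/year)
surge_option = 200_000_000     # additional commercial surge capacity, if required

baseline = lake_city + second_source
with_surge = baseline + surge_option

print(f"Baseline capacity: {baseline / 1e9:.1f} billion rounds per year")
print(f"With surge option: {with_surge / 1e9:.1f} billion rounds per year")
```

Under these figures, the baseline comes to 1.8 billion rounds per year, matching the fiscal year 2005 small caliber requirement cited in this report, with the surge option providing headroom to 2.0 billion.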
The PEO has initiated a planning process to ensure that the capacity of the national technology and industrial base for conventional ammunition can respond effectively and efficiently to future DOD ammunition needs, including small and medium caliber ammunition requirements. While the process is ongoing, the PEO lacks both the information needed to implement all aspects of the process effectively and the performance metrics needed to measure annual progress and ensure accountability. In November 2003, the PEO issued a plan with the following five goals: (1) balance industrial base and acquisition management risk; (2) transform to meet current and future requirements; (3) incentivize industry to reinvest in capital equipment and processes; (4) modernize required manufacturing and logistics capacity; and (5) operate effectively and efficiently. In addition to these five goals, the plan establishes 30 initiatives that are intended to help meet the goals. (See appendix II for a list of the 30 initiatives by goal.) The PEO has begun taking actions to implement several of the 30 initiatives to achieve the plan’s goals. For example, the PEO has begun implementing initiatives that address determining whether procurements of conventional ammunition should be restricted to sources within the national technology and industrial base, as provided by section 806 of Public Law 105-261. The PEO, the Joint Munitions Command, and the Defense Contract Management Agency have worked together to begin developing a risk assessment tool that will include industrial base data such as requirements, suppliers, capacities, deficiencies, production schedules, and inventories. The tool will also be capable of generating reports with “what-if” scenarios to anticipate production problems.
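One core screen such a tool can support is flagging single-point failures, that is, items whose production depends on a sole-source component. The sketch below is hypothetical: the item, component, and supplier names are invented for illustration and do not come from the report or from the tool itself.

```python
# Hypothetical sketch of a single-point-failure screen of the kind a risk
# assessment tool could support: flag any ammunition item that depends on
# a component with only one qualified supplier.
# All item, component, and supplier names below are invented.

from collections import defaultdict

# (item, component, supplier) records
records = [
    ("5.56mm ball", "cartridge case",  "Supplier A"),
    ("5.56mm ball", "propellant",      "Supplier B"),
    ("5.56mm ball", "propellant",      "Supplier C"),
    ("25mm HE",     "fuze",            "Supplier D"),
    ("25mm HE",     "projectile body", "Supplier E"),
    ("25mm HE",     "projectile body", "Supplier F"),
]

suppliers = defaultdict(set)
for item, component, supplier in records:
    suppliers[(item, component)].add(supplier)

# An item is at risk if any of its components has exactly one supplier.
single_points = sorted(
    {item for (item, component), s in suppliers.items() if len(s) == 1}
)
print("Items with a sole-source component:", single_points)
```

In this invented data set, both items are flagged: the 5.56mm round's cartridge case and the 25mm round's fuze each have only one supplier, which is the condition the Joint Munitions Command and Defense Contract Management Agency screening described below is meant to surface.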
To date, the Joint Munitions Command and the Defense Contract Management Agency have identified hundreds of conventional ammunition items that could not be produced if the sole supplier of a component needed to build the item suddenly became unavailable. As part of its efforts to implement its section 806 responsibilities, the PEO is encouraging all ammunition program managers to provide acquisition plans for their conventional ammunition needs, as required by regulation. This process is intended to help the PEO ensure that other Army and DOD components are developing procurement strategies that adhere to section 806. Despite these actions, the implementation of the planning process has two major weaknesses. First, the PEO lacks the information needed to effectively implement several initiatives. For example, conducting a business case analysis to determine the future size and scope of the government-owned base to preserve critical capabilities and reduce costs is key to meeting the PEO goals. However, implementing the business case analysis is outside the PEO’s scope of responsibility, and who will need to take action and what needs to be done to develop the business case have not yet been specified. Similarly, in the case of the acquisition plans, the PEO has encouraged ammunition program managers to submit acquisition plans so that determinations can be made as to what should be procured within the national technology and industrial base, as called for in section 806. Some service program managers outside the Army have not been forthcoming with all the information needed to make these determinations. The authority to require program managers to submit these plans rests with the Under Secretary of Defense for Acquisition, Technology and Logistics. Second, the performance measures in place are not sufficient to monitor progress made in meeting the plan’s goals and objectives and ensure accountability.
The Government Performance and Results Act of 1993 (Public Law 103-62) provides guiding principles that agencies should use to gauge progress toward long-term goals. These principles include identifying required resources such as staff, schedules, and costs. Additionally, the act requires agencies to report actual performance against performance goals, the reasons certain goals were not met, and future planned actions to meet stated goals. The 30 initiatives included in the PEO’s plan establish some accountability for implementation because they identify the objectives for each goal and general performance measures. Further, the PEO is developing plans for each of the 30 initiatives—14 of which have been developed. These plans contain information on major activities or actions that must be taken to complete an initiative and identify the PEO staff responsible for managing each initiative. However, the PEO has not yet completed the development of implementation plans. Furthermore, the initiatives in the strategic plan do not include key results-oriented principles such as identifying the key resources needed to meet the plan’s goals and objectives, including costs, schedules, and the DOD components responsible for individual initiatives. In addition, the plan does not include an annual review process to compare actions taken with desired performance. Such evaluations could be useful in achieving the plan’s goals. Ensuring that the industrial base can meet DOD’s fluctuating small and medium caliber requirements is a significant challenge. Unforeseen events, such as the terrorist attacks of September 11, 2001, and subsequent military deployments, make predicting future requirements difficult. However, it is imperative that the warfighter be provided with sufficient ammunition to carry out missions to counter ongoing and emerging threats without amassing wasteful unused stockpiles.
While DOD has been able to meet its near-term ammunition requirements, it has had to rely on foreign suppliers to make up for some shortfalls. Implementing a strategy for DOD’s long-term ammunition needs should include steps to ensure that future ammunition acquisitions are both cost-effective and timely. The likelihood that the current strategy will achieve its goals and objectives could be enhanced by (1) ensuring that the information needed to implement initiatives effectively is provided to those responsible for implementation and (2) ensuring that the plan identifies the key resources needed to achieve its goals and objectives and includes an annual review process to compare actions taken with desired performance. To improve DOD’s ability to manage the national technology and industrial base for small and medium caliber ammunition and to address risks to that base, we recommend that the Secretary of Defense direct the Under Secretary of Defense for Acquisition, Technology, and Logistics to ensure that needed information on planned ammunition procurements is provided to the Program Executive Officer for Ammunition, and the Assistant Secretary of the Army for Acquisition, Logistics, and Technology to ensure that the Program Executive Officer for Ammunition identifies and provides key resources and develops metrics for measuring annual progress in meeting planned goals and objectives. In commenting on a draft of this report, DOD concurred with both recommendations. In response to our recommendation that DOD ensure that needed information on planned ammunition procurements be provided to the Program Executive Officer for Ammunition, the Under Secretary of Defense for Acquisition, Technology, and Logistics plans to issue direction to the services emphasizing the need to submit small and medium caliber ammunition plans to the Single Manager for Conventional Ammunition.
In response to our recommendation that the Program Executive Officer for Ammunition identify and provide key resources and develop metrics for measuring annual progress in meeting planned goals and objectives, the Under Secretary of Defense for Acquisition, Technology, and Logistics plans to provide direction to the Assistant Secretary of the Army for Acquisition, Logistics, and Technology and the Program Executive Officer for Ammunition to identify and provide key resources and develop/refine metrics for measuring annual progress in meeting planned goals and objectives. (See appendix III for agency comments.) In addition, DOD provided technical comments that we have incorporated as appropriate. Copies of this report will be sent to interested congressional committees and the Secretary of Defense. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Please contact me at (202) 512-4841 if you have any questions regarding this report. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Major contributors to this report were Thomas Denomme, Marie Ahearn, Tony Beckham, Michael Gorin, Arturo Holguín, and Karen Sloan. To identify changes over the past several years that have increased the requirement for small and medium caliber ammunition and assess the actions DOD has taken to address the increased requirement, we reviewed documentation and data, and interviewed DOD officials from the Office of the Under Secretary of Defense for Acquisition, Technology and Logistics; the Office of the Assistant Secretary of the Army for Acquisition, Logistics and Technology; the Office of the Army Deputy Chief of Staff for Operations and Plans; the Office of the PEO for Ammunition; and the Training and Doctrine Command. 
Specifically, we reviewed policy documents governing (1) DOD’s Transformational Planning Guidance, (2) Army training requirements, (3) small and medium caliber ammunition production requirements, and (4) other major operational requirements related to small and medium caliber ammunition needs. We also interviewed DOD and Army officials and obtained inventory data to determine small and medium caliber ammunition trends and to understand how selected policies have affected the industrial base for ammunition. For the purposes of this review, we collected data on 5.56mm, 7.62mm, 9mm, and .50-caliber small caliber ammunition, as well as 20mm, 25mm, 30mm, and 40mm medium caliber ammunition. We also spoke with two commercial ammunition producers to obtain a better understanding of the supplier base. Finally, we examined budget data provided by officials from the Office of the Assistant Secretary of the Army for Financial Management and Comptroller to determine the funding for small and medium caliber ammunition for fiscal years 2001 through 2005. To determine how DOD plans to ensure that it can meet future small and medium caliber ammunition needs, we interviewed officials from the previously mentioned DOD offices, including the Office of the PEO for Ammunition, to determine the status of their planning efforts.

Appendix II: The plan’s initiatives and associated objectives and performance metrics, by strategic goal

Strategic Goal 1: Balance industrial base and acquisition management risk

Initiatives:
- Synchronize ammunition procurements to maintain the required manufacturing capabilities and capacities.
- Define a structured decision process to facilitate synchronizing. (Keep lines warm.)
- Effectively implement section 806 of Public Law 105-261 (Strom Thurmond National Defense Authorization Act for Fiscal Year 1999).
- Use science-based production and prototyping as the principal means for attaining surge capabilities and meeting emergency requirements.
- Partner with industry and academia to assist in advancing the state of manufacturing readiness.
- Define and require an industrial base readiness assessment in all acquisition plans and strategies.
- Pursue the feasibility and overall business case for government-owned, contractor-operated Army ammunition plant sale, long-term lease, and/or consolidation options, focusing on preserving critical capabilities and reducing costs. (Pending the fiscal year 2005 Base Realignment and Closure process outcome and Army industrial base transformation guidance.)

Objectives:
- Ensure that competencies and capabilities are available to meet requirements.
- Balance cost, schedule, and performance with the need to have capability.

Performance metrics:
1. Munitions readiness ratings.
2. Utilized capacity and footprint.
3. Government-owned, contractor-operated/government-owned, government-operated operating costs.
4. Strategic outload capabilities (facilitization, staffing, and skills).

Strategic Goal 2: Transform to meet current and future requirements

Initiatives:
- Develop a replenishment definition to increase planning and industrial base sizing consistencies.
- Establish an integrated data environment process and a centralized industrial base assessment tool; the tool will include requirements, suppliers, capacities, deficiencies, production schedules, stockpile, metrics, and “what-if” report generation.
- Pursue the feasibility and overall business case for government-owned, contractor-operated Army ammunition plant sale, long-term lease, and/or consolidation options, focusing on preserving critical capabilities and reducing costs. (Pending the fiscal year 2005 Base Realignment and Closure process outcome and Army industrial base transformation guidance.)
- Sell non-value-added, unutilized production equipment and use the revenue for advancing manufacturing technology capability, environmental remediation, and reducing Army ammunition plant operating costs.
- Utilize science-based production methodologies and knowledge transfer/access to industry for ramp-up capability and/or capacity, as an alternative strategy to laying away facilities/equipment.
- Establish robust manufacturing modernization funding lines.

Objectives:
- Increase manufacturing/logistics capability and readiness.
- Reduce government-owned, contractor-operated Army ammunition plant operating costs/footprint and dispose of excess Army ammunition plant capacity.

Performance metrics:
1. Existence and clarity of replenishment requirements definition and strategy.
2. Percentage of acquisition strategies/plans utilizing the industrial base assessment tool for planning.
3. Munitions readiness ratings.
4. Trend of single point failure condition.
5. Logistics Modernization Program and industrial base preparedness.
6. Meet combatant command operations planning requirements.
7. Correct positioning of ammunition stocks to meet peacetime and wartime requirements.
8. Government-owned, contractor-operated/government-owned, government-operated operating costs.
9. Utilized capacity and footprint.
10. Unit cost trends for critical ammunition end items.
11. Army and industry manufacturing modernization investments.
12. Munitions readiness ratings.
13. Logistics critical skills and capability sustainment assessments.
14. Strategic outload capabilities (facilitization, staffing, and skills).

Strategic Goal 3: Incentivize industry to reinvest in capital equipment and processes

Initiatives:
- Establish multi-year contracting strategies by ammunition family (14 categories).
- Selectively promote initiatives from the Armament Retooling and Manufacturing Support Act of 1992, which authorizes the Army to permit commercial firms to use facilities located at government-owned, contractor-operated ammunition plants for commercial purposes, and identify projects for production modernization and transformation.
- Explore and implement indemnification on a selected basis.
- Promote long-term relationships/partnerships with industry.
- Offer government-owned equipment and personnel for supplier use.
- Initiate a manufacturing modernization loan program to provide low interest rates to the ammunition supply chain.
- Facilitate use of science-based production modeling and process controls.
- Award incentive production contracts that match government funds for contractor investment in capital equipment and processes.

Objectives:
- Increase industry investment in equipment and facilities.

Performance metrics:
1. Number of suppliers in high-risk financial condition.
2. Financial viability of suppliers of critical core capabilities.
3. Industry investment applied to modernizing manufacturing processes, equipment, and facilities.

Strategic Goal 4: Modernize required manufacturing and logistics capacity

Initiatives:
- Establish robust manufacturing modernization funding lines.
- Identify, consolidate, and prioritize production deficiencies in the organic and commercial sectors, aligning priorities with the program managers’ needs.
- Integrate into ammunition contracts a percentage required for capital improvement initiatives, with contractor matching.
- Establish science-based production methodologies at critical single point failure locations and transfer prototyping knowledge/capabilities to industry.
- Leverage and coordinate manufacturing technology (ManTech) and research, development, technology, and engineering efforts from all services.

Objectives:
- Increase manufacturing and logistics readiness to meet current and future requirements.

Performance metrics:
1. Munitions readiness ratings.
2. Manufacturing readiness levels for future munitions.
3. Army and industry manufacturing modernization investments.
4. Number of single point failures adopting science-based production methodologies.

Strategic Goal 5: Operate effectively and efficiently

Initiatives:
- Ensure that military services and industry participate in strategic planning activities.
- Actively participate in industry organizations and events (e.g., the National Defense Industrial Association and the Munitions Industrial Base Task Force).
- Level and consolidate procurement buys to the maximum extent practicable.
- Identify operating inefficiencies and formulate corrective actions. (This initiative was deleted in the November 2004 update of the plan.)
- Incentivize the implementation of best business practices in single manager for conventional ammunition processes and at key government and commercial suppliers.
- Pursue the feasibility and overall business case for government-owned, contractor-operated Army ammunition plant sale, long-term lease, and/or consolidation options, focusing on preserving critical capabilities and reducing costs. (Pending the fiscal year 2005 Base Realignment and Closure process outcome and Army industrial base transformation guidance.)
- Promote/incentivize contractors to pursue state-based, self-help programs among ammunition suppliers.
- Promote commonality of components across/within ammunition families.
- Develop and implement an industrial base integrated data environment using a web-based assessment tool and report-generating system that captures production data, stockpile condition, requirements, and specific industrial base metrics.
- Identify and benchmark best practices in production and facility management.
- Baseline, characterize, and monitor the state of the industrial base supply chain.
- Utilize the Joint Munitions Command production base readiness measurement scheme to characterize the risk of the industrial base in meeting requirements.

Objectives:
- Reduce ammunition life-cycle costs.
- Maximize customer satisfaction.
- Reduce response time in providing ammunition to the joint warfighter.
- Understand the condition and posture of the ammunition and logistics base.

Performance metrics:
1. Number of annual meetings with all services and industry.
2. Customer satisfaction survey.
3. Inefficiencies that have been corrected.
4. Utilized capacity and footprint.
5. Customer satisfaction survey ratings.
6. Munitions readiness ratings.
7. Number of self-help projects implemented by ammunition suppliers and associated value-engineering savings.
8. Meet combatant command operations planning requirements.
9. Correct positioning of ammunition stocks to meet peacetime and wartime requirements.
10. Percentage of production base plan converted to the integrated data environment.
11. Establishment and completion of a logistics industrial base plan.
12. Percentage of baseline metrics collected.
13. Munitions readiness ratings.
14. Strategic outload capabilities (facilitization, staffing, and skills).
Following the end of the Cold War, the Department of Defense (DOD) significantly reduced its purchases of small and medium caliber ammunition and reduced the number of government-owned plants that produce small and medium caliber ammunition. Since 2000, however, DOD's requirements for these types of ammunition have increased notably. Because the success of military operations depends in part on DOD having a sufficient national technology and industrial base to meet its ammunition needs, Congress asked GAO to review DOD's ability to assess whether its supplier base can meet small and medium caliber ammunition needs. Specifically, we (1) identified changes over the past several years that have increased the requirement for small and medium caliber ammunition, (2) assessed the actions DOD has taken to address the increased requirement, and (3) determined how DOD plans to ensure that it can meet future small and medium caliber ammunition needs. DOD's increased requirements for small and medium caliber ammunition over the past several years are largely the result of increased weapons training requirements needed to support the Army's transformation to a more self-sustaining and lethal force--an effort accelerated after the terrorist attacks of September 11, 2001--and the deployment of forces to conduct recent U.S. military actions in Afghanistan and Iraq. Between fiscal years 2000 and 2005, total requirements for small caliber ammunition more than doubled, from about 730 million to nearly 1.8 billion rounds, while total requirements for medium caliber ammunition increased from 11.7 million rounds to almost 22 million rounds. DOD has initiated several steps to meet the increased demand, including funding about $93.3 million for modernization improvements at the three government-owned ammunition plants producing small and medium caliber ammunition.
DOD is currently able to meet its medium caliber requirement through modernization efforts at the government-owned ammunition plants and through contracts with commercial producers. The government-owned plant producing small caliber ammunition cannot meet the increased requirements, even with these modernization efforts. Also, commercial producers within the national technology and industrial base have not had the capacity to meet these requirements. As a result, DOD has had to rely at least in part on foreign commercial producers to meet its small caliber ammunition needs. DOD has taken steps to ensure that the national technology and industrial base can meet future small caliber ammunition needs by building flexibility into the acquisition system to address fluctuations. In addition, a planning process has been put in place to ensure that the base can respond to longer-term DOD ammunition needs, including small and medium caliber ammunition. While the process is ongoing, information to effectively implement the plan and timely performance measures to ensure accountability are lacking.
The U.S. government provides GPS service free of charge and plans to invest more than $5.8 billion over the next 5 years in the GPS satellites and ground control segments. The Department of Defense (DOD) develops and operates GPS, and an interdepartmental committee—co-chaired by DOD and the Department of Transportation—manages the U.S. space-based positioning, navigation, and timing infrastructure, which includes GPS. DOD also provides most of the funding for GPS. The Air Force is responsible for GPS acquisition and is in the process of modernizing GPS to enhance its performance, accuracy, and integrity. The modernization effort includes GPS IIF and IIIA, two satellite acquisition programs that are to provide new space-based capabilities and replenish the satellite constellation; the ground control segment hardware and software; and user equipment for processing modernized GPS capabilities. Other countries are also developing their own independent global navigation satellite systems that could offer capabilities that are comparable, if not superior, to GPS. In recent years under the IIF program, the Air Force has struggled to build GPS satellites within cost and schedule goals. It encountered significant technical problems that still threaten its delivery schedule, and it struggled with a different contractor for the IIF program. These problems were compounded by an acquisition strategy that relaxed oversight and quality inspections, as well as by multiple contractor mergers and moves and the addition of new requirements late in the development cycle. GPS was not the only space program started in the 1990s to face such challenges. In fact, DOD continues to face cost overruns in the billions of dollars, schedule delays adding up to years, and performance shortfalls stemming from programs that began in the 1990s and after that were poorly structured, managed, and overseen. What sets GPS apart from those programs is that GPS had already been “done” before.
The GPS IIF program was far less ambitious than efforts to advance missile warning and weather monitoring capabilities, for example. Our report documents the history of the IIF program and the decisions made early on that weakened the foundation for program execution. What is important to highlight today is that the program is still experiencing technical problems that threaten its delivery schedule. For example, last year, during the first phase of thermal vacuum testing (a critical test to determine space-worthiness that subjects the satellite to space-like operating conditions), one transmitter used to send the navigation message to users failed. The program suspended testing in August 2008 to allow time for the contractor to identify the causes of the problems and take corrective actions. The program also had difficulty maintaining the proper propellant fuel-line temperature; this, in addition to power failures on the satellite, delayed final integration testing. In addition, the satellite’s reaction wheels, used for pointing accuracy, were redesigned because on-orbit failures on similar reaction wheels were occurring on other satellite programs—this added about $10 million to the program’s cost. As a result of these problems, the cost to complete GPS IIF will be about $1.6 billion—about $870 million over the original cost estimate of $729 million. The launch of the first IIF satellite has been delayed until November 2009—almost 3 years late. The Air Force is taking measures to prevent the problems experienced on the GPS IIF program from recurring on the GPS IIIA program.
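The cost growth above can be restated as a percentage overrun; this is a simple restatement of the report's figures, and because the $1.6 billion total is rounded, the computed overrun differs trivially from the $870 million cited:

```python
# GPS IIF cost growth, restated from the figures above.
original_estimate = 729_000_000   # original IIF cost estimate (dollars)
cost_to_complete = 1_600_000_000  # approximate cost to complete (rounded)

overrun = cost_to_complete - original_estimate
print(f"Overrun: roughly ${overrun / 1e6:.0f} million, "
      f"or {overrun / original_estimate:.0%} of the original estimate")
```

In other words, the program's cost more than doubled relative to its original estimate.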
Some of the measures the Air Force is taking include: using incremental or block development, where the program would follow an evolutionary path toward meeting needs rather than attempting to satisfy all needs in a single step; using military standards for satellite quality; conducting multiple design reviews, with the contractor being held to military standards and deliverables during each review; exercising more government oversight and interaction with the contractor and spending more time at the contractor’s site; using an improved risk management process, where the government is an integral part of the process; not allowing the program manager to adjust the GPS IIIA program scope to meet increased or accelerated technical specifications, system requirements, or system performance; and conducting an independent technology readiness assessment of the contractor design once the preliminary design review is complete. These efforts are not trivial. The primary causes of space acquisition problems in our view include (1) the tendency to start space programs too early, that is, before there has been assurance that the capabilities being pursuing can be achieved within resources and time constraints and (2) the tendency to attempt to achieve all requirements in one step rather than gradually. The GPS IIIA program was structured to avoid these problems and ensure the program has the right knowledge for moving forward into the acquisition process. Moreover, our work has cited prior acquisition strategies in which the lack of contractor oversight was a problem. Again, the actions being taken on GPS IIIA put controls in place to strengthen oversight and government involvement. We also recognize that the GPS IIIA program took steps to produce realistic cost estimates, which has generally not been done in the past. Nevertheless, there is still a high risk that the Air Force will not meet its schedule for GPS. 
First, it is aiming to deploy the GPS IIIA satellites 3 years faster than the IIF satellites. Second, the time period between contract award and first launch for GPS IIIA is shorter than for most other major space programs we have reviewed. Third, GPS IIIA is not simply a matter of replicating the IIF program. Though the contractor has had previous experience with GPS, it is likely that the knowledge base will need to be revitalized. The contractor is also being asked to develop a larger satellite bus to accommodate the future GPS increments and to increase the power of a new military signal by a factor of ten. In view of these and other schedule issues, we believe that there is little room in the schedule to accommodate difficulties that the contractor or program may face. Where does this leave the wide span of military, civil, and other users of GPS? If the Air Force does not meet its schedule goals for development of GPS IIIA satellites, there will be an increased likelihood that in 2010, as old satellites begin to fail, the overall GPS constellation will fall below the number of satellites required to provide the level of GPS service that the U.S. government is committed to providing. The performance standards for both (1) the standard positioning service provided to civil and commercial GPS users and (2) the precise positioning service provided to military GPS users commit the U.S. government to at least a 95 percent probability of maintaining a constellation of 24 operational GPS satellites. Because there are currently 31 operational GPS satellites of various blocks, the near-term probability of maintaining a constellation of at least 24 operational satellites remains well above 95 percent. However, DOD predicts that over the next several years many of the older satellites in the constellation will reach the end of their operational life faster than they will be replenished, and that the constellation will, in all likelihood, decrease in size.
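The 95 percent availability commitment described above is, at bottom, a probability calculation over the constellation. The sketch below is only an illustration, not DOD's actual reliability model, and the per-satellite survival probabilities are invented; it computes the chance that at least 24 of 31 satellites remain operational given an independent survival probability for each.

```python
# Illustrative constellation-availability calculation: probability that at
# least 24 satellites remain operational, given an independent survival
# probability for each satellite. The probabilities below are invented,
# not DOD's actual satellite reliability data.

def prob_at_least(k, survival_probs):
    """Poisson-binomial tail: P(number of operating satellites >= k)."""
    # dist[j] = probability that exactly j satellites are operating
    dist = [1.0]
    for p in survival_probs:
        new = [0.0] * (len(dist) + 1)
        for j, d in enumerate(dist):
            new[j] += d * (1 - p)    # this satellite fails
            new[j + 1] += d * p      # this satellite survives
        dist = new
    return sum(dist[k:])

# 31 satellites: a mix of newer (high survival) and aging (lower survival)
fleet = [0.99] * 12 + [0.95] * 10 + [0.80] * 9
print(f"P(at least 24 operational) = {prob_at_least(24, fleet):.3f}")
```

With these illustrative inputs the probability stays well above 95 percent; as high-reliability satellites in the list are replaced by aging ones with lower survival probabilities, the same calculation falls below the 95 percent standard, which is the dynamic behind DOD's projected dip.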
Based on the most recent satellite reliability and launch schedule data approved in March 2009, the estimated long-term probability of maintaining a constellation of at least 24 operational satellites falls below 95 percent during fiscal year 2010 and remains below 95 percent until the end of fiscal year 2014, at times falling to about 80 percent. See figure 1 for details. Such a gap in capability could have wide-ranging impacts on GPS users, though the exact impact is hard to define precisely, as it would depend on which satellites stop operating. To illustrate, however, the military could see a decrease in the accuracy of precision-guided munitions that rely on GPS to strike their targets. Disruptions in service could require military forces either to use larger munitions or to use more munitions on the same target to achieve the same level of success. Operators of intercontinental commercial flights, which use predicted satellite geometry over planned navigation routes, may have to delay, cancel, or reroute flights. Enhanced 911 services, which rely on GPS to precisely locate callers, could lose accuracy, particularly when operating in urban canyons or mountainous terrain. The Air Force is aware that, over the next several years, there is some risk that the number of satellites in the GPS constellation could fall below the required 24 satellites, and that this risk would grow significantly if the development and launch of GPS IIIA satellites were delayed by several years. Consequently, Air Force Space Command has established an independent review team to examine the risks and consequences of a smaller constellation for military and civil users. There are measures the Air Force and others can take to plan for and minimize these impacts, which are detailed in our report. However, at this time Air Force representatives believe the best approach to mitigating the risk is to take all reasonable steps to ensure that the current schedule for GPS IIIA is maintained.
Moreover, it is unclear whether the user community knows enough about the potential problem to do something about it. To maximize the benefit of GPS, the delivery of its ground control and user equipment capabilities must be synchronized with the delivery of the satellites so that the full spectrum of military assets and individual users can take advantage of new capabilities. This is a challenging endeavor for GPS as it involves installing GPS equipment on board a wide range of ships, aircraft, missiles, and other weapon systems. Our review found that because of funding shifts and diffuse leadership, the Air Force has not been successful in synchronizing the satellite, ground control, and user equipment segments. As a result of the poor synchronization, new GPS capabilities may be delivered in space for years before military users can take advantage of them. The Air Force used funding set aside for the ground control and user equipment segments to resolve GPS satellite development problems, causing a delay in the delivery of new GPS capabilities. For example, in 2005 the Air Force began launching its GPS IIR-M satellites, which broadcast a second civil signal. However, the ground control segment will not be able to make the second civil signal operational until late 2012 or 2013—7 years later. Likewise, a modernized military signal designed to improve the jamming resistance of GPS will be available for operations on GPS satellites over a decade before user equipment able to take full advantage of it will be fielded. Because leadership for acquisitions across the space community is fragmented, there is no single authority responsible for synchronizing all segments related to GPS.
The responsibility for developing and acquiring GPS satellites and associated ground control segments and for acquiring and producing user equipment for selected platforms for space, air, ground, and maritime environments falls under the Air Force’s Space and Missile Systems Center. On the other hand, responsibility for acquiring and producing user equipment for all other platforms falls on the military services. GPS has produced dramatic improvements both for the United States and globally. Ensuring that it can continue to do so is extremely challenging given competing interests, the span of government and commercial organizations involved with GPS, and the criticality of GPS to national and homeland security and the economy. On the one hand, DOD must ensure that military requirements receive top priority and the program stays executable. In doing so, it must ensure that the program is not encumbered by requirements that could disrupt development, design, and production of satellites. On the other hand, there are clearly other enhancements that could be made to GPS satellites that could serve a variety of vital missions—particularly because of the coverage GPS satellites provide—and there is an expressed desire for GPS to serve as the world’s preeminent positioning, navigation, and timing system. In addition, while the United States is challenged to deliver GPS on a tight schedule, other countries are designing and developing systems that provide the same or enhanced capabilities. Ensuring that these capabilities can be leveraged without compromising national security or the preeminence of GPS is also a delicate balancing act that requires close cooperation between DOD, the Department of State, and other institutions. Because of the scale and number of organizations involved in maximizing GPS, we did not undertake a full-scale review of the requirements and coordination processes. 
However, we reviewed documents supporting these processes and interviewed a variety of officials to obtain views on their effectiveness. While there is a consensus that DOD and other federal organizations involved with GPS have taken prudent steps to manage requirements and optimize GPS use, we also identified challenges in the areas of ensuring civilian requirements can be met and ensuring that GPS is compatible with other new, potentially competing global space-based positioning, navigation, and timing systems. According to the civil agencies that have proposed GPS requirements, the formal requirements approval process is confusing, time-consuming, and difficult to manage. Regarding the international community, while the U.S. government has engaged a number of other countries and international organizations in cooperative discussions, only one legally binding agreement has been established. GPS has enabled transformations in military and other government operations and has become part of the critical infrastructure serving national and international communities. Clearly, the United States cannot afford to see its GPS capabilities decrease below its requirements, and optimally, GPS should remain preeminent. Over the past decade, however, the program has experienced cost increases and schedule delays, and though the Air Force is making a concerted effort to address acquisition problems, there is still considerable risk that satellites will not be delivered on time and that there will be gaps in capability. As such, we concluded in our review that focused attention and oversight are needed to ensure the program stays on track and is adequately resourced, that unanticipated problems are quickly discovered and resolved, and that all communities involved with GPS are aware of and positioned to address potential gaps in service. But this is difficult to achieve given diffuse responsibility for the GPS acquisition program.
Importantly, several recent congressional studies have found that authority and responsibilities for military space and intelligence programs are scattered across the staffs of various DOD organizations and the Intelligence Community, and that this is contributing to difficulties on all major space programs in meeting their schedules. The problem is more acute with GPS because of the range of organizations involved in the program. As mentioned earlier, because different military services are involved in developing and installing equipment onto the weapon systems they operate, there are separate budget, management, oversight, and leadership structures over the user segments. And while there have been various recommendations to accelerate the fielding of military user equipment, this has been difficult to do partially because the program office is experiencing technical issues. We recommended that the Secretary of Defense appoint a single authority to oversee the development of the GPS system, including space, ground control, and user equipment assets, to ensure that the program is well executed and resourced and that potential disruptions are minimized. The appointee should have authority to ensure space, ground control, and user equipment are synchronized to the maximum extent practicable; and coordinate with the existing positioning, navigation, and timing infrastructure to assess and minimize potential service disruptions in the event that the satellite constellation were to decrease in size for an extended period of time. Given the importance of GPS to the civil community, we also recommended that the secretaries of Defense and Transportation, as the co-chairs of the National Executive Committee for Space-Based Positioning, Navigation and Timing, address, if weaknesses are found, civil agency concerns for developing requirements and determine mechanisms for improving collaboration and decision making and strengthening civil agency participation. 
In responding to our report, DOD concurred with our recommendations, and stated that it recognized the importance of centralizing authority to oversee the continuing synchronized evolution of the GPS and that it will continue to seek ways to improve civil agency understanding of the DOD requirements process and work to strengthen civil agency participation. We continue to believe that DOD should consider an approach that enables a single individual to make resource decisions and maintain visibility over progress, and that it should establish a means by which progress in developing the satellites and ground equipment receives attention from the highest levels of leadership, that is, the Defense Secretary and perhaps the National Security Council, given the criticality of GPS to the warfighter and the nation, and the risks associated with not meeting schedule goals. In addition, as DOD undertakes efforts to inform and educate civil agencies on the requirements process, we encourage it to take a more active role in directly communicating with civil agencies to more precisely identify concerns or weaknesses in the requirements process. Mr. Chairman, this concludes my statement. I will be happy to answer any questions that you or other Members of the Subcommittee have at this time. To assess the acquisition of satellite, ground control, and user equipment, we interviewed Office of the Secretary of Defense (OSD) and Department of Defense (DOD) officials from offices that manage and oversee the Global Positioning System (GPS) program. We also reviewed and analyzed program plans and documentation related to cost, schedule, requirements, program direction, and satellite constellation sustainment, and compared programmatic data to GAO’s criteria compiled over the last 12 years for best practices in system development.
We also conducted our own analysis, based on data provided by the Air Force, to assess the implications of potential schedule delays we identified in our assessment of the satellite acquisition. To assess coordination among federal agencies and the broader GPS community, we interviewed OSD and DOD officials from offices that manage and oversee the GPS program, officials from the military services, officials from civil departments and agencies, and officials at the U.S. Department of State and at various European space organizations. We also analyzed how civil departments and agencies coordinate with DOD on GPS civil requirements, and how the U.S. government coordinates with foreign countries. We conducted this performance audit from October 2007 to April 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. For further information, please contact Cristina Chaplain at (202) 512-4841 or [email protected]. Individuals making contributions to this testimony include Art Gallegos, Greg Campbell, Maria Durant, Laura Hook, Sigrid McGinty, Jay Tallon, and Alyssa Weir. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Global Positioning System (GPS), which provides position, navigation, and timing data to users worldwide, has become essential to U.S. national security and a key tool in an expanding array of public service and commercial applications at home and abroad. The United States provides GPS data free of charge. The Air Force, which is responsible for GPS acquisition, is in the process of modernizing GPS. In light of the importance of GPS, the modernization effort, and international efforts to develop new systems, GAO was asked to undertake a broad review of GPS. Specifically, GAO assessed progress in (1) acquiring GPS satellites, (2) acquiring the ground control and user equipment necessary to leverage GPS satellite capabilities, and evaluated (3) coordination among federal agencies and other organizations to ensure GPS missions can be accomplished. To carry out this assessment, GAO's efforts included reviewing and analyzing program documentation, conducting its own analysis of Air Force satellite data, and interviewing key officials. It is uncertain whether the Air Force will be able to acquire new satellites in time to maintain current GPS service without interruption. If not, some military operations and some civilian users could be adversely affected. (1) In recent years, the Air Force has struggled to successfully build GPS satellites within cost and schedule goals; it encountered significant technical problems that still threaten its delivery schedule; and it struggled with a different contractor. As a result, the current IIF satellite program has overrun its original cost estimate by about $870 million and the launch of its first satellite has been delayed to November 2009--almost 3 years late. (2) Further, while the Air Force is structuring the new GPS IIIA program to prevent mistakes made on the IIF program, the Air Force is aiming to deploy the next generation of GPS satellites 3 years faster than the IIF satellites. 
GAO's analysis found that this schedule is optimistic, given the program's late start, past trends in space acquisitions, and challenges facing the new contractor. Of particular concern is leadership for GPS acquisition, as GAO and other studies have found the lack of a single point of authority for space programs and frequent turnover in program managers have hampered requirements setting, funding stability, and resource allocation. (3) If the Air Force does not meet its schedule goals for development of GPS IIIA satellites, there will be an increased likelihood that in 2010, as old satellites begin to fail, the overall GPS constellation will fall below the number of satellites required to provide the level of GPS service that the U.S. government commits to. Such a gap in capability could have wide-ranging impacts on all GPS users, though there are measures the Air Force and others can take to plan for and minimize these impacts. In addition to risks facing the acquisition of new GPS satellites, the Air Force has not been fully successful in synchronizing the acquisition and development of the next generation of GPS satellites with the ground control and user equipment, thereby delaying the ability of military users to fully utilize new GPS satellite capabilities. Diffuse leadership has been a contributing factor, given that there is no single authority responsible for synchronizing all procurements and fielding related to GPS, and funding has been diverted from ground programs to pay for problems in the space segment. DOD and others involved in ensuring GPS can serve communities beyond the military have taken prudent steps to manage requirements and coordinate among the many organizations involved with GPS. However, GAO identified challenges in the areas of ensuring civilian requirements can be met and ensuring GPS compatibility with other new, potentially competing global space-based positioning, navigation, and timing systems.
Energy oversees a nationwide network of 40 contractor-operated industrial sites and research laboratories that have historically employed more than 600,000 workers in the production and testing of nuclear weapons. In implementing EEOICPA, the President acknowledged that it had been Energy’s past policy to encourage and assist its contractors in opposing workers’ claims for state workers’ compensation benefits based on illnesses said to be caused by exposure to toxic substances at Energy facilities. Under the new law, workers or their survivors could apply for assistance from Energy in pursuing state workers’ compensation benefits, and if they received a positive determination from Energy, the agency would direct its contractors to not contest the workers’ compensation claims or awards. Energy’s rules to implement the new program became effective in September 2002, and the agency began to process the applications it had been accepting since July 2001, when the law took effect. Energy’s claims process has several steps, as shown in Figure 1. First, claimants file applications and provide all available medical evidence. Energy then develops the claims by requesting records of employment, medical treatment, and exposure to toxic substances from the Energy facilities at which the workers were employed. If Energy determines that the worker was not employed by one of its facilities or did not have an illness that could be caused by exposure to toxic substances, the agency finds the claimant ineligible. For all others, once development is complete, a panel of three physicians reviews the case and decides whether exposure to a toxic substance during employment at an Energy facility was at least as likely as not to have caused, contributed to, or aggravated the claimed medical condition. The panel physicians are appointed by the National Institute for Occupational Safety and Health (NIOSH) but paid by Energy for this work. 
Claimants receiving positive determinations are advised that they may wish to file claims for state workers’ compensation benefits. Claimants found ineligible or receiving negative determinations may appeal to Energy’s Office of Hearings and Appeals. Each of the 50 states and the District of Columbia has its own workers’ compensation program to provide benefits to workers who are injured on the job or contract a work-related illness. Benefits include medical treatment and cash payments that partially replace lost wages. Collectively, these state programs paid more than $46 billion in cash and medical benefits in 2001. In general, employers finance workers’ compensation programs. Depending on state law, employers finance these programs through one of three methods: (1) they pay insurance premiums to a private insurance carrier, (2) they contribute to a state workers’ compensation fund, or (3) they set funds aside for this purpose as self-insurance. Although state workers’ compensation laws were enacted in part as an attempt to avoid litigation over workplace accidents, the workers’ compensation process is still generally adversarial, with employers and their insurers tending to challenge aspects of claims that they consider not valid. State workers’ compensation programs vary as to the level of benefits, length of payments, and time limits for filing. For example, in 1999, the maximum weekly benefit for a total disability in New Mexico was less than $400, while in Iowa it was approximately $950. In addition, in Idaho, the weekly benefit for total disability would be reduced after 52 weeks, while in Iowa benefits would continue at the original rate for the duration of the disability. Further, in Tennessee, a claim must be filed within 1 year of the beginning of incapacity or death. In Kentucky, however, a claim must be filed within 3 years of exposure to most substances, but within 20 years of exposure to radiation or asbestos.
As of June 30, 2003, Energy had completely processed about 6 percent of the nearly 19,000 cases that had been filed, and the majority of all cases filed were associated with facilities in nine states. Forty percent of cases were in processing, but more than 50 percent remained unprocessed. While some case characteristics can be determined, such as illness claimed, systems limitations prevent reporting on other case characteristics, such as the reasons for ineligibility or basic demographics. During the first 2 years of the program, ending June 30, 2003, Energy had fully processed about 6 percent of the nearly 19,000 claims it received. The majority of these claims had been found ineligible due to either a lack of employment at an eligible facility or the lack of an illness related to toxic exposure. Of the cases that had been fully processed, 42 cases—less than one-third of one percent of the nearly 19,000 cases filed—had a final determination from a physician panel. More than two-thirds of these determinations (30 cases) were positive. At the time of our study, Energy had not yet begun processing more than half of the cases, and an additional 40 percent of cases were in processing (see fig. 2). The majority of cases being processed were in the case development stage, where Energy requests information from the facility at which the claimant was employed. Fewer than 1 percent of cases in process were ready for physician panel review and an additional 1 percent were under panel review. A majority of cases were filed early during program implementation, but new cases continue to be filed. Nearly two-thirds of cases were filed within the first year of the program, between July 2001 and June 2002. However, in the second year of the program—between July 2002 and June 30, 2003—Energy continued to receive more than 500 cases per month. Energy officials report that they currently receive approximately 100 new cases per week.
While cases filed are associated with facilities in 38 states or territories, the majority of cases are associated with Energy facilities in nine states (see fig. 3). Facilities in Colorado, Idaho, Iowa, Kentucky, New Mexico, Ohio, South Carolina, Tennessee, and Washington account for more than 75 percent of cases received by June 30, 2003. The largest group of cases is associated with facilities in Tennessee. Workers filed the majority of cases, and cancer is the most frequently reported illness. Workers filed about 60 percent of cases, and survivors of deceased workers filed about 36 percent of cases. In about 1 percent of cases, a worker filed a claim that was subsequently taken up by a survivor. Cancer is the illness reported in more than half of the cases. Diseases affecting the lungs accounted for an additional 14 percent of cases. Specifically, chronic beryllium disease is reported in 1 percent of cases, and beryllium sensitivity, which may develop into chronic beryllium disease, is reported in an additional 5 percent. About 7 percent of cases report asbestosis, and less than 1 percent claimed silicosis. Systems limitations prevent Energy officials from aggregating certain information important for program management. For example, the case management system does not collect information on the reasons that claimants had been declared ineligible or whether claimants have appealed decisions. Systematic tracking of the reasons for ineligibility would make it possible to identify other cases affected by appeal decisions that result in policy changes. While Energy officials report that during the major systems changes that occurred in July 2003, fields were added to the system to track appeals information, no information is yet available regarding ineligibility decisions. In addition, basic demographic data such as age and gender of claimants are not available. Gender information was not collected for the majority of cases. 
Further, insufficient edit controls (for example, error checking that would prevent claimants’ dates of birth from being entered if the date was in the future) prevent accurate reporting on claimants’ ages. Insufficient strategic planning regarding data collection and tracking has made it difficult for Energy officials to completely track case progress and determine whether they are meeting the goals they have established for case processing. For example, Energy established a goal of completing case development within 120 days of case assignment to a case manager. However, the data system developed by contractors to aid in case management was developed without detailed specifications from Energy and did not originally collect sufficient information to track Energy’s progress in meeting this 120-day goal. Furthermore, status tracking has been complicated by changes to the system and failure to consistently update status as cases progress. While Energy reports that changes made as of July 2003 should allow for improved tracking of case status, it is unclear whether these changes will be applied retroactively to status data already in the system. If they are not, Energy will still lack complete data regarding case processing milestones achieved prior to these changes. Our analysis shows that a majority of cases associated with major Energy facilities in nine states will potentially have a willing payer of workers’ compensation benefits. This finding reflects the number of cases for which contractors and their insurers are likely to not contest a workers’ compensation claim, rather than the number of cases that will ultimately be paid. The contractors considered to be willing payers are those that have an order from, or agreement with, Energy to not contest claims. However, there are likely to be many claimants who will not have a willing payer in certain states, such as Ohio and Iowa.
For all claimants, additional factors such as state workers’ compensation provisions or contractors’ uncertainty on how to compute the benefit may affect whether or how much compensation is paid. A majority of cases in nine states will potentially have a willing payer of workers’ compensation benefits, assuming that for all cases there has been a positive physician panel determination and the claimant can demonstrate a loss from the worker’s illness that has not previously been compensated. Specifically, based on our analysis of workers’ compensation programs and the different types of workers’ compensation coverage used by the major contractors, it appears that approximately 86 percent of these cases will potentially have a willing payer—that is, contractors and their insurers who will not contest the claims for benefits. It was necessary to assume that all cases filed would receive a positive determination by a physician panel because sufficient data are not available to project the outcomes of the physician panel process. More specifically, there are indications that the few cases that have received determinations from physician panels may not be representative of all cases filed, and sufficient details on workers’ medical conditions were not available to enable us to independently judge the potential outcomes. In addition, we assumed that all workers experienced a loss that was not previously compensated because sufficient data were not available to enable us to make more detailed projections on this issue. As shown in table 1, most of the contractors for the major facilities in these states are self-insured, which enables Energy to direct them to not contest claims that receive a positive medical determination. In addition, the contractor in Colorado, which is not self-insured but has a commercial policy, took the initiative to enter into an agreement with Energy to not contest claims.
The contractor viewed this action as being in its best interest to help the program run smoothly. However, it is unclear whether the arrangement will be effective because no cases in Colorado have yet received compensation. In such situations where there is a willing payer, the contractor’s action to pay the compensation consistent with Energy’s order to not contest a claim will override state workers’ compensation provisions that might otherwise result in denial of a claim, such as failure to file a claim within a specified period of time. However, since no claimants to date have received compensation as a result of their cases filed with Energy, there is no actual experience about how contractors and state workers’ compensation programs treat such cases. About 14 percent of cases in the nine states we analyzed may not have a willing payer. Therefore, in some instances these cases may be less likely to receive compensation than a comparable case for which there is a willing payer, unless the claimant is able to overcome challenges to the claim. Specifically, these cases that lack willing payers involve contractors that (1) have a commercial insurance policy, (2) use a state fund to pay workers’ compensation claims, or (3) do not have a current contract with Energy. In each of these situations, Energy maintains that it lacks the authority to make or enforce an order to not contest claims. For instance, an Ohio Bureau of Workers’ Compensation official said that the state would not automatically approve a case, but would evaluate each workers’ compensation case carefully to ensure that it was valid, and thereby protect its state fund. Concerns about the extent to which there will be willing payers of benefits have led to various proposals for addressing this issue. For example, the state of Ohio proposed that Energy designate the state as a contractor to provide a mechanism for reimbursing the state for paying the workers’ compensation claims. 
However, Energy rejected this proposal on the grounds that EEOICPA does not authorize the agency to establish such an arrangement. In a more wide-ranging proposal, legislation introduced in this Congress proposes to establish Subtitle D as a federal program with uniform benefits administered by the Department of Labor. In contrast to Subtitle B provisions that provide for a uniform federal benefit that is not affected by the degree of disability, various factors may affect whether a Subtitle D claimant is paid under the state workers’ compensation program, or how much compensation will be paid. Beyond the differences in the state programs that may result in varying amounts and length of payments, these factors include the demonstration of a loss resulting from the illness and contractors’ uncertainty on how to compute compensation. Even with a positive determination from a physician panel and a willing payer, claimants who cannot demonstrate a loss, such as loss of wages or medical expenses, may not qualify for compensation. On the other hand, claimants with positive determinations but not a willing payer may still qualify for compensation under the state program if they show a loss and can overcome all challenges to the claim raised by the employer or the insurer. Contractors’ uncertainty on how to compute compensation may also cause variation in whether or how much a claimant will receive in compensation. While contractors with self-insurance told us that they plan to comply with Energy’s directives to not contest cases with positive determinations, some contractors were unclear about how to actually determine the amount of compensation that a claimant will receive. For example, one contractor raised a concern that no guidance exists to inform them about whether they can negotiate the degree of disability, a factor that could affect the amount of the workers’ compensation benefit. 
Other contractors will likely experience similar situations, as Energy has not issued guidance on how to consistently compute compensation amounts. While not directly affecting compensation amounts, a related issue involves how contractors will be reimbursed for claims they pay. Energy uses several different types of contracts to carry out its mission, such as operations or cleanup, and these different types of contracts impact how workers’ compensation claims will be paid. For example, a contractor responsible for managing and operating an Energy facility was told to pay the workers’ compensation claims from its operating budget. The contractor said that this procedure may compromise its ability to conduct its primary responsibilities. On the other hand, a contractor cleaning up an Energy facility was told by Energy officials that its workers’ compensation claims would be reimbursed under its contract, and therefore paying claims would not affect its ability to perform cleanup of the site. As a result of Energy’s policies and procedures for processing claims, claimants have experienced lengthy delays in receiving the determinations they need to file workers’ compensation claims. In particular, the number of cases developed during initial case processing has not always been sufficient to allow the physician panels to operate at full capacity. Moreover, even if these panels were operating at full capacity, the small pool of physicians qualified to serve on the panels would limit the agency’s ability to produce more timely determinations. Energy has recently allocated more funds for staffing for case processing, but is still exploring methods for improving the efficiency of its physician panel process. Energy’s case development process has not consistently produced enough cases to ensure that the physician panels are functioning at full capacity. 
To make efficient use of physician panel resources, it is important to ensure that a sufficient supply of cases is ready for physician panel review. Energy officials established a goal of completing the development of 100 cases per week by August 2003 to keep the panels fully engaged. However, as of September 2003, Energy officials stated that the agency was completing development on only about 40 cases a week. Further, while agency officials indicated that they typically assigned 3 cases at a time to be reviewed within 30 days, several panel physicians indicated that they received fewer cases, some receiving a total of only 7 or 8 during their first year as panelists. Energy was slow to implement its case development operation. Initially, agency officials did not have a plan to hire a specific number of employees for case development, but they expected to hire additional staff as needed. When Energy first began developing cases, in the fall of 2002, the case development process had a staff of about 14 case managers and assistants. With modest staffing increases, the program quickly outgrew the office space used for this function. Though Energy officials acknowledged the need for more personnel by spring 2003, they delayed hiring until additional space could be secured in August. As of August 2003, Energy had more than tripled the number of employees dedicated to case development, to about 50, and Energy officials believe that they will now be able to achieve their goal of completing development of 100 cases a week that will be ready for physician panel review. Energy officials cited a substantial increase in the number of cases ready for physician panel review during October 2003, and reported preparing more than 100 cases for panel review in the first week of November 2003. 
Energy shifted nearly $10 million from other Energy accounts into this program in fiscal year 2003, and plans to shift an additional $33 million into the program in fiscal year 2004, to quadruple its case-processing operation. With additional resources, Energy plans to complete the development of all pending cases as quickly as possible and have them ready for the physician panels. However, this would create a large backlog of cases awaiting review by physician panels. Because most claims filed so far are from workers whose medical conditions are likely to change over time, creation of such a backlog could further slow the decision process by making it necessary to update medical records before panel review. Even if additional resources allow Energy to speed initial case development, the limited pool of qualified physicians for panels will likely prevent significant improvements in processing time. Currently, approximately 100 physicians are assigned to panels of 3 physicians. In an effort to improve overall processing time, Energy has requested that NIOSH appoint an additional 500 physicians to staff the panels. NIOSH has indicated that the pool of physicians with the appropriate credentials and experience (including those already appointed) may be limited to about 200. Even if Energy were able to increase the number of panel physicians to 200, with each panel reviewing 3 cases a month, the panels would not be able to review more than 200 cases in any 30-day period given current procedures. Thus, even with double the number of physicians currently serving on panels, it would take more than 7 years to process all cases pending as of June 30, 2003, without consideration of the hundreds of new cases the agency is receiving each month. Energy officials are exploring ways that the panel process could be made more efficient. For example, the agency is currently planning to establish permanent physician panels in Washington, DC. 
Physicians who are willing to serve full-time for a 2- or 3-week period would staff these panels. In addition, the agency is considering reducing the number of physicians serving on each panel—for example, initially using one physician to review a case, assigning a second physician only if the first reaches a negative determination, and assigning a third physician if needed to break a tie. Energy staff are currently evaluating whether such a change would require amending the agency’s regulations. Agency officials have also recommended additional sources from which NIOSH might recruit qualified physicians and are exploring other potential sources. For example, physicians in the military services might be used on a part-time basis. In addition, physicians from the Public Health Service serve on temporary full-time details as panel physicians. Panel physicians have also suggested methods to Energy for improving the efficiency of the panels. For example, some physicians have stated that more complete profiles of the types and locations of specific toxic substances at each facility would speed their ability to decide cases. In addition, one panel physician told us that one of the cases he reviewed received a negative determination because specific documentation of toxic substances at the worker’s location was lacking. While Energy officials reported that they have completed facility overviews for about half the major sites, specific data are available for only a few sites. Agency officials said that the scarcity of records related to toxic substances and a lack of sufficient resources constrain their ability to pursue building-by-building profiles for each facility. Mr. Chairman, this completes my prepared statement. I would be happy to respond to any questions you or other Members of the Committee may have at this time. For information regarding this testimony, please contact Robert E. 
Robertson, Director, or Andrew Sherrill, Assistant Director, Education, Workforce, and Income Security at (202) 512-7215. Individuals making contributions to this testimony include Amy E. Buck, Melinda L. Cordero, Beverly Crawford, Patrick DiBattista, Corinna A. Nicolaou, Mary Nugent, and Rosemary Torres Lerma. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Department of Energy (Energy) and its predecessor agencies and contractors have employed thousands of workers in the nuclear weapons production complex. Some employees were exposed to toxic substances, including radioactive and hazardous materials, during this work and many subsequently developed illnesses. Subtitle D of the Energy Employees Occupational Illness Compensation Program Act of 2000 allows Energy to help its contractor employees file state workers' compensation claims for illnesses determined by a panel of physicians to be caused by exposure to toxic substances in the course of employment at an Energy facility. Energy began accepting applications under this program in July 2001, but did not begin processing them until its final regulations became effective on September 13, 2002. The Congress mandated that GAO study the effectiveness of the benefit program under Subtitle D of this Act. This testimony is based on GAO's ongoing work on this issue and focuses on three key areas: (1) the number, status, and characteristics of claims filed with Energy; (2) the extent to which there will be a "willing payer" of workers' compensation benefits, that is, an insurer who--by order from, or agreement with Energy--will not contest these claims; and (3) the extent to which Energy policies and procedures help employees file timely claims for these state benefits. As of June 30, 2003, Energy had completely processed only about 6 percent of the nearly 19,000 cases it had received. More than three-quarters of all cases were associated with facilities in nine states. Processing had not begun on over half of the cases and, of the remaining 40 percent of cases that were in processing, almost all were in the initial case development stage. While the majority of cases (86 percent) associated with major Energy facilities in nine states potentially have a willing payer of workers' compensation benefits, actual compensation is not certain. 
This figure is based primarily on the method of workers’ compensation coverage used by Energy contractor employers and is not an estimate of the number of cases that will ultimately be paid. Since no claimants to date have received compensation as a result of their cases filed with Energy, there is no actual experience showing how contractors and state programs treat such claims. Claimants have been delayed in filing for state workers’ compensation benefits because of two bottlenecks in Energy’s claims process. First, the case development process has not always produced sufficient cases to allow the panels of physicians who determine whether the worker’s illness was caused by exposure to toxic substances to operate at full capacity. While additional resources may allow Energy to move sufficient cases through its case development process, the physician panel process will continue to be a second, more important, bottleneck. The number of panels, constrained by the scarcity of physicians qualified to serve on panels, will limit Energy’s capacity to decide cases more quickly, using its current procedures. Energy officials are exploring ways that the panel process could be more efficient.
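The capacity constraint described above can be checked with a quick back-of-the-envelope calculation using figures cited in this testimony (200 panel physicians, panels of 3, and 3 cases per panel per 30-day period); the roughly 18,000-case backlog is an assumption derived from the nearly 19,000 cases received and the approximately 6 percent fully processed as of June 30, 2003:

```python
# Back-of-the-envelope check of the physician panel throughput figures
# cited in the testimony. The ~18,000 pending-case figure is an
# assumption derived from ~19,000 cases received and ~6 percent
# completed as of June 30, 2003.
physicians = 200            # expanded physician pool Energy requested
panel_size = 3              # physicians per panel
cases_per_panel_month = 3   # cases each panel reviews per 30-day period
pending_cases = 18_000      # assumed backlog (see note above)

panels = physicians // panel_size                  # ~66 panels
cases_per_month = panels * cases_per_panel_month   # ~198 cases per month
years_to_clear = pending_cases / cases_per_month / 12

print(f"{panels} panels, ~{cases_per_month} cases/month, "
      f"~{years_to_clear:.1f} years to clear the backlog")
```

Even under these generous assumptions, and ignoring the hundreds of new cases arriving each month, the arithmetic is consistent with the more-than-7-years estimate in the testimony.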
Neither federal nor state regulators collected comprehensive data on the uses and prevalence of business-owned life insurance. Although no comprehensive data were available on the uses of such policies, businesses may purchase life insurance to ensure recovery of losses in the event of the untimely death of key employees and to fund pre- and postretirement employee benefits. Accounting standards require that the future costs of postretirement benefit plans be recorded as liabilities at their present value on current financial statements. The accounting standards do not require that such liabilities be directly offset with specified assets. However, businesses may choose to fund such future costs using life insurance, thereby becoming eligible for tax-free policy earnings and tax-free death benefit payments on the policies. When businesses use nonqualified plans to provide postretirement benefits, they avoid the funding and other restrictions of tax-preferred qualified plans, while retaining control over the plan assets. Federal bank regulators did not collect comprehensive data on the uses and prevalence of business-owned life insurance by banks and thrifts, although they collected some financial information on such policies as part of monitoring the safety and soundness of individual institutions. Regulatory officials said that they collect this information to support their supervision of individual institutions. For supervisory purposes, banks and thrifts are only required to disclose the cash surrender value of business-owned life insurance and earnings from these policies in their quarterly financial reports to the regulators if the amounts exceed certain thresholds. 
For example, the Federal Deposit Insurance Corporation (FDIC), Federal Reserve Board, and Office of the Comptroller of the Currency (OCC) require the institutions they regulate to disclose the cash surrender value of policies worth more than $25,000 in aggregate and that exceed 25 percent of “other assets,” which include such items as repossessed personal property. The Office of Thrift Supervision (OTS) requires the thrifts it supervises to report the cash surrender value of policies if the value is one of the three largest components of “other assets.” In addition to the banks and thrifts that meet a disclosure threshold, other institutions sometimes voluntarily provide data on their business-owned life insurance policies. Our preliminary results indicated that about one-third of banks and thrifts, including many of the largest institutions, disclosed the value of their business-owned life insurance holdings as of December 31, 2002, either voluntarily or because they met the reporting threshold. The remaining two-thirds either did not meet the reporting threshold or did not own business-owned life insurance. We found that 3,209 banks and thrifts (34 percent of all institutions) reported the cash surrender value of their policies at $56.3 billion. Twenty-three of the top 50 banks and thrifts— ranked by total assets—reported owning policies worth $36.9 billion, or 66 percent of the reported total of all banks and thrifts. Overall, 259 large banks and thrifts—those with assets of $1 billion or more, including those among the top 50—held 88 percent, or $49.4 billion, of the total reported cash surrender value of business-owned life insurance. The quarterly reports that commercial banks and FDIC-supervised thrifts submitted did not require them to categorize business-owned life insurance policies according to their intended use. 
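The compound disclosure threshold described above for FDIC-, Federal Reserve-, and OCC-regulated institutions can be illustrated with a simple check (an illustrative simplification; the function and parameter names are assumptions for this sketch, not actual report fields or the regulators' exact rule):

```python
# Illustrative sketch of the quarterly-report disclosure threshold
# described in the testimony: the cash surrender value (CSV) of
# business-owned life insurance is reported if it exceeds $25,000
# in aggregate AND exceeds 25 percent of "other assets".
# Names and structure are assumptions, not actual reporting fields.
def must_disclose_csv(csv_aggregate: float, other_assets: float) -> bool:
    return csv_aggregate > 25_000 and csv_aggregate > 0.25 * other_assets

# A bank with $2 million in CSV against $6 million of "other assets"
# crosses both parts of the threshold:
print(must_disclose_csv(2_000_000, 6_000_000))   # True
# A small holding below the $25,000 floor is not reportable:
print(must_disclose_csv(20_000, 40_000))         # False
```

Because both conditions must hold, an institution with a large "other assets" balance could hold substantial policies without crossing the threshold, which helps explain why roughly two-thirds of institutions reported nothing.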
OTS-supervised thrifts, in contrast, were required to report the value of their key-person policies and the value of business-owned life insurance policies held for other purposes as separate items, if they met the reporting threshold. However, since the disclosure threshold applied separately to the two categories, OTS-supervised thrifts could be required to report on only one type of policy, rather than the total value of their business-owned life insurance holdings, even if they held both key-person and other policies. According to SEC, agency regulations do not specifically require public companies to disclose the value or uses of business-owned life insurance in the financial statements submitted to the agency. The federal securities laws that SEC administers are designed to protect investors by requiring public companies to disclose information that is “material” to investors in their financial statements—that is, according to SEC, information that an investor would consider important in deciding whether to buy or sell a security or in making a voting decision related to a security that the investor owns. SEC officials said that for most companies, business-owned life insurance holdings are not likely to be material to the company’s financial results, and therefore would not be subject to SEC reporting requirements. IRS officials told us that the agency has not collected comprehensive information on the value of or income from business-owned life insurance policies, and agency officials said that they do not need this information. Specifically, businesses are generally not required to include the earnings or death benefits from business-owned life insurance in their taxable income. Businesses that are subject to the alternative minimum tax include income from death benefits and earnings from insurance when calculating the tax, but they are not required to list the insurance-related values or the uses of the policies on the alternative minimum tax form. 
Also, businesses that are required to complete Schedule M-1, Reconciliation of Income (Loss) per Books with Income per Return, as part of their Form 1120, U.S. Corporation Income Tax Return, would report earnings on business-owned life insurance as part of the income recorded on their books but not on the tax return. However, according to IRS officials, these earnings might not be separately identified as they are often “lumped” with other adjustments. State insurance regulators, concerned with state requirements, rates, and solvency issues, have collected extensive financial information from insurers, but not at the level of detail that would describe the uses or prevalence of business-owned life insurance policies. State insurance regulators use insurers’ financial statements to monitor individual companies’ solvency, and aggregate information on business-owned life insurance has not, in state regulators’ views, been necessary for such monitoring. Insurers’ financial statements list the number of all policies in force and premiums collected during the reporting period, but broken out only by individual and group policies, not by whether businesses or individuals owned the policies. In an effort to compile more comprehensive data on business-owned life insurance, we worked with the representatives of six insurance companies and the American Council of Life Insurers (ACLI) to develop a survey of the uses and prevalence of business-owned life insurance sales. Although the insurance companies’ representatives cooperated in a pretest of the survey, and ACLI representatives said that they would encourage their members to participate in the survey itself, the results of the pretest led us to conclude that we would not be able to obtain sufficiently reliable data to allow us to conduct the survey. These representatives told us that they do not have a business need to maintain the comprehensive data on business-owned life insurance that we needed for the survey. 
They said that insurers do not routinely summarize information on the numbers of policies and insured individuals, cash surrender value of policies, and uses of business-owned life insurance. They explained that various factors made it difficult to obtain summary information, including that individual businesses may own multiple policies; that the same individuals may be insured under multiple policies; and that when purchasing policies, businesses may state multiple policy uses or policy uses may change over time. They also explained that extensive efforts would be required for insurance companies to obtain information from their computer systems and, in some cases, paper files to identify business-owned policies on employees where the business is also the beneficiary. Our preliminary review of the financial statements of 32 life insurance companies that filed 10-K annual reports with SEC and that were among the 50 largest such companies ranked by assets, disclosed some information on business-owned life insurance. Although SEC did not require insurance companies to identify business-owned life insurance sales in their annual statements to the agency, nine insurers reported over $3 billion in business-owned life insurance premiums from 2002 sales. Five of the insurance companies also reported that total premiums from 2002 business-owned life insurance premiums ranged from 10 to 53 percent of each company’s 2002 total life insurance sales premiums. In addition, three insurance companies reported the value of their business-owned life insurance assets as totaling about $28 billion as of December 31, 2002. Insurance companies have also reported business-owned life insurance sales in response to industry surveys. 
CAST Management Consultants, Inc., conducts research on business-owned life insurance and, in a summary report, estimated 2002 annual business-owned life insurance premiums of $2.1 billion, based on the survey responses of 20 insurance carriers increased by CAST adjustments. CAST representatives declined to provide us any information about the complete survey, which is available only to “qualified market participants.” We could not, therefore, determine whether CAST was able to collect the information we sought to obtain by conducting our own survey. In addition, a representative of the A.M. Best insurer rating company said that the company collects information on business-owned life insurance, but does not currently report the data. A.M. Best reported aggregate premiums from business-owned life insurance for 1998 (the last year for which it reported data) as more than $10 billion for 20 large insurers. Some businesses included anecdotal information about how they intended to use business-owned life insurance in the annual financial statements they filed with SEC. Our preliminary analysis of 100 randomly selected Fortune 1000 public companies’ financial statements filed with SEC showed that 15 of the selected businesses referred to owning such policies, including 11 that provided information about their intended uses of the policies. The most commonly cited use of business-owned life insurance was to fund deferred executive compensation. One business reported using policies to help fund postretirement health care benefits, and another reported using the policies to help fund an employee benefit plan for management employees as well as executives. Some businesses have also provided survey responses on their uses of business-owned life insurance to fund executive benefit plans. 
Clark/Bardes Consulting conducts an annual executive benefits survey and reports on the uses of business-owned life insurance by companies to fund nonqualified deferred compensation plans and supplemental executive retirement plans. In the 2002 results from its survey of Fortune 1000 corporations, Clark/Bardes reported that 65 percent of those companies that fund nonqualified deferred compensation plans and 68 percent of those that fund nonqualified supplemental executive retirement plans do so using business-owned life insurance. Finally, the federal government estimated that the current tax exclusion of earnings on the cash value of business-owned life insurance results in over a billion dollars in foregone tax revenues annually—these estimates do not reflect the exclusion of additional income from death benefit payments. In its “Estimates of Federal Tax Expenditures for Fiscal Years 2003-2007,” the Joint Committee on Taxation estimated that the foregone tax revenues resulting from the tax exclusion of investment income on life insurance for corporations would total $7.2 billion for 2003 through 2007. Similarly, the Office of Management and Budget, in its fiscal year 2004 budget “Analytical Perspectives,” estimated foregone tax revenues of $9.3 billion for 2003 through 2007 resulting from the tax exclusion of life insurance. The federal bank regulators, SEC, the IRS, and state insurance regulators had guidelines or requirements applicable to business-owned life insurance but did not identify significant regulatory concerns. The federal bank regulators had guidelines for purchases of business-owned life insurance by banks and thrifts. OCC and OTS guidelines describe the permissible uses of business-owned life insurance and require national banks and OTS-supervised thrifts to perform due diligence before purchasing policies and to maintain effective senior management and board oversight. 
According to agency officials, FDIC and the Federal Reserve Board follow OCC’s guidelines. The guidelines that are common among the regulators state that banks and thrifts can only purchase life insurance for reasons incidental to banking, including key-person insurance, insurance on borrowers, and insurance purchased in connection with employee compensation and benefit plans. Before purchasing policies, a bank’s or thrift’s management must conduct a prepurchase analysis that should, among other things, determine the need for insurance, ensure that the amount of insurance purchased is not excessive in relation to the estimated obligation or risk, and analyze the associated risks and the bank’s or thrift’s ability to monitor and respond to those risks. The guidelines also state that a bank or thrift should consider the size of its purchase of business-owned life insurance relative to the institution’s capital and diversify risks associated with the policies. The guidelines require banks and thrifts to document their decisions and monitor their policies on an ongoing basis. In addition, banks and thrifts using business-owned life insurance for executive compensation should ensure that total compensation is not excessive under regulatory guidelines. The federal bank regulators we spoke with said that their risk-based examination programs target any aspect of banks’ and thrifts’ purchases of business-owned life insurance that would raise supervisory concerns. The regulators monitor institutions’ safety and soundness through their risk- based examinations, which they said assess banks’ and thrifts’ compliance with guidelines on business-owned life insurance on a case-by-case basis. For example, all of the regulators said that if the value of the policies exceeded 25 percent of the regulator’s measure of the institution’s capital, they would consider whether further supervisory review or examination of these holdings was warranted. 
The regulators said that additional review or examination would be likely if the policies were held with one or very few insurers. As of December 31, 2002, 467 banks and thrifts reported business-owned life insurance holdings in excess of 25 percent of their tier 1 capital. We asked the bank regulators to explain their oversight of 58 institutions with the largest concentrations, all in excess of 40 percent of tier 1 capital. Bank regulatory officials said that their agencies were monitoring these institutions’ levels of holdings, had conducted preliminary reviews or detailed examinations, and had concluded that no major supervisory concerns existed. SEC officials said that the agency’s regulations for public companies do not specifically address business-owned life insurance; rather, SEC has relied on its broadly applicable disclosure requirements to surface any investor protection concerns. SEC requires public companies to prepare their financial statements in accordance with generally accepted accounting principles (GAAP), which would require them to disclose information about business-owned life insurance policies when such information is material. According to SEC officials, however, following GAAP would rarely require purchases of and earnings from business-owned life insurance to be shown as separate line items because they typically are not financially material to the company. SEC officials also said that the agency would have an oversight concern if it became aware of a public company’s failure to disclose material purchases of or earnings from business-owned life insurance, or if problems developed in accounting for these policies. However, they said that, to date, such problems have not arisen, and they have not had investor-protection concerns about public companies holding such insurance. The IRS had some requirements related to the tax treatment of business-owned life insurance. 
The Internal Revenue Code defines life insurance for tax purposes and sets out the current limitations on permissible tax deductions that businesses can claim for the interest on policy loans against life insurance policies. Federal laws and IRS regulations have changed some aspects of the tax treatment of business-owned life insurance. While policy owners may access the cash value of their policies by borrowing against them, policy owners’ ability to deduct the interest on such loans was limited by the Tax Reform Act of 1986 and further limited by the Health Insurance Portability and Accountability Act (HIPAA) of 1996, which amended Internal Revenue Code section 264. Before these limitations, some businesses were leveraging their life insurance ownership by borrowing against the policies to pay a substantial portion of the insurance premiums. Known as leveraged business-owned life insurance, these arrangements created situations where businesses incurred a tax-deductible interest expense while realizing tax-free investment returns. Various sources have reported that HIPAA curtailed new sales of leveraged policies, although leveraged policies purchased in the past remain among the life insurance policies currently in force. However, IRS officials expressed concern that HIPAA did not eliminate the tax arbitrage opportunities available through business-owned life insurance and that banks and other highly leveraged financial institutions may be indirectly borrowing to purchase policies on employees. IRS officials said that the agency is also concerned that banks are using separate account policies to maintain control over investments in a way that is inconsistent with the Internal Revenue Code. These officials said that the agency is continuing to study these business-owned life insurance issues at selected banks. 
Finally, in September 2003, the IRS issued final regulations on the tax treatment of split-dollar life insurance policies—policies in which the employer and employee generally share costs and benefits. Under the regulations, corporations cannot provide tax-free compensation to executives using split-dollar policies. State law requires that one party have an insurable interest in another to be able to take out a life insurance policy on that person and defines the conditions for one party to have an insurable interest in the life of another person. Historically, insurable interest related to a family’s dependency on an individual and a business’s risk of financial loss in the event of the death of a key employee. The significance of employers having an insurable interest in their employees is illustrated by the 2002 decision of a federal district court in Texas. The court found that Wal-Mart did not have an insurable interest in employees’ lives under Texas law, given the nature of the policies taken out on each of 350,000 Wal-Mart employees, and that under Texas law, Wal-Mart could not collect on the death benefits paid under policies covering deceased employees. NAIC, a membership organization of chief state insurance regulators that helps promote coordination among the states, initially developed model guidelines for business-owned life insurance in 1992 and revised them in 2002. The 1992 guidelines suggested that states consider including in their laws provisions that recognize employers’ insurable interest in employees, including nonmanagement employees who could expect to receive benefits. The 2002 revision added a recommendation for states to consider requiring employee consent to be insured and prohibiting employers from retaliating against employees who refused to grant their consent. Since NAIC adopted the revised guidelines, several states have passed legislation requiring employers to obtain employees’ written consent before taking insurance on them. 
In some states consent provisions apply to life insurance policies in general, while in others these provisions specifically address business-owned life insurance. Our preliminary analysis indicated that, as of July 31, 2003, more than 30 states required written consent, including several states with provisions specific to business-owned life insurance. However, most of these states exempted group life insurance policies from consent requirements. Also, in some states consent requirements were satisfied if an employee did not object to a notice of the employer’s intent to purchase a policy. Additionally, at least one state required employers to notify employees when purchasing business-owned life insurance, but did not require employee consent. Officials of NAIC and four state insurance departments—California, Illinois, New York, and Texas—stated that, in recent years, some state legislatures adopted laws broadening the definition of employers’ insurable interest to include broader groups of employees in order to permit using business-owned life insurance to finance employee benefit programs, such as current employee and retiree health care. The officials said that such laws responded in part to Financial Accounting Standard 106, which took effect in 1992 and requires businesses to report the present value of future postretirement employee benefits as employees earn them. Also, our preliminary analysis showed that several states limit the aggregate amount of insurance coverage on nonmanagement employees to an amount commensurate with the business’s employee benefit liabilities. In addition, a few states recognize an employer’s insurable interest in employees, provided that businesses use the proceeds solely to fund benefit programs. 
Insurance department officials from the four states also told us that they primarily address compliance with their respective laws through a review of the proposed policy forms that insurers must submit for approval before marketing policies in their states. For example, in New York, the insurance department developed a checklist of items that must be included on forms that will be used for business-owned life insurance policies to ensure that the forms comply with the state’s notification, consent, and other requirements. While NAIC officials said that state insurance regulators would generally have the authority to review policies currently in force for compliance with any state requirements, the officials from the four states said that they had not examined policies sold to confirm that employees consented to be insured or, where applicable, to test whether the amounts of coverage were appropriate. Officials in the four states said that their departments would investigate business-owned life insurance sales through their market conduct examinations of insurers if they observed a pattern of consumer complaints about such sales. However, the officials said that generally they had not received complaints about business-owned life insurance. Also, NAIC officials told us that the organization maintains a national database of consumer complaints made to state insurance regulators and that business-owned life insurance has not been a source of complaints. Mr. Chairman, this completes my prepared statement. I would be happy to respond to any questions you or other Members of the Committee may have at this time. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO.
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Business-owned life insurance is held by employers on the lives of their employees, and the employer is the beneficiary of these policies. Unless prohibited by state law, businesses can retain ownership of these policies regardless of whether the employment relationship has ended. Generally, business-owned life insurance is permanent, lasting for the life of the employee and accumulating cash value as it provides coverage. Attractive features of business-owned life insurance, which are common to all permanent life insurance, generally include both tax-free accumulation of earnings on the policies' cash value and tax-free receipt of the death benefit. To address concerns that businesses were abusing their ability to deduct interest expenses on loans taken against the value of their policies, Congress passed legislation to limit this practice, and the Internal Revenue Service (IRS) and Department of Justice pursued litigation against some businesses. But concerns have remained regarding employers' ability to benefit from insuring their employees' lives. This testimony provides some preliminary information from ongoing GAO work on (1) the uses and prevalence of business-owned life insurance and (2) federal and state regulatory requirements for and oversight of business-owned life insurance. GAO's preliminary work indicated that no comprehensive data are available on the uses of business-owned life insurance policies; however, businesses can purchase these policies to fund current and future employee benefits and receive tax advantages in the process. Federal bank regulators have collected some financial information on banks' and thrifts' business-owned life insurance holdings, but the data are not comprehensive and do not address the uses of the policies. 
The Securities and Exchange Commission (SEC), the IRS, state insurance regulators, and insurance companies told GAO that they generally have not collected comprehensive data on the sales or purchases of these policies or on their intended uses, because they have not had a need for such data in fulfilling their regulatory missions. In an effort to collect comprehensive data, GAO considered surveying insurance companies about their sales of business-owned life insurance. However, based on a pretest with six insurance companies, GAO determined that it would not be able to obtain sufficiently reliable data to allow it to conduct a survey. GAO found, however, that some insurers have voluntarily disclosed information about sales of business-owned policies and that some noninsurance businesses have included examples of their uses in annual financial reports filed with SEC. As part of their responsibility to oversee the safety and soundness of banks and thrifts, the federal bank regulators have issued guidelines for institutions that buy business-owned life insurance. Also, they told GAO that they have reviewed the holdings of many institutions with significant amounts of business-owned life insurance and concluded that major supervisory concerns do not exist. SEC officials said that the agency has not issued specific requirements for holders of business-owned life insurance, relying instead on its broadly applicable requirement that public companies disclose information material to investors in their financial statements; SEC did not have investor protection concerns about public firms holding business-owned life insurance. The IRS had some requirements related to the tax treatment of business-owned life insurance and expressed some concerns about compliance with these requirements. 
State laws governing business-owned life insurance differed; the four states' regulators that GAO interviewed described some limited oversight of the policies, and these regulators and NAIC reported no problems with them.
Approximately 2.6 million federal employees throughout the United States and abroad execute the responsibilities of the federal government. Federal employees work in every state, with about 90 percent outside the Washington, D.C., metropolitan area. Federal workers perform functions across a multitude of sectors, from those vital to the long-term well-being of the country—such as environmental protection, intelligence, social work, and financial services—to those directly charged with aspects of public safety—including corrections, airport and aviation safety, medical services, border protection, and agricultural safety. Worker protection strategies are crucial to sustain an adequate workforce during a pandemic. During the peak of an outbreak of a severe influenza pandemic in the United States, an estimated 40 percent of the workforce could be unable to work because of illness, the need to care for ill family members, or fear of infection. While the commitment of federal workers to carry out the missions of their agencies during natural and man-made disasters and emergencies is evident from past disasters, critical federal workers have sometimes been left to fend for themselves during such situations. For example, in the aftermath of Hurricane Katrina in 2005, many essential federal personnel in New Orleans did not have housing and, therefore, were not able to return to work. Unlike oil and gas workers in New Orleans, whose companies sought to secure housing for them, local federal workers did not have an advocate that would ensure the speedy reconstitution of essential services. In many cases, essential federal employees queued up for temporary housing in long lines.
The federal government has issued guidance to assist organizations of all types in developing plans for pandemic events, including a national strategy that discusses the threat and potential impact of a pandemic influenza event and an implementation plan for the national strategy that identifies roles and responsibilities for the federal government, the private sector, and others. HHS has also published a series of checklists intended to aid preparation for a pandemic across all segments of society. These include checklists for organizations such as state and local governments, U.S. businesses, individuals and families, schools, health care organizations, and community organizations. As pandemic influenza presents unique challenges to the coordination of the federal effort, joint and integrated planning across all levels of government and the private sector is essential to ensure that available national capabilities and authorities produce detailed plans and response actions that are complementary, compatible, and coordinated. All federal agencies are expected to develop their own pandemic plans that, along with other requirements, describe how each agency will provide for the safety and health of its employees and support the federal government’s efforts to prepare for, respond to, and recover from a pandemic. Because the dynamic nature of pandemic influenza requires that the scope of federal government continuity of operations (COOP) planning include preparing for a catastrophic event that is not geographically or temporally bounded, the Federal Emergency Management Agency (FEMA) concluded that planning for a pandemic requires a state of preparedness that is beyond traditional federal government COOP planning. For example, for pandemic planning purposes, essential functions may be broader than traditional 30-day COOP-essential functions. Federal agency pandemic planning guidance can be found at http://www.pandemicflu.gov/plan/federal/index.html.
The Implementation Plan issued in May 2006 directs federal agencies to have operational pandemic plans. Agencies’ responses to our survey questions indicate that the agencies’ preparedness efforts are less than uniform. Although all of the 24 CFO Act agencies reported being engaged in planning for pandemic influenza to some degree, several agencies reported that they were still developing their pandemic plans. According to the survey responses, the development of practices for federal workforce protection in the event of a pandemic is also at the beginning stages for several agencies. In November of 2006, the HSC issued the Key Elements of Departmental Pandemic Influenza Operational Plan (Key Elements), which had a checklist for federal agencies to use in their pandemic preparedness. The Key Elements checklist covered subjects dealing with the safety and health of department employees, essential functions and services and how agencies will maintain them in the event of significant and sustained absenteeism, support of the federal response, and communication with stakeholders during a pandemic. The Key Elements stated that to ensure uniform preparedness across the U.S. government, the HSC was including a request that by December 2006 the agencies certify in writing to the HSC that they were addressing applicable elements of the checklist. A letter to the council stating that an agency was addressing the elements in the checklist in its planning was sufficient for certification. According to White House counsel from the prior administration, all of the 24 CFO Act agencies required to certify with the HSC did so, although not all of the agencies met the December 2006 deadline. Subsequently, in August 2008, the HSC revised the Key Elements to reflect current federal government guidance on pandemic planning. 
The HSC requested that all department and agency heads recertify that their pandemic plans were addressing all the applicable elements of pandemic planning stipulated in the updated checklist by October 15, 2008. The updated checklist provided revisions of some key elements and added new elements. Additionally, the revised checklist required that agencies plan for a severe pandemic, which requires planning for prolonged implementation of community mitigation measures that could affect workforce absenteeism, such as school closures, for up to 12 weeks. A new planning element also asked if the agency planned to purchase and stockpile antiviral medications and personal protective equipment for employees identified through risk assessments. Our survey questions for the pandemic coordinators of the 24 CFO Act agencies focused on areas similar to the elements from the HSC checklist dealing with the safety and health of agency employees and essential functions. In addition to asking agencies about their pandemic plans, we asked them whether they have identified essential functions other than first response that cannot be performed remotely in the event of a pandemic, planned measures to protect workers who will not be able to work remotely, established social distancing strategies, tested their IT capabilities, and communicated their human capital pandemic policies. Survey responses represent the main department or agency only unless components are specifically mentioned. In the introduction to the Key Elements, the HSC recognized that pandemic planning is not a static process and encouraged departments and agencies to revise their plans and procedures as new federal guidance is developed. However, several of the agencies we surveyed reported that they were still formulating their pandemic plans in May 2008. For example, the Small Business Administration (SBA) stated that the agency had begun drafting its pandemic plan but had not completed or cleared it. 
In February 2009, SBA reported that it had begun to draft a more complete pandemic influenza annex to its COOP plan with an estimated completion date of spring 2009. The Department of Defense (DOD) had completed its overarching departmentwide plan, which tasked its components to develop COOP pandemic plans. The department was coordinating the plans among the combatant commands and military services. DOD officials commented that some DOD components have had pandemic influenza plans in place for several years. In addition, DOD installations have been required to have Force Health Protection Plans for years, and DOD reported that the installations are tailoring these plans to include pandemic influenza considerations. All of the 24 CFO Act agencies surveyed, with the exception of OPM, the National Science Foundation (NSF), and the Department of Housing and Urban Development (HUD), required their components to develop pandemic plans. OPM indicated that all of its essential functions are performed at the department level. NSF reported not having any essential functions as defined by Federal Continuity Directive 2 but that it does have important government functions that the agency intends to continue during a pandemic. According to an NSF continuity manager, all of NSF’s government functions are performed at the department level. HUD did not explain why it did not require its components to develop pandemic plans. The Environmental Protection Agency (EPA), SBA, the General Services Administration (GSA), the Department of State (DOS), the Department of Energy (DOE), and the Nuclear Regulatory Commission (NRC) required regional and program offices, in addition to components and bureaus, where applicable, to develop pandemic plans, and as mentioned previously, the DOD combatant commands and services were required to prepare and validate plans. 
Six of the agencies surveyed—the Department of Commerce (DOC), the Department of Education (Education), EPA, Treasury, the National Aeronautics and Space Administration (NASA), and SBA—reported requiring their components to incorporate pandemic planning into or develop pandemic annexes or addenda to their COOP plans. DOC, for example, reported providing templates to each of its components to assist them in developing their own annexes or addenda to their COOP plans. The Implementation Plan instructs agencies that institutional planning efforts should address the question of the agency’s essential functions and how they will be maintained in the event of significant and sustained absenteeism. Furthermore, the Key Elements asks for plans to include definitions and identification of essential functions needed to sustain agency mission and operation. This includes the determination of which, if any, essential functions, or nonessential operational support functions can be suspended and for what duration before adversely affecting agency mission. The Key Elements also calls on agencies to identify positions, skills, and personnel needed to continue essential functions and develop a plan to ensure and consider appropriate level of staffing to continue these functions. Identifying essential functions and enumerating the employees who would perform them is the first step in training those employees, communicating the risks and expectations of working during a pandemic, and budgeting and planning for measures that would mitigate those risks. Of the 24 agencies surveyed, 19 reported that they have identified essential functions at both the department and component levels that cannot be continued through telework in the event of pandemic influenza or, in the case of OPM and the U.S. Agency for International Development (USAID), determined that all of their essential functions could be performed remotely. 
NSF reported that all of its important government functions could be performed remotely. Of the 5 agencies reporting that they had not identified such functions, DOJ reported identifying essential functions at the component level but not at the departmental level, noting that the department’s plan is being revised. DOJ stated that upon completion the plan will address department-essential functions that cannot be continued via telework. At the time of our survey, GSA reported not identifying its essential functions in the event of a pandemic, while 3 agencies—DOD, SBA, and HUD—were still identifying essential functions or determining which essential functions could not be continued through telework. DOD reported that its classified work prohibits telework for approximately 26,200 essential civilian personnel and its mission requirements preclude telework for approximately 89,500 positions. DOD has approximately 700,000 civilian employees on its payroll. DOD stated that it is finalizing a list of essential functions at the department and component levels. SBA reported that it was expanding on its basic COOP planning to account for the circumstances of a pandemic, stating that the agency has identified its primary essential functions for COOP purposes, functions that could be performed for the most part remotely and through telework. HUD reported that it has identified its COOP-essential functions but has not confirmed that they could be continued through telework. Table 1 lists some examples agencies provided of their essential functions that cannot be performed remotely in the event of a pandemic. The Department of Labor (DOL) reported identifying essential functions in accordance with federal pandemic guidance. DOL stated that, in recognition that an influenza pandemic will last much longer than a traditional 30-day or less COOP event, the DOL pandemic plan and component agency pandemic plans include functions beyond the essential functions in the DOL and agency COOP plans.
The department expects that performance of its essential functions will ebb and flow based on the availability of personnel and telecommunications. DOL agencies identified which work would be accomplished through telework and which could be done safely in the office using social distancing methods. As part of its ongoing planning, DOL requires its agencies to continuously identify who would accomplish the essential functions and whether the work could be done through telework, cross-train at least three employees for each function, and ensure that employees have the equipment needed to work at home and test their ability to do so. Some, but not all, DOL component agencies have identified which essential functions can only be performed within a DOL facility, with notice to the affected employees. Identifying essential functions and the employees who perform them is the first step before informing these employees that they may be expected to continue operations in the event of a pandemic, as well as preparing them for the risks of performing such functions on-site. Eighteen agencies reported that they have notified some or all employees in department-level essential functions that they may be expected to continue operations during a pandemic, and 16 reported doing so for employees in component-level essential functions. Three pandemic coordinators did not know whether their employees had been notified. A number of agencies reported having informed some employees who perform essential functions that they may be expected to continue operations, despite not having determined the number of such employees. We asked the pandemic coordinators from the 24 CFO Act agencies whether they had planned or budgeted for any of seven potential measures to protect workers whose duties require their on-site presence during a pandemic.
The measures included in our survey were among the recommendations for worker protection issued through the Occupational Safety and Health Administration (OSHA), HHS, or FEMA guidance. They included procurement of personal protective equipment such as masks and gloves; supplemental cleaning programs for common areas; distribution of hygiene supplies (hand sanitizers, trash receptacles with hands-free lids, etc.); obtaining antiviral medications; arrangements to obtain pandemic vaccines to the extent available; prioritization of employees for vaccinations; and prioritization of employees for antiviral medications. The guidance recommends the measures according to risk assessments for employees, and therefore, based on the agencies’ missions and activities, not all measures are equally appropriate for all agencies. Figure 1 details the agencies’ responses regarding the measures they plan to use to protect their employees during a pandemic. As the figure shows, procurement of personal protective equipment and distribution of hygiene supplies had the highest number of positive responses. Sixteen agencies reported arranging for obtaining antiviral medication and supplemental office cleaning programs for common areas. Agencies reported arrangements to obtain vaccines, should they become available, less frequently. Eight agencies said that they had planned for all seven measures to some degree. Agency responses to this set of questions emphasized different approaches to planning for employee protective measures in the event of a pandemic. For example, DOD reported investing approximately $24 million in antibiotics to treat bacterial infections secondary to pandemic influenza. DOD also noted that a pandemic influenza vaccination strategy for key civilian personnel within DOD is currently in development. DOJ said its planning and budgeting for the measures are limited to departmental first responders from its law enforcement components and leadership.
However, DOJ also reported that it plans to advise all components to budget for emergency equipment and supplies in their future budget submissions, in accordance with Federal Continuity Directive 1 requirements. DHS reported that it had done fit testing of employees for N95 respirators and training on the proper use of other personal protective equipment and had pre-positioned stockpiles of the equipment for employees in 52 locations. DOS noted that it had provided pandemic influenza-specific training to janitorial staff, with a focus on maintaining proper disinfection of restrooms, offices, and common areas as well as on their own protection. The Key Elements asks agencies if they have considered implementation of social distancing policies to prevent pandemic influenza spread at work. Influenza is thought to be primarily spread through large respiratory droplets that directly contact the nose, mouth, or eyes. These droplets are produced when infected people cough, sneeze, or talk, sending the infectious droplets into the air and into contact with other people. Large droplets can only travel a limited distance; therefore, people should limit close contact with others when possible. Examples of social distancing strategies include requiring six feet of separation between people or canceling events and closing or restricting access to certain buildings. Employees may decrease their risk of infection by practicing social distancing and minimizing their nonessential contacts and exposure to highly populated environments. In many instances, low-cost and sustainable social distancing practices can be adopted by employees at the workplace for the duration of a pandemic outbreak. The agencies reported considering a variety of social distancing strategies in the context of pandemic preparedness. For example, the survey revealed that the most frequently cited social distancing strategies involved using telework and flexible schedules for their workforce.
Eighteen agencies were considering low-cost social distancing strategies, such as planning for restrictions on meetings and gatherings and canceling unnecessary travel. Only 8 agencies reported considering alternatives to public transportation for their employees. Figure 2 shows the number of agencies responding positively about their plans to use various social distancing strategies in the context of pandemic preparedness. The agencies reported some other examples of social distancing strategies. For instance, DOD’s pandemic plan provides authority to installation commanders to implement Emergency Health Powers to impose movement restriction and use of containment strategies, such as isolation and quarantine. As a result of pandemic exercises, DOD also plans to restructure cubicles and other work space during a pandemic. The Department of Agriculture (USDA) intends to break up workdays into shifts to minimize the number of people on-site performing essential functions, whereas the Social Security Administration (SSA) reported planning to stagger breaks and strategic reassignments. Although the planning process has not been completed, DOL noted that it plans to implement parking restrictions for essential employees who would need to be physically in the office and post signage for elevators and restrooms to limit use to one person at a time. In addition, NRC reported that it enhanced telephone conferencing capability so that it can locate and virtually assemble teams, managers, and staff as needed. Many of the agencies’ pandemic influenza plans rely on social distancing strategies, primarily telework, to carry out the functions of the federal government in the event of a pandemic outbreak. Accordingly, the Key Elements asks if agencies have ensured that their telecommunications infrastructures are capable of handling telework arrangements. 
As part of their pandemic planning, agencies need to review their telework infrastructures and look for ways to expand their capacities, if necessary. In our survey, agencies reported testing their IT capabilities to varying degrees. Only one agency, NSF, stated that it tested its IT infrastructure to a great extent. NSF reported assessing its telework system formally several times each year and informally each day through various means. The agency noted that it has an annual COOP exercise that tests the IT infrastructure it would use in a pandemic situation. Twice a year, tests are done to ensure that the NSF computer service recovery site can provide a connection to the agency’s IT infrastructure. NSF also stated that a majority of its staff have telework agreements in place and telework at least on an episodic basis. In contrast, five of the surveyed agencies acknowledged that they had tested their IT network capacity to little or no extent. Table 2 shows the agency responses to this question. Several agencies provided more detail on their IT network testing efforts. For example, DOT stated that over the past 2 years, the department had conducted a number of IT and telework exercises. One of these occurred on April 17, 2008, when the department tested its telework capacity for all headquarters operations during the visit of Pope Benedict XVI, who conducted a Mass at the Washington Nationals Stadium, 1 block from DOT headquarters. Other examples of IT capacity testing included the Office of the Secretary of Defense’s live 2-day pandemic influenza-based exercise, which included employees who teleworked from home or other alternative worksites. An HHS component, the Division of Payment Management, reported executing a business continuity exercise, which incorporated a scenario of responding to an outbreak of influenza in the Washington, D.C., area. The division directed 40 percent of its employees, 31 employees plus 3 contractors, to work from home.
The goal of the exercise was to test employees’ access to critical systems and determine IT gaps, the ability to continue transactions, and the ability to communicate during an emergency. DOL stated that it has established a committee to focus on increasing its telework testing and providing guidance for agency program managers to do more direct tests. On the other hand, SSA noted that while it has telework arrangements that can be used during a pandemic outbreak, the agency has elected not to develop a specific telework contingency because telework does not lend itself to the agency’s primary mission. Federal Continuity Directive 1 requires that each agency implement a process to communicate its human capital guidance for emergencies— pay, leave, staffing, and other human resources flexibilities—to managers and make staff aware of that guidance to ensure that the agency continues essential functions during an emergency. Given the potential severity of pandemic influenza, it is important that employees understand the policies and requirements of their agencies and the alternatives, such as telework, that may be available to them. Many employees and their supervisors will have questions about their rights, entitlements, alternative work arrangements, benefits, leave and pay flexibilities, and hiring flexibilities available during the turmoil created by a pandemic. Twenty-one of the 24 pandemic coordinators surveyed reported making information available to their employees on how human capital policies and flexibilities will change in the event of a pandemic outbreak. Three agencies—DOC, GSA, and SSA—reported that they have not. Of the agencies that reported making information available, 2 had done so indirectly. HUD stated that it shared information with unions, and Treasury reported that it briefed its human capital officers on the human capital policies and flexibilities available to address pandemic issues. 
NRC reported that in September 2008 its pandemic plan was completed and made available to staff through the agencywide document management system. The plan reflected human capital policies and flexibilities. Many of the agencies that made information available did so through their internal Web sites, both by posting their own plans and guidance and by linking to OPM guidance on human capital policies. Of those agencies, several also held town hall meetings or all-staff briefings to share guidance with employees. A number of agencies reported distributing pamphlets or brochures that contained human capital information. The Federal Bureau of Prisons (BOP), a component of DOJ, has the mission of protecting society by confining offenders in the controlled environments of prisons and community-based facilities that are safe, humane, cost-efficient, and appropriately secure and that provide work and other self-improvement opportunities to assist offenders in becoming law-abiding citizens. BOP has 114 correctional facilities with a central office located in Washington, D.C., and 6 regional offices. The central office provides administrative oversight of its facilities, and the 6 regional offices directly support operations of the facilities in their respective geographic areas of the country. As of January 8, 2009, the agency was responsible for the custody and care of 201,113 federal inmates. Approximately 35,000 federal employees ensure the security of federal prisons and provide inmates with programs and services. According to BOP officials, the warden is permitted to use all facility staff, including noncorrectional services staff, such as secretaries, nurses, or dentists, for correctional service assignments during emergencies and at other designated times.
One of BOP’s published core values is that all employees are “correctional workers first,” regardless of the specific position to which an individual is hired, and both correctional services staff and noncorrectional services staff are responsible for the safety and security of the facility. BOP operates facilities at different security levels: each facility is designated as minimum, low, medium, or high security—with increasing security features, inmate-to-staff ratios, and control of inmate movement at each higher security level—or as an administrative facility with a special mission, such as the detention of pretrial offenders or the treatment of inmates with serious or chronic medical problems. Some BOP facilities are part of BOP’s 13 federal correctional complexes, which consist of two or more colocated facilities. BOP facilities are given a security designation based on the level of security and staff supervision the facility is able to provide. DOJ’s pandemic influenza plan focuses on minimizing the effects of a pandemic on its workforce and operations via techniques such as social distancing, infection control, personal hygiene, personnel training, and telework. The department’s plan is designed to supplement the traditional, all-hazards COOP plan. According to DOJ’s plan, each DOJ component is required to identify its specific responsibilities for maintaining essential functions during a pandemic influenza outbreak, comply with Federal Continuity Directives 1 and 2 and FEMA guidelines, and certify compliance with DOJ’s Security and Emergency Planning Department. DOJ’s primary oversight role with respect to its components’ pandemic planning is its periodic random assessment of component continuity programs. BOP’s pandemic influenza plan was developed through its Office of Emergency Preparedness and was disseminated to its central office and six regional offices in May 2008.
In conjunction with BOP’s pandemic plan, BOP’s Health Services Division developed four supplemental pandemic flu modules for facility-level planning—Surveillance and Infection Control, Antiviral Medications and Vaccines, Health Care Delivery, and Care of the Deceased—which provide detailed instructions for health-related aspects of pandemic flu emergency response. Specifically, the modules contain guidelines, standard operating procedures, checklists, and screening forms. The final modules became available to individual BOP facilities in August 2008, and the deadline to submit facility-specific pandemic plans was extended from September to November 2008. Prior to the plan’s release, BOP held conferences with the Health Services Division and infection control officers to solicit feedback on the draft plan’s feasibility and to encourage the facilities to start implementing elements of the plan, such as early coordination with local communities, surveillance of seasonal influenza, and promotion of good health habits among the correctional workers and the inmates. BOP’s Antiviral Medications and Vaccines module outlines guidance on the stockpiling, distribution, and dispensing of antiviral medications. The module also requires the facilities to review HHS priority groups for receiving antiviral medication and pandemic vaccine; develop local procedures for dispensing antiviral medication and vaccine to employees and inmates according to the central office guidance issued by the medical director; and coordinate with local health departments to ensure the facility’s inclusion in the Strategic National Stockpile (SNS), which is a national repository of medical supplies that is designed to supplement and resupply local public health agencies in the event of a national emergency. BOP headquarters provided funding to the regional offices to stockpile Tamiflu, an antiviral medication, and a list of GSA-approved sources to procure additional supplies.
Based on a historical review of the 1918 pandemic influenza and HHS planning assumptions, BOP intends to supply antiviral medication to 15 percent of the correctional workers and inmates in each facility if the influenza outbreak is geographically spread throughout the United States. BOP’s pandemic plan anticipates that its supply of Tamiflu will come from two sources—BOP’s established stockpile and each BOP facility’s coordinated effort with its local health department to ensure inclusion in the SNS for antiviral medication for treatment. According to a regional BOP official, antiviral medication is already stockpiled at designated storage sites in each region, and each storage site is responsible for plans to distribute the antiviral medication throughout its respective region. For example, the North Central Regional Office in Kansas City, Missouri, reported managing its stockpile through a GSA contract with McKesson Pharmaceuticals. Under the terms of the contract, the regional office can exchange the antiviral medication after 5 years if a pandemic does not occur. Upon expiration of the antiviral medication, the contract requires either recertification of the existing medications or a new shipment. At the time of our review, no BOP-wide pandemic or health care management exercise had been conducted; however, the Office of Emergency Preparedness was planning such exercises. At the same time, individual institutions and regional health services offices have conducted exercises on specific aspects of pandemic preparedness. For example, the North Central Regional Office in Kansas City reported participating in pandemic tabletop exercises and interagency tests coordinated by the Kansas City FEB. Regional directors have had basic pandemic training, but there have been no exercises on how to manage a pandemic or a local facility in the event of one.
The regional managers have ongoing conferences and have been trained on overarching BOP pandemic plans and strategies, such as social distancing, hand hygiene, and stockpiling. BOP’s pandemic plan addresses the need for infection control measures to mitigate influenza transmission and calls for education of correctional workers and the inmate population. Accordingly, all facilities are instructed that they should have readily available and ample supplies of bar soap and liquid soap in the restrooms, alcohol-based wipes throughout the facility, and hand sanitizers if approved by the warden. A BOP official noted that alcohol-based antibacterial hand sanitizers would not be available to the inmates because of the sanitizer’s high alcohol content, which can be misused by the inmates. The Surveillance and Infection Control module details recommendations for use of personal protective equipment such as surgical or procedure masks; N95 respirators, which BOP stipulates should only be used in the context of an OSHA-defined respiratory protection program; and gloves, when directly involved in caring for ill correctional workers and inmates. BOP’s pandemic influenza plans also require training and education of correctional workers and inmates, at the component and facility levels, on pandemic influenza and aspects of facility management in case of an outbreak. The use of social distancing measures to protect correctional workers in the event of a pandemic presents a challenge. Although BOP’s Surveillance and Infection Control module advocates social distancing during a pandemic outbreak, according to several BOP officials, social distancing measures are difficult to implement at the facility level. In older facilities, such as USP Leavenworth, there may be a greater need for correctional workers to be physically present and work in proximity to one another and the inmates to maintain facility security, address emergencies, and deal with the inmate population.
On the other hand, recently constructed facilities such as the Allenwood Federal Correctional Complex have closed-circuit video monitoring systems throughout the facilities, which enable the correctional workers to better monitor the inmate population and minimize contact. However, BOP officials said that there are many situations in which close contact is inevitable between correctional workers and inmates and where personal protective equipment, such as gloves and masks, would not be feasible. In the event of a fight between inmates, for example, correctional workers would not have time to put on gloves or masks and any in-place masks would be likely to fall off. In addition, according to a medical officer at USP Leavenworth, gloves cannot be worn for a long period of time without compromising the health of the skin. Another BOP official said that various facilities have unique requirements that they need to factor into planning for the use of social distancing measures. Examples include prisons with different layouts; facilities where inmates have increased needs, for example, inmates with diabetes or those who need wheelchairs; and facilities where there are inmates who cannot be colocated for security reasons. A unique pandemic planning challenge facing federal correctional workers is the maintenance of an effective custodial relationship between them and the inmates in federal prisons. According to BOP officials, this relationship depends on communication and mutual trust, as correctional workers in federal prisons do not carry weapons or batons inside the cellblocks. Rather, they use verbal methods of communication to keep order. The BOP officials at USP Leavenworth said that they would not allow a situation where correctional workers wear N95 respirators or surgical masks but the inmates do not. Seeing a correctional worker wearing a mask may cause fear among inmates and could potentially contribute to an unstable situation. 
The BOP officials at the Allenwood Federal Correctional Complex said that they would provide personal protective equipment to both correctional workers and inmates and use antiviral medication combined with social distancing strategies to mitigate the spread of influenza. An Allenwood Federal Correctional Complex official noted that education of staff and the inmate population about pandemic influenza would be an important part of the facility’s pandemic effort. The guidance provided by BOP’s central office and regional offices does not clearly establish how pandemic pharmaceutical interventions will be prioritized and allocated to the facilities. For example, an official at USP Leavenworth said that the facilities do not know how much antiviral medication they can rely on from the SNS in addition to the 15 percent BOP allocation. The distribution of antiviral medications to Leavenworth correctional workers and inmates would take into account a variety of factors, such as age; health factors, including preexisting conditions; and severity of the pandemic event. Because of these factors, the number of antivirals needed would be difficult to calculate in advance. In addition, priority would always depend on the situation, and the warden, working with the facility’s medical director, would make the final determinations. Despite the challenges BOP faces with pandemic influenza planning, the bureau has advantages unique to its facilities. Every correctional facility is a closed and self-contained system, and each facility is somewhat self-sufficient, maintaining a 30-day supply of food, water, and other necessities for any type of contingency. Correctional facilities also have well-tested experience in emergency and health hazard planning and management and infection control, which provides them with a solid foundation to build on for pandemic influenza preparedness.
Additionally, correctional facilities generally have strong ties with their local communities, which is important because pandemic influenza will be addressed largely with the resources available to each community it affects. For example, in addition to their own medical staff, BOP facilities rely on local hospitals and work with community first responders in emergencies. Having medical staff on board, an advantage some of the other agencies lack, also makes pandemic planning and decision making easier. FMS, a component of Treasury, provides central payment services to federal agencies, operates the federal government’s collections and deposit systems, provides governmentwide accounting and reporting services, and manages the collection of delinquent debt owed to the government. FMS is the primary disburser of payments to individuals and businesses on behalf of federal agencies, disbursing more than $1.6 trillion in federal payments annually, including Social Security payments, veterans’ benefits, and income tax refunds, to more than 100 million people. FMS has about 2,100 employees, one-third of whom are located in four regional financial centers—Austin, Texas; Kansas City, Missouri; Philadelphia, Pennsylvania; and San Francisco, California. The regional financial centers issue the majority of their payments by electronic funds transfers and the rest by wire transfers and paper checks. The centers are production facilities that rely heavily on integrated computer and telecommunications systems to perform their mission. They also rely on light manufacturing operations to print and enclose checks for release at specific times of the month. For the most part, the regional financial centers are planning that in the event of a pandemic, the nature of their business will be unchanged, but there will be issues with sickness, absenteeism, communication, and hygiene that they must address.
Employees whose positions require, on a daily basis, direct handling of materials or on-site activity that cannot be handled remotely or at an alternative worksite are not eligible for telework. According to an FMS official, even with a minimum crew on-site to produce paper checks, there will still be instances when employees will need to be within 3 feet of other employees. For example, a certification process for the checks includes internal controls, which necessitates having more than one employee present in a confined space. The Kansas City Financial Center (KFC) estimated that it would need 13 essential employees to continue on-site operations in the event of a pandemic, including employees such as payment control technicians, mail processing clerks, and production machinery repairers. The Philadelphia Financial Center (PFC) explained that its peak production workload is toward the end of the month when it is preparing the monthly Social Security benefit payments. At this point in the month, the PFC will need the majority of the payment and mail operations branch employees present, approximately 25 employees. Treasury’s pandemic plan is an annex to its COOP plan and describes how departmental offices and its bureaus will discharge their responsibilities in the event of a pandemic. The Treasury pandemic plan describes the department’s operational approach to employee safety and COOP and the manner in which Treasury will communicate with its stakeholders. To facilitate consistent planning across Treasury, its Office of Emergency Preparedness provided all department offices and bureaus with guidance for departmental planning from the Implementation Plan. According to an FMS official, Treasury also directed its components to www.pandemicflu.gov for additional guidance. FMS officials said that they have a biweekly teleconference to discuss business continuity planning, including the pandemic plans for the regional financial centers. 
An FMS official commented that the primary guidance from FMS to the regional centers came from the Key Elements provided by the HSC. The KFC reported that the Kansas City FEB’s Continuity Working Group held several workshops to discuss pandemic planning. At these workshops, and in conjunction with online guidance from the Office of Management and Budget, OPM, and FEMA, the KFC developed its own plan, striving for consistency in assimilating the guidance from all sources. FMS officials reported that the labor union representing FMS’s bargaining unit employees, the National Treasury Employees Union, was involved in the pandemic planning process for FMS. The FMS Security Division is responsible for ensuring uniformity in pandemic planning across the regional financial centers. The four regional financial centers’ pandemic plans follow the same basic template with an overview and center objectives followed by sections on succession planning, human resource issues, telework issues, communication, and hygiene. All of the regional financial centers’ pandemic plans contain detailed guidance for employees on human capital policies in the event of a pandemic. All of the regional plans also have guidance to maintain links with their respective FEBs in order to be involved in local planning and communications. At the KFC, for example, through monthly meetings and special workshops sponsored by the Kansas City FEB, the regional financial center has had interactions with state and local entities, including representatives from the Missouri state emergency network and two local county health offices. PFC officials also reported participating in two tabletop exercises focused on emergency planning that were hosted by the Philadelphia FEB. As part of the center pandemic plans, officials researched the types of supplies they would need based on the risks faced in their facilities. 
For example, the janitorial staff now routinely wipes off door handles, tabletops, and other high-traffic areas. As part of the KFC’s plan, the center stocks such items as N95 respirators, gloves, hand sanitizers, disinfectants, and fanny packs that include items such as ready-to-eat meals, hand-cranked flashlights, small first-aid kits, and emergency blankets. The KFC Deputy Director commented that in the event of a pandemic, the KFC would encourage the use of N95 respirators and gloves and that the facility had made a decision to pre-position these supplies. The KFC plans to stock enough for 15 to 20 employees per day for the first pandemic wave. At the onset of the first pandemic trigger, the KFC plans to order additional supplies from GSA. KFC officials believe that this will allow the center to have enough supplies to last during subsequent pandemic waves. The KFC has also discussed housing some employees on-site during a pandemic, but this will be a greater possibility once the exercise facility, including showers and lockers, is finished. The KFC Deputy Director said that the organization is aware that part of the U.S. economy rests on the regional financial centers and that they will need to issue payments even during a pandemic. PFC officials reported having in stock approximately 1,200 N95 respirators, hand sanitizers, and gloves, and the PFC has pre-positioned masks and gloves in each branch. PFC officials noted that additional supplies are being procured. Although FMS said that continuing communication with employees is needed, training, education, and materials have been provided to managers concerning essential functions and employee safety and health in the event of a pandemic. Essential employees have been told in broad terms that operations will continue during a pandemic.
For example, the KFC Director has asked that designated critical employees be approached to determine whether, in the event of a pandemic crisis, they would be receptive to sheltering in place. An FMS official reported that the agency presented a pandemic preparedness briefing in 2006, which shared with the regional facilities’ employees pandemic-related subjects, such as cough etiquette. FMS also reported communicating the elements needed for a home pandemic preparedness kit as well as personal pandemic planning to all employees. The PFC stated that it plans to obtain informational materials on safety and health during a pandemic from local health care facilities for distribution to employees. The center incorporated training on pandemic awareness into its annual safety and health training. The FMS regional financial centers face some unique pandemic planning challenges. Since the regional financial centers are production facilities with large open spaces as well as enclosed office areas, pandemic planning requires different responses for different areas. For example, in the office and common areas, cleaning and disinfecting will be a key component. An FMS official said that the employees’ response and diligence in following disease containment measures would determine the success of those measures. Scheduling of production personnel is also a challenge. Since the production of the checks must be done according to a deadline and internal controls must be maintained, schedules are not flexible. The KFC explained that its peak production workload is toward the end of the month when it is preparing the monthly Social Security benefit payments. PFC officials noted that although they could identify certain positions that could be performed remotely, there are issues surrounding personally identifiable information, which must be protected and which creates special equipment needs that must be addressed.
The PFC is exploring its telework options as part of its pandemic planning, but officials acknowledged that protecting sensitive data would be a significant consideration of any formal telework program. FMS officials had not made any arrangements for pandemic pharmaceutical interventions for the regional financial centers. According to an FMS official, Treasury asked its components to determine the number and courses of antiviral medications needed for very high-risk, high-risk, and medium-risk staff with critical professional responsibilities, consistent with HSC guidance documents. Aside from that action, FMS had not determined priorities for medical countermeasures in part because the relatively small number of essential employees required to be on-site, as well as the large open spaces in the regional facilities, makes social distancing measures more feasible. FAA, a component of DOT, expects the National Airspace System to function throughout an influenza pandemic, in accordance with the preparedness and response goal of sustaining infrastructure and mitigating impact to the economy and the functioning of society. FAA’s Interim Plan for Sustaining Essential Government Services (SEGS) During a Pandemic states that since an influenza pandemic would not damage physical infrastructure, FAA facilities would remain operational and day-to-day operations would continue based on the number of available personnel. Maintaining the functioning of the National Airspace System will require that FAA’s air traffic controllers, who ensure that aircraft remain safely separated from other aircraft, vehicles, and terrain, continue to work on-site. Under nonpandemic circumstances, FAA’s over 15,000 air traffic controllers guide more than 7,000 aircraft in the United States each hour during peak hours and about 50,000 aircraft each day through the National Airspace System.
While FAA expects the demand for air traffic control, which manages cargo as well as passenger travel, to be reduced in the event of a severe pandemic outbreak, its contingency plans assume full air traffic levels as a starting baseline. According to an FAA official, although passenger travel may be diminished, the shipping of cargo may increase. DOT and FAA pandemic plans and guidance provide the basis for the air traffic management facility pandemic plans. DOT’s Guidance to the Office of the Secretary of Transportation (OST) and Operating Administrations (OA) addresses the protection of employees and explicitly distinguishes pandemic plans from COOP plans, emphasizing a pandemic’s duration and expected absenteeism rate and stating that plans must address workforce protective policies, equipment, and measures. The guidance requires that each component use an accompanying template to develop a plan to sustain essential government services (SEGS) during a pandemic. The guidance set deadlines of March 24, 2006, for the plans and July 31, 2006, for each operating administration office to conduct an exercise to validate its individual SEGS plan. FAA’s SEGS plan defines essential services in the event of a pandemic outbreak more broadly than those of COOP, because of the longer duration of a pandemic. The essential services comprise all the services that FAA deems necessary to provide to the aviation sector and employees to keep the National Airspace System operational. The plan addresses sustaining such services amid high employee absenteeism at the peak of a pandemic wave. 
In broadening its categorization of essential services, FAA considered whether and for how long functions can be deferred; whether functions can be performed off-site; the interchangeability of occupations, including those with limited interchangeability because of certification requirements; and operational contingency measures such as devolution, functional backups, and system redundancies. FAA’s SEGS plan also addresses employee protection measures, stating that FAA will ensure the ready availability of soap and water, tissues and waste receptacles, and environmental cleaning supplies throughout work facilities. The Air Traffic Organization (ATO), FAA’s line of business responsible for the air traffic management services that air traffic controllers provide, had not yet directed facilities, such as its air route traffic control centers, to develop pandemic-specific plans or incorporate these pandemic plans into their all-hazards contingency plans. FAA officials said that all-hazards contingency and continuity plans are adapted to the facility level and are regularly implemented during natural disasters such as hurricanes. Although these plans are not specific to a pandemic, FAA officials reported that the all-hazards plans allow ATO to mitigate the impact of adverse events, including reduced staffing levels, on National Airspace System operations. FAA reported that ATO completed a national-level pandemic plan in 2006 as part of FAA’s SEGS plan that addressed essential missions and services, as well as general direction on social distancing and workforce protection. FAA is incorporating detailed HHS antiviral stockpiling guidance, issued in December 2008, into an FAA workforce protection policy that it estimates will be completed by mid-2009. ATO will then update its national-level pandemic plan with detailed protective measures for its workforce, including air traffic controllers.
ATO will also use the national-level updates to direct its facilities to develop pandemic-specific plans or enhance their preexisting all-hazards contingency plans to incorporate and implement workforce protection measures at the local field facility level. FAA was also expecting the results of a powered air purifying respirator (PAPR) feasibility study, completed in November 2008, to help inform pandemic planning at the facility level. The objective of the study was to determine whether PAPRs are suitable for long-term use and whether air traffic controllers can communicate with aircraft and other controllers while wearing the PAPRs, as controllers cannot communicate adequately while wearing N95 respirators or surgical facemasks. At this time, FAA has provided PAPRs for short-term use by air traffic controllers so that they can transfer control of air traffic to other air traffic facilities, per existing contingency plans. This use was intended primarily for situations involving asbestos in air route traffic control centers. PAPRs cost approximately $1,000 each plus filter and battery expenses, and FAA estimates the total cost for PAPRs for its air traffic controller workforce would reach $15 million. In addition to the cost, the study findings suggested there are many potential problems, including noise, visibility, and comfort, with the PAPR approach that FAA would have to address. The study concluded that FAA would need to evaluate many concerns in a more operationally realistic environment before recommending PAPRs for use by air traffic controllers. Because of the nature of these concerns, FAA officials said that the long-term use of PAPRs in a pandemic appears to be impractical. FAA also plans to augment its agencywide pandemic plan with a workplace protection policy.
Among the issues this policy would cover are the classification of employees’ workplace exposure risk and the identification of categories of critical employees that should be given upgraded personal protective equipment beyond what would be indicated by their workplace exposure risk. Once the FAA-wide workforce protection policy is determined, ATO and other lines of business will be expected to incorporate it into their line-of-business-specific pandemic plans, or revise and elaborate those policies where they exist, and implement the policy. Both DOT’s and FAA’s pandemic plans emphasize employee awareness training, and both agencies already offer information and training to employees through their intranet sites; however, the air traffic controllers we interviewed did not generally access the intranet. DOT and FAA intranet sites provide checklists for personal and family preparedness; simple cleaning and decontamination guidance; hygiene reminders; social distancing practices, such as no-handshake policies and the use of teleconferences in place of in-person meetings; and links to sites for pandemic influenza-related information from the Centers for Disease Control and Prevention, World Health Organization, and OPM. FAA’s intranet also has pandemic influenza frequently asked questions and links to the latest Centers for Disease Control and Prevention guidance on public health measures to reduce the spread of influenza and other communicable diseases. FAA plans to publish its pandemic influenza plan on its intranet. However, FAA officials responsible for pandemic planning have acknowledged that disseminating information through agency e-mail or its intranet site is not effective for communicating with air traffic controllers, as they do not have ready access to either during their shifts. FAA has additional media through which to communicate pandemic awareness to its employees.
For example, FAA has developed a “Pandemic Flu 101” training program, which is undergoing testing, and it has arrangements in place for managers to alert air traffic controllers of critical information and announcements when they are on duty. FAA also plans to provide copies of its pandemic plan to employees who do not have ready access to the intranet during duty hours. Managers will ensure that new hires review the FAA pandemic plan as well as other applicable documents and that employees undergo annual refresher training. Protecting air traffic controllers in the event of a pandemic outbreak is particularly challenging for several reasons. Air traffic controllers work in proximity to one another; the 6 feet of separation recommended for social distancing during a pandemic by the Centers for Disease Control and Prevention and OSHA is not possible for them. Figure 3 shows federal employees working in an air traffic control tower. In addition, air traffic controllers cannot use personal protective equipment such as N95 respirators or surgical masks, as these impede the clear verbal communication necessary to maintain aviation safety. FAA officials and air traffic controllers we interviewed also reported that the common workstations that air traffic controllers share are not regularly sanitized between users. Many sanitizers are caustic chemicals, and FAA must certify that any sanitizer used does not corrode the sensitive equipment necessary to ensure flight safety. FAA is exploring this issue to determine if any sanitizer can be used safely. Moreover, cross-certification of air traffic controllers is problematic. Attaining full performance levels takes controllers up to 3 years, and air traffic controllers proficient in one area of airspace cannot replace controllers proficient in another airspace without training and certification. This could result in reduced air traffic management services.
Finally, FAA regulations on medication for air traffic controllers are strict because certain medications may impair an air traffic controller’s performance, and the Office of Aviation Medicine’s policy on the prophylactic use of Tamiflu by on-duty controllers was still in draft as of March 2009. An FAA official said that FAA would finalize the policy for this use once the workforce protection policy is approved. Although the Implementation Plan includes action items aimed at developing and tracking progress relative to the national response for pandemic preparedness, there is no mechanism in place to track the progress of federal agencies’ workforce preparedness efforts. Action items in the Implementation Plan specify roles and responsibilities as well as deadlines and performance measures, and the HSC has issued public progress reports on the status of the action items. The survey results from the 24 CFO Act agency pandemic coordinators, as well as information from the case study agencies, indicate that a wide range of pandemic planning activities are under way and that all of the agencies are taking steps to some degree to protect their workers in the event of a pandemic. However, agencies’ progress is uneven, and while we recognize that the pandemic planning process is evolving and is characterized by uncertainty and constrained resources, some agencies are clearly in the earlier stages of developing their pandemic plans and of being able to provide health protection commensurate with the exposure risk their essential employees may experience. For example, our previous work showed that agencies’ plans lack important elements, such as identifying which essential veterinarian functions must be performed on-site and how they will be carried out if absenteeism reaches 40 percent—the rate predicted at the height of the pandemic and used for planning purposes. 
An example of an essential veterinarian function is helping to ensure the safety of meat and poultry products. Under the HSC’s Implementation Plan, DHS was charged with, among other things, monitoring and reporting to the Executive Office of the President on the readiness of departments and agencies to continue their operations while protecting their workers during an influenza pandemic. Although the plan directed that this report be prepared, the report was not included as a specific action item. DHS officials reported that in late 2006 or early 2007 they asked HSC representatives with direct responsibility for the Implementation Plan for clarification on the issue of reporting agencies’ ability to continue their operations while protecting their workers during a pandemic. DHS officials said they were informed that they did not have to prepare a report. Instead, according to White House counsel representatives, the HSC planned to take on the monitoring role through its agency pandemic plan certification process. The HSC, as noted earlier, had requested that agencies certify that they were addressing the applicable elements of a pandemic checklist in their plans in late 2006 and again in late 2008. As originally envisioned in the Implementation Plan, the report was to be directed to the Executive Office of the President. There was no provision in the plan, however, for the report to be made available to the Congress. We have previously reported on the importance of internal control monitoring to assess the quality of performance over time. Without appropriately designed monitoring and reporting, the President and the Congress cannot fully assess the ability of the agencies to continue their operations while protecting federal employees in the event of a pandemic. 
The HSC’s certification process, as implemented, did not provide for monitoring and reporting as envisioned in the Implementation Plan regarding agencies’ abilities to continue operations in the event of a pandemic while protecting their employees. Although the council had asked agencies to certify that they were addressing the applicable elements of a pandemic planning checklist, the process did not include any assessment of, or reporting on, agencies’ progress as was the case for the action items in the plan. Moreover, according to agency officials we interviewed, this certification process was the only effort to check on individual agencies’ pandemic plans. Given the threat of pandemic influenza, heightened by recent events, it is imperative that agencies have pandemic plans that ensure their ability to continue operations while protecting their workers who serve the American public. The survey of the 24 CFO Act agencies showed that while some have progressed in their planning to address how their employees’ safety and health will be protected and have identified the essential functions they will maintain in the face of significant and sustained absenteeism, several agencies have yet to complete such necessary initial steps. It is important to recognize that agency pandemic plans will continue to be revised and improved with additional time and information regarding pandemic preparedness and that some agencies face greater complexities in their planning than others. However, some agencies are not close to having operational pandemic plans, particularly at the facility level. Federal agencies must progress to establish operational plans to ensure the maintenance of essential services during times in which widespread disease will affect the health care system, the broader economy, and society as a whole. 
The three case study agencies illustrate that filtering pandemic plans down to individual facilities and making them operational present challenges for the agencies. Because the primary threat to continuity of operations during a pandemic is the threat to employee health, agencies’ plans to protect their workforce need to become operational at the facility level. However, unlike other action items in the Implementation Plan that address the federal response to pandemic influenza, there is no real monitoring mechanism in place to ensure that agencies’ workforce pandemic plans are complete and that the agencies can protect their workers in the event of a pandemic. Such monitoring should ensure that federal agencies are making progress in developing their plans to protect their workforce in the event of a pandemic and that they have the information and guidance they need to develop operational pandemic plans. The HSC has been serving as the hub of federal preparedness activities for pandemic flu, coordinating activities across HHS, DHS, and other federal agencies. However, the council’s certification process has not included any assessment or reporting on the status of agency plans. Having DHS monitor and report on the status of agencies’ pandemic plans to protect the safety and health of their employees while maintaining essential operations could enhance agencies’ accountability for this responsibility and serve as an effective way of tracking agencies’ progress in making their pandemic plans operational. Although the directive in the Implementation Plan required DHS to report to the Executive Office of the President, the Congress may want DHS to report to it on agencies’ progress on their pandemic plans to allow it to carry out its oversight role. 
Given the important role that the federal government will play in responding to a pandemic, planning to ensure the safety and well-being of federal employees is vital to the success of government operations. To help support its oversight responsibilities, the Congress may want to consider requiring DHS to report to it on agencies’ progress in developing and implementing their pandemic plans, including any key challenges and gaps in the plans. To ensure agencies’ greater accountability in developing operational plans that will protect their workforce in the event of a pandemic, we recommend that the HSC request that the Secretary of Homeland Security monitor and report to the Executive Office of the President on the readiness of agencies to continue their operations while protecting their workers during an influenza pandemic. The reporting should include an assessment of the agencies’ progress in developing their plans, including any key challenges and gaps in the plans. The request should also establish a specific time frame for reporting on these efforts. We provided the Acting Executive Secretary of the HSC and the Secretary of Homeland Security with a draft of this report for review and comment. The Acting Executive Secretary of the HSC commented that the report makes useful points regarding opportunities for enhanced monitoring and reporting within the executive branch concerning agencies’ progress in developing plans to protect their workforce. She noted that the council will give serious and careful consideration to the report findings and recommendations in this regard. The Under Secretary for Management at DHS said that in the coming weeks and months, the department would be involved in efforts to ensure that government entities are well prepared for what may come next. She expressed her appreciation for the report’s findings and recommendations, which she said would contribute to the department’s efforts. 
The HSC’s written comments are reprinted in appendix III, and DHS’s comments are reprinted in appendix IV. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. We are sending copies of this report to the Homeland Security Council, the Department of Homeland Security, the Department of Justice, the Department of the Treasury, the Department of Transportation, relevant congressional committees, and other interested parties. The report also is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-6543 or [email protected]. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V. Our objectives were to determine (1) the extent to which federal agencies have reported plans under way to protect their workforce should an influenza pandemic occur and have reported identifying essential functions, other than first response, that cannot be accomplished remotely in the event of pandemic influenza; (2) the plans selected agencies have established for certain occupations performing essential functions other than first response; and (3) opportunities to improve federal agencies’ planning so that they can protect their workforce while maintaining their essential functions in the event of a pandemic. To address the first objective, we developed and administered a Web-based survey. Our intent was to survey the pandemic coordinators from the 24 agencies covered by the Chief Financial Officers Act of 1990. 
We developed the survey questions based on guidelines for worker protection from the Homeland Security Council (HSC), Occupational Safety and Health Administration, Department of Health and Human Services (HHS), and Federal Emergency Management Agency. We asked the pandemic coordinators questions about (1) their pandemic plans, (2) the department- and component-level functions the agencies consider essential in the event of a pandemic that are not first response and cannot be continued remotely, (3) measures planned to protect workers who will not be able to work remotely, (4) social distancing strategies, (5) information technology testing, and (6) communication of human capital pandemic policies. Furthermore, in addressing the first objective, we reviewed national pandemic plans, prior GAO work assessing influenza, and additional relevant documents that assess influenza, public health, and other emergency preparedness and response issues. We defined essential functions based on Federal Continuity Directive 1 as those functions that enable an organization to provide vital services, exercise civil authority, maintain the safety of the public, and sustain the industrial and economic base during disruption of normal operations. We defined first responders as emergency personnel called to the scene of a crisis or responding to emergency calls for assistance and medical personnel. The scope of our work did not include an independent evaluation of the effectiveness of the workforce protection measures recommended by federal lead pandemic planning agencies. From April 8 through April 17, 2008, we conducted a series of pretests with current and former federal pandemic coordinators and emergency managers to further refine our questions, clarify any ambiguous portions of the survey, and identify potentially biased questions. 
Upon completion of the pretests and the development of the final survey questions and format, we sent an announcement of the upcoming survey to the 24 pandemic coordinators on May 13, 2008. These pandemic coordinators were notified that the survey was available online on May 15, 2008. We sent a reminder e-mail message to nonrespondents on May 28, 2008, and conducted follow-up calls over the next few weeks. The survey was available online until July 25, 2008, and the results were confirmed or updated in early 2009. All 24 pandemic coordinators completed the survey for a response rate of 100 percent. To address the second and third objectives and to provide a more in-depth examination of agencies’ pandemic planning, we reviewed agency-level pandemic planning for protection of employees for three case study occupations. Our case studies included correctional workers from the Department of Justice’s Bureau of Prisons (BOP); production staff responsible for disbursing federal payments from the Department of the Treasury’s Financial Management Service (FMS); and air traffic controllers from the Department of Transportation’s Federal Aviation Administration (FAA). The primary criteria for selecting the case studies were that they represent non-first response occupations involved in an essential function that federal employees need to provide on-site. In addition, we excluded from our case study selections occupations in agencies that have a primary role in the federal response to pandemic influenza. To assess the extent to which the case study agencies, BOP, FMS, and FAA have operational plans to protect their workforce, we reviewed agency and component pandemic plans and conducted interviews with agency officials, employees in the case study occupations, and facility managers and emergency planners for the sites at which the employees work. 
We also met with union representatives from the American Federation of Government Employees, the National Treasury Employees Union, and the National Air Traffic Controllers Association to get their perspective on plans to protect the federal workforce in the event of a pandemic. In addition, we conducted interviews with the executive directors of the Kansas City, Minnesota, and Oklahoma Federal Executive Boards (FEB) to better understand federal planning for workforce protection in the event of a pandemic at the regional level. Minnesota and Oklahoma were selected because we had identified them in a previous report as leaders in pandemic planning; Kansas City was selected because of the large population of federal workers in its jurisdiction, including many in our case study occupations. To better understand the challenges and assess the progress made in planning to protect employees, we visited several facilities where the employees in our case study occupations worked. Kansas City, Kansas; Kansas City, Missouri; and Leavenworth, Kansas, were selected as site visit locations because all of the case study agencies had facilities in the metropolitan statistical area that were also in the jurisdiction of an FEB, namely the Kansas City FEB. We selected as site visit facilities the United States Penitentiary in Leavenworth, Kansas, and BOP’s North Central Regional Office in Kansas City, Kansas, as the supporting regional office for that facility; the Kansas City Financial Center in Kansas City, Missouri; and FAA’s Central Regional Office in Kansas City, Missouri. We also visited the Allenwood Federal Correctional Complex in Allenwood, Pennsylvania. We selected FAA air traffic facilities to cover the array of types of facilities in which air traffic controllers work. 
We visited the Ronald Reagan Washington National Airport in Arlington, Virginia; the Potomac Terminal Radar Approach Control Facility in Warrenton, Virginia; the Washington Air Route Traffic Control Center in Leesburg, Virginia; and the Air Traffic Control Systems Command Center in Herndon, Virginia. Although we did not conduct a site visit, the Philadelphia Financial Center provided us with written answers to our questions. We conducted interviews with officials from HHS, the Department of Homeland Security (DHS), the Office of Personnel Management (OPM), and the Department of Labor (DOL). We met with HHS officials to get a better understanding of how access to antiviral medications and vaccines by federal agencies is envisioned in the event of a pandemic. HHS is responsible for the overall coordination of the public health and medical emergency response during a pandemic. DHS has responsibility for coordinating the overall domestic federal response during an influenza pandemic, including implementing policies that facilitate compliance with recommended social distancing measures, developing a common operating picture for all federal agencies, and ensuring the integrity of the nation’s infrastructure. OPM has responsibility for providing direction to the FEBs and the Chief Human Capital Officers Council as well as responsibility for developing human capital policy guidance for federal employees in the event of a pandemic. DOL’s Occupational Safety and Health Administration has responsibility for promoting the safety and health of workers. We also met with White House counsel from the past and current administrations representing the HSC to determine what role the council played in ensuring uniform pandemic preparedness across the U.S. government. We conducted this performance audit from January 2008 to April 2009 in accordance with generally accepted government auditing standards. 
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, William J. Doherty, Assistant Director, and Judith C. Kordahl, Analyst-in-Charge, supervised the development of this report. Alisa Beyninson, Ryan Little, Ulyana Panchishin, and Nicholas Petrovski made significant contributions to all aspects of this report. David Dornisch and Andrew Stavisky assisted with the design and methodology. Karin Fangman provided legal counsel. Mallory Barg Bulman verified the information in the report.
Protecting federal workers essential to ensuring the continuity of the country's critical operations will involve new challenges in the event of a pandemic influenza outbreak. As requested, this report discusses (1) the extent to which agencies have made pandemic plans to protect workers who cannot work remotely and are not first responders, (2) the pandemic plans selected agencies have for certain occupations performing essential functions other than first response, and (3) the opportunities to improve agencies' workforce pandemic plans. GAO surveyed pandemic coordinators from 24 agencies and selected three case study occupations for review: federal correctional workers, staff disbursing Treasury checks, and air traffic controllers. The Homeland Security Council's (HSC) 2006 National Strategy for Pandemic Influenza Implementation Plan required federal agencies to develop operational pandemic plans, and responses from the pandemic coordinators of the 24 agencies GAO surveyed indicate that a wide range of pandemic planning activities are under way. However, the responses also showed that several agencies had yet to identify essential functions during a pandemic that cannot be performed remotely. In addition, although many of the agencies' pandemic plans rely on telework to carry out their functions, several agencies reported testing their information technology capability to little or no extent. GAO's three case study agencies also showed differences in the degree to which their individual facilities had operational pandemic plans. The Bureau of Prisons' correctional facilities had only recently been required to develop facility pandemic plans. Nevertheless, the Bureau of Prisons has considerable experience limiting the spread of infectious disease within its correctional facilities and had also made arrangements for antiviral medications for a portion of its workers and inmates. 
The Department of the Treasury's Financial Management Service, which has production staff involved in disbursing federal payments such as Social Security checks, had pandemic plans for its four regional centers and had stockpiled personal protective equipment such as respirators, gloves, and hand sanitizers at the centers. Air traffic control facilities, where air traffic controllers work, had not yet developed facility pandemic plans or incorporated pandemic plans into their all-hazards contingency plans. The Federal Aviation Administration had recently completed a study to determine the feasibility of the use of respirators by air traffic controllers and concluded that their long-term use during a pandemic appears to be impractical. There is no mechanism in place to monitor and report on agencies' workforce pandemic plans. Under the National Strategy for Pandemic Influenza Implementation Plan, the Department of Homeland Security (DHS) was required to monitor and report on the readiness of agencies to continue operations while protecting their employees during an influenza pandemic. The HSC, however, informed DHS in late 2006 or early 2007 that no specific reports on this were required to be submitted. Rather, the HSC requested that agencies certify to the council that they were addressing in their plans the applicable elements of a pandemic checklist in 2006 and again in 2008. This process did not include any assessment or reporting on the status of agency plans. Given agencies' uneven progress in developing their pandemic plans, monitoring and reporting would enhance agencies' accountability to protect their employees in the event of a pandemic. GAO has previously reported on the importance of internal control monitoring to assess the quality of performance over time. 
Without appropriately designed monitoring and reporting, the President and the Congress cannot fully assess the ability of the agencies to continue their operations while protecting their federal employees in the event of a pandemic.
This section discusses EPA’s risk assessment and risk management practices and the May 2009 IRIS process. EPA’s IRIS Program is an important source of information on health effects that may result from exposure to chemicals in the environment. As figure 1 shows, the toxicity assessments in the IRIS database fulfill the first two critical steps of the risk assessment process—providing qualitative hazard identification and dose-response assessment (see definitions below). IRIS information can then be used with the results of exposure assessments (typically conducted by EPA’s program or regional offices) to provide an overall characterization of the public health risks for a given chemical in a given situation. EPA defines a risk assessment, in the context of human health, as the evaluation of scientific information on the hazardous properties of environmental agents (hazard characterization), the dose-response relationship (dose-response assessment), and the extent of human exposure to those agents (exposure assessment). In final form, a risk assessment is a statement regarding the probability that populations or individuals so exposed will be harmed and to what degree (risk characterization). The development of risk assessments is directly dependent on the development of toxicity assessments such as those developed by the IRIS Program. A typical IRIS toxicity assessment is based on two sequential analyses: qualitative hazard identification and quantitative dose-response assessment. Among other things, a hazard identification identifies health hazards that may be caused by a given chemical at environmentally relevant concentrations; this identification describes the potential noncancer and cancer health effects of exposure to a chemical that research studies have suggested or determined. 
For cancer effects, EPA describes the carcinogenic potential of a chemical in a narrative which includes one of five weight-of-the-scientific-evidence descriptors, ranging from “carcinogenic to humans” to “not likely to be carcinogenic to humans.” The second analysis is the dose-response assessment, which characterizes the quantitative relationship between the exposure to a chemical and the resultant health effects; this assessment describes the magnitude of hazard for potential noncancer effects and increased cancer risk resulting from specific exposure levels to a chemical or substance. The quantitative dose-response analysis relies upon credible research data, primarily from either animal (toxicity) or human (epidemiology) studies. The noncancer dose-response assessments may include an oral reference dose (RfD)—an estimate of the daily oral exposure to a chemical that is likely to be without an appreciable risk of deleterious effects during a person’s lifetime—expressed in terms of milligrams per kilogram per day and an inhalation reference concentration (RfC)—an estimate of the daily inhalation exposure to a chemical that is likely to be without an appreciable risk of deleterious effects during a person’s lifetime— expressed in terms of milligrams per cubic meter. The focus of IRIS toxicity assessments has been on the potential health effects of long-term (chronic) exposure to chemicals. According to OMB, EPA is the only federal agency that develops qualitative and quantitative assessments of both cancer and noncancer risks of exposure to chemicals, and EPA does so largely under the IRIS Program. The risk characterization information, which is derived from toxicity and exposure assessments—exposure assessments identify the extent to which exposure actually occurs—can be used to make risk management decisions designed to protect public health. 
For example, IRIS assessments support scientifically sound decisions, policies, and regulations under such key statutes as the Clean Air Act, the Safe Drinking Water Act, and the Clean Water Act, as well as for setting Superfund cleanup standards of hazardous waste sites. Risk management, as opposed to risk assessment, involves integrating the risk characterization information with other information—such as economic information on the costs and benefits of mitigating the risk, technological information on the feasibility of managing the risk, and the concerns of various stakeholders—to decide when actions to protect public health are warranted. More specifically, an initial risk management decision would be to determine whether the health risks identified in a chemical risk assessment warrant regulatory or other actions. As a result, the development of IRIS assessments is of key interest to stakeholders, such as other federal agencies and their contractors, chemical companies, and others who could be affected if regulatory actions were taken. That is, stakeholders could face increased cleanup costs and other legal liabilities if EPA issued an IRIS assessment for a chemical that resulted in a risk management decision to regulate the chemical to protect the public. EPA’s process for developing IRIS assessments—established in May 2009—consists of seven steps. In announcing its revised process in May 2009, EPA noted that the new process would ensure that the majority of assessments would be completed within 2 years (23 months)—a significantly shorter time than the estimated completion time frame of about 6 to 8 years under the previous process. We note that the seven steps are preceded by a literature search and data call-in, which is not included as part of the process or its time frames. 
Results of the literature search are posted on the IRIS website and announced in the Federal Register, along with a request for information—the data call-in—about any pertinent studies not listed. According to EPA officials, the literature search and data call-in are not part of the process because the agency does not dedicate full-time staff to them. EPA officials told us that after the literature search, they place IRIS assessments in one of three categories—standard, moderately complex, or exceptionally complex—on the basis of such factors as the number of available scientific studies on the chemical, the number of potential health effects identified in these studies, the staff resources required to complete the assessment, and the level of stakeholder interest. However, this process, as written, does not distinguish among different types of assessments with varying complexity. Table 1 outlines the steps in the IRIS assessment process, along with the planned time frames established by EPA. All IRIS assessments undergo external peer review, but exceptionally complex assessments are generally peer reviewed by EPA’s Science Advisory Board panels and in some cases by National Academies panels. These peer reviews typically require more planning and take longer than the reviews for less complicated assessments. Peer reviews for all other assessments are typically conducted by expert panels that are independently assembled by an EPA contractor. All panels, including Science Advisory Board and National Academies panels, are composed of individuals with expertise in various scientific and technical disciplines who retain their primary affiliations in academia, industry, state government, and environmental organizations. As we reported in 2008, an overarching factor that can affect EPA’s ability to complete IRIS assessments in a timely manner is the compounding effects of delays. 
Once a delay in the assessment process occurs—for example, suspending work on an assessment to wait for additional studies—work that has been completed can become outdated, necessitating rework of some or all of the steps in the assessment process. Even a single delay can have far-reaching, time-consuming consequences, in some cases requiring that the assessment process essentially start over. EPA’s May 2009 IRIS assessment process addresses some of the problems we identified in our March 2008 report. However, progress in other areas has been limited: EPA’s initial gains in productivity under the revised process have not been sustained; the agency has not significantly reduced its workload of ongoing assessments, which would enable it to routinely start new assessments and keep existing assessments current; and it has not met established time frames for IRIS assessment process steps. EPA has addressed concerns we raised in our March 2008 report regarding the transparency of the IRIS process. Since May 2009, all federal agency and White House office comments from both the interagency science consultation and discussion (steps 3 and 6b of the IRIS process) are available to the public on EPA’s IRIS website. In addition, EPA has made publicly available documents that show EPA’s responses to selected “major” interagency comments for all draft IRIS assessments that have completed an interagency review step since June 2011. As we have previously reported, we believe that interagency coordination can enhance the quality of EPA’s IRIS assessments. Previously, OMB considered its comments and changes, and those of other federal agencies, to be “deliberative”—that is, they were not part of the public record. We believe the input from other federal agencies is now obtained in a manner that better ensures that EPA’s scientific analysis is given appropriate weight. 
As a result, stakeholders, including EPA regional and program offices, the public, and industry, can now see which other federal agencies comment and the nature of their comments, making IRIS assessments more transparent. Transparency is especially important because agencies providing input, such as DOD and NASA, may have a vested interest in the outcome of the assessment should it lead to regulatory or other actions. For example, these agencies may be affected by the potential for increased environmental cleanup costs and other legal liability if EPA issued an IRIS assessment that resulted in a decision to regulate a chemical to protect the public. Officials we spoke with from other federal agencies—including DOD, NASA, and the Department of Health and Human Services (HHS)—all agreed that making their comments publicly available was a good practice.

In addition, EPA now manages the interagency science consultation and discussion (steps 3 and 6b of the IRIS process, formerly OMB-managed interagency reviews). As we recommended in 2008, the process now includes time limits for all parties, including OMB and other federal agencies, to provide comments to EPA on draft assessments. Prior to May 2009, OMB managed these steps, and EPA was not allowed to proceed with assessments until OMB notified EPA that it had sufficiently responded to comments from OMB and other federal agencies. EPA has also streamlined its IRIS process, as we recommended in our 2008 report, by consolidating some process steps and eliminating others that had provided opportunities for other federal agencies to suspend IRIS assessments to conduct additional research.

Shortly after it implemented its revised IRIS assessment process in May 2009, EPA experienced a surge of productivity in terms of the number of IRIS assessments it issued.
Specifically, from May 2009 through September 30, 2011, EPA completed 20 IRIS assessments—more than doubling the total productivity it achieved during fiscal years 2007 and 2008. However, 16 of these were completed in the first year and a half of implementing the revised process, and productivity fell sharply during fiscal year 2011, when EPA issued only 4 IRIS assessments (see fig. 2), significantly short of its original plan to complete 20 assessments for the year, a goal it had revised to 9 as of August 2011. In addition, EPA is unlikely to meet its fiscal year 2012 goal of completing 40 assessments. As of September 30, 2011, 12 of the 40 assessments that EPA plans to complete in fiscal year 2012 are still being drafted (step 1 of the IRIS process). See appendix II for the status of chemicals in the IRIS assessment process as of September 30, 2011. On the basis of the planned time frames EPA established under its revised process, once these 12 IRIS assessments are drafted, EPA will require at least 345 days, or 11½ months, to complete the remaining IRIS process steps and issue the assessments—making it unlikely that they will be completed in fiscal year 2012.

The increased productivity occurring after May 2009 does not appear to be entirely attributable to the revised IRIS assessment process. According to our analysis of EPA data, the agency’s ability to complete more assessments was not due to a fundamental gain in how quickly assessments are completed, but rather to EPA’s ability to clear up the backlog of assessments that had undergone work under the previous IRIS process and had been delayed for multiple reasons. Most of the assessments completed from May 2009 through September 2011 had been in process 5 years or longer and thus had already passed through some key process steps before the implementation of the revised process.
In addition, most of these completed IRIS assessments were standard or moderately complex—that is, they were less challenging to complete than assessments of more complex chemicals. Specifically, 17 of the 20 assessments issued from May 2009 through September 30, 2011, were in process for 5 years or longer, and only 2 of the 20 were exceptionally complex assessments (see table 2). For example, one exceptionally complex assessment that EPA did complete was for trichloroethylene (TCE). For information on TCE, as well as on some other key chemicals for which EPA has not completed IRIS assessments, see appendix III.

As of September 30, 2011, EPA had 55 IRIS assessments ongoing and 14 on hold—down from the 88 assessments that were in various stages of development when it implemented its revised IRIS assessment process in May 2009. Since May 2009, EPA has undertaken 6 new assessments, dropped 5 assessments that it determined were no longer required, completed 20 assessments, and continued to have 14 assessments on hold (see table 3). According to EPA officials, assessments that have been put on hold will be resumed when the agency has resources available to staff them. However, this tally of IRIS assessments does not reflect the true extent of EPA’s workload or the backlog of demand for IRIS assessments.

Beyond the 55 ongoing IRIS assessments and 14 on hold, the demand for additional IRIS assessments is unclear. With existing resources devoted to addressing its current workload of ongoing assessments, EPA has not been in a position to routinely start new assessments. In late 2010, for the first time since 2007, EPA solicited nominations for new IRIS assessments from EPA program and regional offices, as well as from the public and federal agencies that participate in IRIS interagency reviews. However, as of September 30, 2011, EPA officials had not decided which chemicals to include on the IRIS agenda and thus in their workload.
Moreover, instead of nominating new chemicals for assessment in 2010, one regional office requested that the IRIS Program focus its efforts on completing assessments currently under way. In addition, in 2007, the Office of Air and Radiation—which develops national programs, policies, and regulations for controlling air pollution and radiation exposure—requested that ongoing assessments be expedited for 28 chemicals that it identified as high-priority and required to fulfill its regulatory mandates. As of September 30, 2011, 17 of the 28 assessments the office identified were ongoing, and 3 were on hold. See appendix IV for EPA’s expected completion dates for IRIS assessments currently in the assessment process.

In addition, other assessments in the IRIS database may need to be updated. As we reported in March 2008, EPA data from 2001 through 2003 indicated that 287 of the assessments in the IRIS database at that time might need to be updated. In October 2009, EPA announced in the Federal Register the establishment of the IRIS Update Project. The stated purpose of the project was to update IRIS toxicity values, such as oral reference doses or inhalation reference concentrations, that are more than 10 years old. However, according to EPA officials, little progress has been made toward updating these assessments since the project was announced.

We note that even if EPA were to overcome the significant productivity difficulties it has experienced in recent years and meet its goal of completing 40 assessments in fiscal year 2012, it is not clear that this level of productivity would meet the needs of EPA program offices and other users.

IRIS assessments have also taken longer than the time frames established under the revised IRIS process: since the revised process was implemented, most IRIS assessments have exceeded the established time frames for each step of the process.
EPA officials, however, told us that the time frames established for the steps in the revised IRIS assessment process apply only to standard assessments—and not to moderately or exceptionally complex assessments. While EPA officials have said that they are trying to hold moderately complex assessments to the established time frames, EPA does not have a written policy that describes the applicability of these time frames or written criteria for designating IRIS assessments as standard, moderately complex, or exceptionally complex. Consequently, it is unclear how IRIS users will know which assessments fall into each category and what time frames will be required to complete them.

According to EPA officials, NCEA management, including IRIS Program management, is tracking the time it takes for each IRIS assessment to complete the various steps in the IRIS process. However, EPA has not yet analyzed these data to determine whether the time frames established for each step, or for the overall 23-month process, are realistic. According to EPA officials, they do not yet have the data needed to draw conclusions regarding completion time frames.

On the basis of our analysis of EPA data, however, we determined how long each IRIS process step was taking on average compared with the time frames established for each step under the May 2009 revised process. We performed this analysis for the 55 assessments that were ongoing as of September 30, 2011, and the 20 assessments that were completed after May 2009. Because none of the 20 IRIS assessments completed from May 2009 through September 2011 were initiated after the revised process was implemented, it was not possible to fully evaluate the extent to which EPA is adhering to the new 23-month time frame. Further, we combined our analysis of steps 4 and 5 because EPA data do not indicate when step 4 ends and step 5 begins, and we combined steps 6 and 7 for the same reason.
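The kind of step-duration comparison described above can be sketched in a few lines of code. The step labels, established time frames, and observed durations below are illustrative placeholders, not EPA’s actual time frames or tracking data.

```python
from statistics import mean

# Established time frames (in days) for selected IRIS process steps.
# These values are illustrative assumptions, not EPA's published figures.
established_days = {
    "step 1: draft development": 345,
    "steps 4-5: agency review": 60,
    "steps 6-7: interagency discussion and finalization": 75,
}

# Hypothetical observed durations (in days) for several assessments,
# as might be exported from a tracking system.
observed_days = {
    "step 1: draft development": [400, 520, 380],
    "steps 4-5: agency review": [62, 90, 55],
    "steps 6-7: interagency discussion and finalization": [80, 70, 120],
}

# Compare the average observed duration for each step with its time frame.
for step, frame in established_days.items():
    avg = mean(observed_days[step])
    status = "exceeds" if avg > frame else "within"
    print(f"{step}: average {avg:.0f} days vs. {frame}-day time frame ({status})")
```

Combined steps (such as 4 and 5 above) can simply be treated as a single entry, mirroring the approach taken in the analysis when the data do not mark the boundary between steps.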
According to our analysis, on average, assessments of all types have taken longer than the established time frames for every step in the IRIS process (see table 4). Some other federal agencies that participate in interagency reviews expressed concern that, in some cases, time and resource constraints present challenges as they try to meet EPA’s time frames for the two interagency review steps. In addition to imposing the time limits established under the revised process, EPA officials said that, beginning in April 2011, the agency accelerated the pace at which draft assessments are sent through the interagency review steps, in an effort to increase productivity and complete more IRIS assessments. However, officials from other federal agencies—including HHS and DOD—told us that they have advised EPA that the accelerated pace of interagency reviews in the second half of fiscal year 2011 strained their resources. In addition, the official from NASA told us that the increased pace of reviews is not only straining the agency’s resources but also affecting its ability to provide in-depth independent technical reviews and interagency comments. EPA officials also told us that the interagency reviewer at NASA is so concerned with the pace of the interagency reviews under the revised process that NASA officials have asked OMB to form an interagency work group to discuss the reviews.

EPA faces both long-standing and new challenges in implementing the IRIS Program. First, the National Academies has identified recurring issues with how the IRIS Program develops and presents its assessments and has suggested improvements. Second, EPA has not consistently provided reliable information on ongoing and planned IRIS assessments to IRIS users. Third, unresolved discussions with OMB regarding EPA’s responses to Data Quality Act challenges may impede EPA’s ability to issue completed IRIS assessments.
The National Academies and EPA’s Science Advisory Board have identified several recurring issues with how EPA develops and presents IRIS assessments. For example, in its April 2011 independent scientific review of EPA’s draft IRIS assessment of formaldehyde, the National Academies provided a critique of EPA’s development and presentation of draft IRIS assessments. Overall, the National Academies noted some recurring methodological problems in the draft IRIS assessment of formaldehyde. The report also identified recurring issues concerning clarity and transparency in EPA’s development and presentation of its draft IRIS assessments. The National Academies and Science Advisory Board have identified similar clarity and transparency issues in peer review reports over the past 5 years. Some of these reports stated that EPA should more clearly explain its reasons for including or excluding the scientific studies supporting draft IRIS assessments. In addition, some reports stated that EPA should more transparently present the justifications for its methodological approaches.

Independent of its review of the formaldehyde assessment, the National Academies also provided a “roadmap for revision” that made suggestions for improvements to the IRIS draft development process, during which EPA selects and evaluates evidence (the literature search) and drafts an assessment (step 1).
The National Academies’ “roadmap for revision” suggested that EPA take the following steps, among others: (1) use clear, standardized methods to identify and select study evidence; (2) use a standardized approach to evaluate and describe study strengths and weaknesses and the weight of evidence, describe and justify the assumptions and models used, and adopt a standardized approach to characterizing uncertainty factors; and (3) present methodology and findings more clearly and more concisely through better use of graphics and tables, and use a template to facilitate a consistent description of the approach to study selection.

The National Academies’ report on the draft IRIS assessment of formaldehyde specifically noted that EPA should not delay the finalization of the assessment in order to implement any of the suggestions it made regarding the overall IRIS process. As of September 30, 2011, according to EPA officials, the agency is revising the assessment in response to the National Academies’ suggestions, but the status page on EPA’s website for formaldehyde lists “TBD”—to be determined—as the posting date for the final assessment.

In July 2011, EPA announced that it planned to respond to the National Academies’ suggestions by implementing changes to the way it develops draft IRIS assessments. In announcing the planned changes, EPA stated that it would take the following actions: enhance its approach to identifying and selecting scientific study evidence; provide more complete documentation of its approach to evaluating scientific study evidence, indicate which criteria were most influential in its evaluation of the weight of evidence, and concisely state the criteria used to include or exclude studies; continue to use existing IRIS guidelines to enhance the clarity and transparency of its data evaluation and presentation of findings and conclusions; and eliminate the need for some report text by using standardized tables and portraying toxicity values graphically.
According to EPA officials, in implementing these changes, EPA will subject those assessments that are in earlier stages of development to more extensive changes than those in later stages of development. It will change the latter “as feasible” without repeating steps in the overall IRIS process. However, EPA has not provided a more detailed description of how the National Academies’ suggestions will apply to each of the assessments in its current inventory of IRIS assessments. Without a more precise description of which drafts would be considered “in the earlier stages of development” or what “more extensive changes” would entail, it is too soon to provide a comprehensive assessment of EPA’s approach. In addition, it is not transparent to stakeholders and other interested parties which assessments will be subject to these changes and which will not.

EPA established the Board of Scientific Counselors (BOSC), an advisory committee composed of non-EPA technical experts from academia, industry, and environmental communities, to provide independent advice, information, and suggestions to the Office of Research and Development (ORD) research program—which houses the IRIS Program. Part of BOSC’s mission is to evaluate and provide advice concerning the use of peer review within ORD to sustain and enhance the quality of science in EPA. It is unclear whether BOSC will have a role in reviewing EPA’s response to the National Academies’ suggestions.

We reviewed two IRIS assessments—one completed and one still in draft form—that reflect changes EPA has made in response to the National Academies’ suggestions. First, for its assessment of urea, finalized in July 2011, EPA streamlined the report by moving sections of text from the body to an appendix, which shortened the body of the assessment from 89 to 57 pages, making it more concise.
In addition, we reviewed the draft IRIS assessment of diisobutyl phthalate (DIBP), which EPA provided to us while it was undergoing agency review (step 2). The draft reflects some of the National Academies’ suggestions regarding presentation. For example, it includes (1) descriptive and pictorial explanations of the study selection methods used; (2) tables that, among other things, give side-by-side comparisons of studies considered in determining the oral reference dose for the chemical; and (3) brief descriptions of the strengths and weaknesses of various studies considered. For these two assessments, it appears that EPA has begun to enhance the readability of its assessments by making changes in line with the suggestions made by the National Academies.

EPA uses two primary mechanisms—the IRIS agenda and a website feature known as IRISTrack—to make information on the status of IRIS assessments available to EPA program and regional offices, other federal agencies, and the public. EPA has not effectively used these two mechanisms, or a third that we recommended in March 2008—that the agency provide a 2-year notice of its intent to assess specific chemicals—to consistently provide reliable information on IRIS assessments to stakeholders and other interested parties.

First, EPA has not published an IRIS agenda in the Federal Register—identifying the chemicals that EPA plans to assess (both new and ongoing assessments)—since it announced its 2008 IRIS agenda in December 2007. EPA started developing an annual IRIS agenda and providing it to the public in a notice in the Federal Register in 1997. In late 2010, EPA began to solicit nominations for its fiscal year 2011 IRIS agenda from its program and regional offices, as well as from the public and federal agencies that participate in IRIS interagency reviews. However, as of September 30, 2011, EPA had not published its fiscal year 2011 agenda.
In addition, some of the information provided in the Federal Register notices about the IRIS agenda has been incomplete. For example, an October 2010 Federal Register notice contained a list of chemicals currently on the IRIS agenda but did not distinguish between chemicals the agency was actively assessing and those it had designated for future assessment. We reported on similar issues in March 2008—noting that EPA had identified some suspended assessments as ongoing.

Second, EPA has not kept information on the status of individual ongoing assessments up to date in IRISTrack—an issue we also reported on in 2008. IRISTrack, a feature of EPA’s website, is intended to provide stakeholders and other interested parties with information on draft IRIS assessments—specifically, estimated start and end dates for steps in the IRIS process. For example, officials from the Office of Water indicated that their office relies heavily on IRISTrack for information about the status of IRIS assessments. In addition to not updating IRISTrack, EPA recently removed some key information it presented there. Now, in some cases, the IRISTrack date for the beginning of draft development (step 1) understates the actual duration of an assessment—sometimes by many years. For example, IRISTrack indicates that draft development for the dioxin assessment began in the first quarter of fiscal year 2009; in fact, as we have reported, EPA has been assessing dioxin since 1991. IRISTrack also understates the duration of assessments of other chemicals of key concern, such as formaldehyde, naphthalene, and TCE. Therefore, current and accurate information regarding when an assessment will be started, which assessments are currently ongoing, and when an assessment is projected to be completed is presently not publicly available.
Third, EPA does not provide at least 2 years’ notice of its intent to assess specific chemicals, as we recommended in our March 2008 report, to give agencies and other interested parties the opportunity to conduct research needed to fill any data gaps. In commenting on our report, EPA agreed to consider our recommendation, and EPA officials recently stated that they continue to agree with it, but as of September 30, 2011, the agency still had not taken steps to implement it.

Discussions between EPA and OMB officials regarding Data Quality Act challenges related to specific draft IRIS assessments have been ongoing for over a year without resolution. If these unresolved discussions continue, they could contribute to delays of IRIS assessments. According to EPA officials, OMB would like to return to its role in the prior assessment process, in which it managed interagency reviews and made the final determination as to whether EPA had satisfactorily responded to comments from OMB and officials in other federal agencies.

The Information Quality Act, commonly called the Data Quality Act, requires OMB to issue governmentwide guidelines to “ensure and maximize the quality, objectivity, utility, and integrity of information, including statistical information,” disseminated to the public. It also required agencies to issue their own guidelines, set up administrative mechanisms to allow affected parties to seek the correction of information they considered erroneous, and report periodically to OMB on Data Quality Act challenges (“requests for correction” of agency information) and how the agencies addressed them. Under its data quality guidelines, when EPA provides opportunities for public participation by seeking comments on information, such as during a rulemaking, the agency uses the public comment process rather than EPA guidelines to address concerns about its information.
This is consistent with OMB’s data quality guidelines, which encourage agencies to incorporate data quality procedures into their existing administrative practices rather than create new and potentially duplicative or contradictory processes. According to EPA’s data quality guidelines, the public comment period serves the purposes of the guidelines, provides an opportunity for correction of information, and does not duplicate or interfere with the orderly progression of draft documents through an established process—in this case, the IRIS assessment process. That is, the external peer review and associated public comment period provide the public with the opportunity to raise questions regarding the quality of the information being used to support an IRIS assessment. According to EPA officials, federal agency responses to data quality challenges must be cleared by OMB before EPA sends responses to the parties filing challenges—although no law or guidance specifically provides for such reviews. In June and July 2010, EPA received Data Quality Act challenges regarding two draft IRIS assessments. According to EPA officials, in its draft responses to these data quality challenges, EPA declined to review the challenged data because, according to agency policy, draft IRIS documents are not subject to data quality challenges. EPA used the same approach in 2006 when responding to and declining a similar challenge regarding a draft IRIS assessment; at that time, OMB approved the EPA response. EPA sent its draft responses for the two more recent challenges to OMB for approval in September 2010 and January 2011. EPA’s data quality guidelines set a goal of responding to Data Quality Act challenges within 90 days, but EPA officials said that they still await a decision by OMB. 
According to EPA officials, OMB is delaying a decision because it would like to return to its role in the prior assessment process, in which it managed interagency reviews and made the final determination as to whether EPA had satisfactorily responded to comments from OMB and officials in other federal agencies. EPA officials told us that as of September 30, 2011, the issues regarding data quality challenges had not delayed the progress of draft IRIS assessments. Meanwhile, OMB staff told us that they had sent comments to EPA on the draft responses and await EPA’s reply to their comments. We believe that the discussions of these issues between EPA and OMB officials, which have been ongoing for over a year without resolution, have highlighted the agencies’ differences regarding the revised IRIS process. If these differences persist, they could contribute to the compounding effects of delays in the IRIS process, discussed here and in our earlier work. For example, in August 2011, EPA received a third data quality challenge on an assessment that EPA had expected to finalize at the end of fiscal year 2011. For reasons that remain unclear, EPA now projects that this assessment will not be finalized until fiscal year 2012. We note that the assessment had entered the interagency science discussion (step 6b) in July 2011. EPA asked interagency reviewers to submit written comments by August 26, 2011, but as of September 2011, OMB reviewers had not yet submitted them.

The IRIS process reforms EPA began implementing in May 2009 have restored EPA’s control of the process and increased its transparency. Notably, EPA has addressed concerns we raised in our March 2008 report regarding the transparency of comments from both the interagency science consultation and discussion steps in the IRIS process.
Making these comments publicly available is especially important because agencies providing input may have a vested interest in the outcome of the assessment should it lead to regulatory or other actions. As a result, stakeholders, including EPA regional and program offices, the public, and industry, can now see which other federal agencies comment and the nature of their comments, making IRIS assessments more transparent. In addition, EPA now manages the interagency science consultation and discussion steps and has streamlined the IRIS process.

Progress in other areas, however, has been more limited. For example, even for its less challenging assessments, EPA took longer than its established time frames for accomplishing steps in the revised process—calling into question whether the time frames established for standard assessments are feasible and appropriate. It is also unclear whether the established time frames apply to moderately complex assessments because EPA does not have a written policy that describes the applicability of the time frames, although EPA officials said they are trying to hold moderately complex assessments to the 23-month time frame. Similarly, EPA does not have written criteria for designating IRIS assessments as standard, moderately complex, or exceptionally complex. We note that EPA has not analyzed its tracking data to determine whether the time frames for each step, or for the overall 23-month process, are realistic; such an analysis would provide more accurate information for EPA to use in establishing time frames for these assessments. Not having established time frames for these assessments also creates uncertainty for many stakeholders with significant interest in IRIS assessments.

EPA also faces both long-standing and new challenges in implementing the IRIS Program.
Notably, the National Academies and Science Advisory Board have identified recurring issues with the clarity and transparency of draft IRIS assessments. As part of its independent scientific review of EPA’s draft IRIS assessment of formaldehyde, the National Academies also provided a “roadmap for revision” with suggestions for improving EPA’s development and presentation of draft IRIS assessments in general. The report identified recurring methodological issues with how the IRIS Program develops and presents its assessments and suggested improvements. EPA announced that it intends to address the issues raised in the National Academies’ report but has not publicly indicated how these proposed changes would be applied to its current inventory of IRIS assessments. Many of the issues raised in the National Academies’ report have been brought to the agency’s attention previously. It is unclear whether any independent entity with scientific and technical credibility, such as EPA’s Board of Scientific Counselors, will have a role in reviewing EPA’s planned response to the National Academies’ suggestions to ensure that EPA addresses these long-standing issues.

In addition, EPA has not addressed other long-standing issues regarding the accuracy and availability of information on the status of IRIS assessments to IRIS users—including stakeholders such as EPA program and regional offices, other federal agencies, and the public. For example, since 2007, EPA has not published in the Federal Register an IRIS agenda that includes information on chemicals the agency is actively assessing or when it plans to start assessments of other listed chemicals. The agency also has not updated IRISTrack to display all current information on the status of assessments on the IRIS agenda, including estimated start dates and end dates of steps in the IRIS process.
In addition, EPA has recently removed some key information presented in IRISTrack that showed the duration of IRIS assessments. Now, in some cases, the IRISTrack date for the beginning of draft development underestimates the actual duration of an assessment—sometimes by many years. Therefore, current and accurate information regarding when an assessment will be started, which assessments are currently ongoing, and when an assessment is projected to be completed is presently not publicly available. Finally, EPA does not provide at least 2 years’ notice of its intent to assess specific chemicals, as we recommended in our March 2008 report; such notice would give agencies and other interested parties the opportunity to conduct research needed to fill any data gaps.

To improve EPA’s IRIS assessment process, we are making the following six recommendations. To better ensure the credibility of IRIS assessments by enhancing their timeliness and certainty, we recommend that the EPA Administrator require the Office of Research and Development to (1) assess the feasibility and appropriateness of the established time frames for each step in the IRIS assessment process and determine whether different time frames should be established, based on complexity or other criteria, for different types of IRIS assessments; and (2) should different time frames be necessary, establish a written policy that clearly describes the applicability of the time frames for each type of IRIS assessment and ensures that the time frames are realistic and provide greater predictability to stakeholders.
To better ensure the credibility of IRIS assessments by enhancing their clarity and transparency, we recommend that the EPA Administrator require the Office of Research and Development to submit for review to an independent entity with scientific and technical credibility, such as EPA’s Board of Scientific Counselors, a plan for how EPA will implement the National Academies’ suggestions for improving IRIS assessments in the “roadmap for revision” presented in the National Academies’ peer review report on the draft formaldehyde assessment.

To ensure that current and accurate information on chemicals that EPA plans to assess through IRIS is available to IRIS users—including stakeholders such as EPA program and regional offices, other federal agencies, and the public—we recommend that the EPA Administrator direct the Office of Research and Development to (1) publish the IRIS agenda in the Federal Register each fiscal year; (2) indicate in published IRIS agendas which chemicals EPA is actively assessing and when EPA plans to start assessments of the other listed chemicals; and (3) update IRISTrack to display all current information on the status of assessments of chemicals on the IRIS agenda, including projected and actual start dates, and projected and actual dates for completion of steps in the IRIS process, and keep this information current.

We provided a draft of this report to the Administrator of EPA for review and comment. In written comments, which are included in appendix V, EPA agreed with the report’s recommendations. EPA also provided technical comments, which we incorporated into the report as appropriate.
Specifically, EPA agreed that it should (1) assess the feasibility and appropriateness of the established time frames for each step in the IRIS assessment process, using available program performance measures collected since the current IRIS process was established to determine whether different time frames should be established, based on complexity or other criteria, for different types of IRIS assessments; (2) if different time frames are necessary, establish a written policy that clearly describes the applicability of the time frames for each type of IRIS assessment and ensures that the time frames are realistic and provide greater predictability to stakeholders; (3) continue to implement the 2011 suggestions for improving IRIS assessments in the "roadmap for revision" presented in the National Academies' peer review report on the draft formaldehyde assessment and seek independent review through the Science Advisory Board to ensure that the agency is addressing the recommendations; (4) annually publish the IRIS agenda in the Federal Register each fiscal year; (5) indicate in published IRIS agendas which chemicals EPA is actively assessing and when EPA plans to start assessments of the other listed chemicals; and (6) update IRISTrack to display all current information on the status of assessments of chemicals on the IRIS agenda, including projected and actual start dates and projected and actual dates for completion of steps in the IRIS process, and keep this information current. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Administrator of EPA, the appropriate congressional committees, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov.
If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VI. This appendix details the methods we used to assess the Environmental Protection Agency’s (EPA) management of its Integrated Risk Information System (IRIS). For this review, our objectives were to evaluate (1) EPA’s progress in completing IRIS assessments under the May 2009 process and (2) the challenges, if any, that EPA faces in implementing the IRIS Program. To address these objectives, we reviewed relevant EPA documents, including documents outlining the April 2008 and the May 2009 versions of the IRIS assessment process; documents related to IRIS performance metrics; chemical nomination forms submitted by EPA regional and program offices, federal agencies, and others; and documents and other information on the public EPA website, including the IRIS database and IRISTrack, the assessment tracking system available at the IRIS website. In addition, we reviewed other relevant documents, including Federal Register notices announcing, among other things, IRIS agendas, as well as documents related to EPA’s meetings with other federal agencies involved in interagency reviews of draft IRIS assessments. We did not evaluate the scientific content or quality of IRIS assessments; however, we did review the National Academies’ peer review report on the draft IRIS assessment of formaldehyde to evaluate their suggestions for overall improvements to the development of IRIS assessments and other peer review reports by the National Academies and EPA’s Science Advisory Board to evaluate their suggestions for improvements to draft IRIS assessments. 
In addition, we interviewed officials from EPA’s National Center for Environmental Assessment (NCEA) who manage the IRIS Program, including the Acting Center Director, the Associate Director for Health, and the IRIS Program Acting Director, to obtain their perspectives on, among other things, the May 2009 IRIS process and the effects of changes from the April 2008 IRIS process, the extent to which EPA has made progress in completing timely, credible chemical assessments, challenges EPA faces in completing assessments, and EPA’s process for responding to Data Quality Act challenges. We interviewed officials from EPA’s Office of Environmental Information to obtain their perspectives on EPA’s process for responding to data quality challenges. We also attended two Board of Scientific Counselors (BOSC) meetings to understand the board’s role in providing advice, information, and recommendations about the Office of Research and Development (ORD) research programs, including IRIS. For the first objective, we obtained and analyzed data from fiscal year 1999 through September 30, 2011, including spreadsheets, project plans, and other documents used in IRIS assessment planning, development, and completion. From the data we gathered, we analyzed information on IRIS productivity, including information on the number of IRIS assessments completed and initiated, the status of IRIS assessments that are currently in progress or on the IRIS agenda, and the completion dates and durations of IRIS assessment process steps completed or currently in progress for given chemical assessments. In addition, we assessed the reliability of the data we received from EPA for our first objective. Our assessment consisted of interviews and e-mail exchanges with EPA officials about the data system, the method of data input, and internal data controls and documentation, among other areas. We also corroborated the data with other sources, where possible.
For example, we verified the information provided in tables of IRIS assessment start dates and completion dates of IRIS assessment process steps through interviews and e-mail exchanges with the NCEA officials responsible for maintaining these data. Through our assessment, we determined that the data were sufficiently reliable for our purposes. For the second objective, we interviewed the chair of the National Academies Committee to Review EPA’s Draft IRIS Assessment of Formaldehyde to obtain his perspective on the National Academies’ suggestions for improvements to the IRIS assessment process. We interviewed officials from the Office of Management and Budget’s (OMB) Office of Information and Regulatory Affairs (OIRA) to obtain their perspectives on interagency review of draft IRIS assessments, OMB’s process for responding to EPA with regard to Data Quality Act challenges, and OMB’s process for reviewing and approving EPA guidance documents. In addition, we interviewed officials from the Department of Defense (DOD), the National Aeronautics and Space Administration (NASA), and the Department of Health and Human Services (HHS)—including representatives from the Centers for Disease Control and Prevention’s National Center for Environmental Health (NCEH)/Agency for Toxic Substances and Disease Registry (ATSDR) and the National Institute for Occupational Safety and Health (NIOSH). We also interviewed HHS officials from the Food and Drug Administration (FDA), the National Institute of Environmental Health Sciences/National Toxicology Program, and the Office of the Secretary. We also interviewed representatives from a chemical industry group and a nonprofit research and educational organization. We conducted this performance audit from July 2010 to December 2011 in accordance with generally accepted government auditing standards.
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. [An accompanying figure shows the number of ongoing assessments at each step of the IRIS process: Step 2–EPA internal review (8 assessments); Step 3–Interagency science consultation (2 assessments); Step 4–External peer review and public comment (9 assessments); Step 5–EPA draft revision (7 assessments); Step 6a and b–Final EPA review/interagency science discussion (1 assessment); and Step 7–Completion and posting (4 assessments). Chemicals shown include benzopyrene; polychlorinated biphenyls (PCBs) (noncancer); arsenic, inorganic (cancer); 1,2-dichlorobenzene; 1,3-dichlorobenzene; 1,4-dichlorobenzene; formaldehyde; polycyclic aromatic hydrocarbon (PAH) mixtures; 2,3,7,8-tetrachlorodibenzo-p-dioxin (dioxin); dichloromethane; ethylene oxide (cancer); tetrachloroethylene (perchloroethylene or perc); tetrahydrofuran (THF); and trichloroethylene (TCE).] Trichloroethylene (TCE). In September 2011, EPA finalized and posted an IRIS assessment of TCE, 13 years after initiating it. A degreasing agent used in industrial and manufacturing settings, TCE is a common environmental contaminant in air, soil, groundwater, and surface water. TCE has been found in the drinking water at Camp Lejeune, a large Marine Corps base in North Carolina. It has also been found at Superfund sites and many industrial and government facilities, including aircraft and spacecraft manufacturing operations. TCE has been linked to cancer, including childhood cancer, and other significant health hazards, such as birth defects.
In 1995, the International Agency for Research on Cancer, part of the World Health Organization, classified the chemical as “probably carcinogenic to humans,” and in 2000, the Department of Health and Human Services’ National Toxicology Program concluded that TCE is “reasonably anticipated to be a human carcinogen.” However, between 1989 and September 2011, the IRIS database contained no quantitative or qualitative data on TCE. Because of questions raised by peer reviewers about the IRIS cancer assessment for TCE, EPA withdrew it from the IRIS database in 1989 and did not initiate a new TCE cancer assessment until 1998. In 2001, EPA issued a draft IRIS assessment of TCE that characterized TCE as “highly likely to produce cancer in humans.” The draft assessment was peer reviewed by a Science Advisory Board panel and released for public comment. In the course of these reviews, issues arose concerning, among other things, EPA’s use of emerging risk assessment methods and the uncertainty associated with these new methods. To help address these issues, EPA and other agencies sponsored a National Academies peer review panel to provide guidance. The National Academies panel recommended in its 2006 report that EPA finalize the draft assessment using available data, noting that the weight of evidence of cancer and other health risks from TCE exposure had strengthened since 2001. Nonetheless, the TCE assessment was returned to draft development. It then underwent a third peer review, again through the Science Advisory Board, which issued its report in January 2011. EPA revised the draft in response to the Science Advisory Board’s comments and, in September 2011, finalized and posted the TCE assessment. Dioxin. The term “dioxin” applies to a family of chemicals that are often the byproducts of combustion and other industrial processes. Complex mixtures of dioxins enter the food chain and human diet through emissions into the air that settle on soil, plants, and water.
When animals ingest plants, commercial feed, and water contaminated with dioxins, the dioxins accumulate in the animals’ fatty tissue. EPA’s initial assessment of dioxin, which was published in 1985, focused on the dioxin TCDD (2,3,7,8-tetrachlorodibenzo-p- dioxin), which animal studies dating to the 1970s had shown to be the most potent cancer-causing chemical studied to date. EPA began work on updating this assessment in 1991. From 1995 through 2000, the revised draft assessment underwent a full peer review, as well as three peer reviews of key segments of the draft. As we have reported previously, EPA officials said in 2002 that the version of the revised assessment then in progress would conclude that dioxin may adversely affect human health at lower exposure levels than had previously been thought, and that most exposure to dioxins occurs from eating such American dietary staples as meat, fish, and dairy products. EPA was moving closer to finalizing the assessment when, in 2003, a congressional appropriations committee directed the agency not to issue the assessment until it had been peer reviewed by the National Academies. The National Academies issued its peer review report in July 2006. EPA then revised the draft assessment in response to the National Academies’ recommendations, releasing it for public comment in May 2010 and sending it to the Science Advisory Board for another peer review. In August 2011, the Science Advisory Board panel issued its peer review report. 
Having been tasked with evaluating EPA’s responses to the National Academies review and its incorporation of studies that have become available since 2006, the panel concluded that the draft IRIS assessment of dioxin was “generally clear, logical, and responsive to many but not all of the suggestions of the NAS.” Among other things, the Science Advisory Board panel recommended that EPA discuss both linear and nonlinear models for cancer risks associated with dioxin exposure in its revised report. Three days after the Science Advisory Board issued its report, EPA announced that it would split the dioxin assessment into two parts, completing the noncancer portion of the assessment first and then addressing the Science Advisory Board’s comments and completing the cancer portion of the assessment. EPA expects to complete the noncancer portion of the dioxin assessment by January 2012, and states that it will complete the cancer portion as expeditiously as possible thereafter. The effort to update the assessment of dioxin, which could have significant health implications for all Americans, has been ongoing for 20 years. Formaldehyde. Formaldehyde is a gas widely used in such products as pressed wood, paper, pharmaceuticals, leather goods, and textiles. The IRIS database currently lists formaldehyde as a “probable human carcinogen”; however, the International Agency for Research on Cancer classifies it as “carcinogenic to humans.” In June 2011, the Department of Health and Human Services’ National Toxicology Program classified formaldehyde as “known to be a human carcinogen” in its Report on Carcinogens. The report stated that epidemiological studies “have demonstrated a causal relationship between exposure to formaldehyde and cancer in humans”— specifically, nasopharyngeal cancer, sinonasal cancer, and myeloid leukemia. 
The current IRIS assessment of formaldehyde dates to 1989, when the cancer portion of the assessment was issued, and 1990, when the noncancer portion was added. The last significant revision of the formaldehyde assessment dates to 1991. As we have reported previously, EPA began efforts to update the IRIS assessment of formaldehyde in 1997. In 2004, EPA received a congressional directive to await the results of a National Cancer Institute study that was expected to take, at most, 18 months before finalizing the draft assessment. That study was completed in May 2009, and in June 2010, EPA released the draft assessment, which assessed both cancer and noncancer health effects, to the National Academies for peer review. In May 2011, the National Academies published its peer review report. As of September 30, 2011—14 years after EPA began work to update the IRIS formaldehyde assessment—the agency had indicated no timetable for finalizing the assessment. Continued delays in the revision of the IRIS assessment of formaldehyde have the potential to affect the quality of EPA’s regulatory actions. For example, in August 2011, EPA announced a proposed rule under the Clean Air Act related to certain emissions from natural gas processing plants. Because a newer IRIS assessment of formaldehyde has not been completed, the proposed rule relies on the existing IRIS value for formaldehyde, last updated in 1991. As of April 2011, EPA had expected to complete the formaldehyde assessment in the fourth quarter of fiscal year 2011, but it withdrew the projected completion date from the IRIS website after the publication of the National Academies’ peer review report on the draft assessment; as of September 30, 2011, the IRIS website provided no projected completion date. Tetrachloroethylene (Perc).
Tetrachloroethylene—also called perchloroethylene or perc—is a manufactured chemical used in, for example, dry cleaning, metal degreasing, and textile production. Perc is a widespread groundwater contaminant, and the National Toxicology Program has determined that it is “reasonably anticipated to be a human carcinogen.” Currently, the IRIS database contains only a noncancer assessment based on oral exposure to perc, posted in 1988; it gives no information on potential cancer effects or potential noncancer effects associated with inhalation of perc. EPA began work to update this assessment, and to include information on cancer and noncancer inhalation risk, in 1998. As we have reported previously, EPA completed its internal review of the draft perc assessment in February 2005 and the interagency review in March 2006. However, a request by the Assistant Administrator of EPA’s Office of Research and Development that additional analyses be conducted delayed EPA in sending the draft assessment to the National Academies for peer review. In June 2008, EPA sent the draft assessment to the National Academies, which released its peer review report in February 2010. EPA is in the process of responding to the National Academies’ suggestions, 13 years after the agency began work on the draft perc assessment. As a result, IRIS users, including EPA regional and program offices, continue to lack both cancer values and noncancer inhalation values to help them make decisions about how to protect the public from this widespread groundwater contaminant. EPA had expected to complete the perc assessment by the end of fiscal year 2011, but as of September 30, 2011, it had not done so. Naphthalene. Naphthalene is used in jet fuel and in the production of such widely used commercial products as moth balls, dyes, insecticides, and plasticizers.
The current IRIS assessment of naphthalene, issued in 1998, lists the chemical as a “possible human carcinogen”; since 2004, the National Toxicology Program has listed it as “reasonably anticipated to be a human carcinogen.” As we have reported previously, EPA began updating the cancer portion of its naphthalene assessment in 2002. By 2004, EPA had drafted a chemical assessment that had completed internal peer reviews and was about to be sent to an external peer review committee. Once it returned from external review, the next step, at that time, would have been a formal review by EPA’s IRIS Agency Review Committee. If approved, the assessment would have been completed and released. However, in part because of concerns raised by DOD, OMB asked to review the assessment and conducted an interagency review of the draft. In their 2004 reviews of the draft IRIS assessment, both OMB and DOD raised a number of concerns about the assessment and suggested to EPA that it be suspended until additional research could be completed to address what they considered to be significant uncertainties associated with the assessment. Although not all of the issues raised by OMB and DOD were resolved, EPA continued with its assessment by submitting the draft for external peer review, which was completed in September 2004. However, according to EPA, OMB continued to object to the draft IRIS assessment and directed EPA to convene an additional expert review panel on genotoxicity to obtain recommendations about short-term tests that OMB thought could be done quickly. According to EPA, this added 6 months to the process, and the panel, which met in April 2005, concluded that the research that OMB was proposing could not be conducted in the short term. Nonetheless, EPA officials said that the second expert panel review did not eliminate OMB’s concerns regarding the assessment, a situation they described as a stalemate.
In September 2006, EPA decided, however, to proceed with developing the assessment. By 2006, the naphthalene assessment had been in progress for 4 years, and EPA decided that the noncancer portion of the existing IRIS assessment was outdated and needed to be revisited. Having made this decision, the agency returned both portions of the assessment, cancer and noncancer, to the drafting stage. We reported in March 2008 that EPA estimated a 2009 completion date for the naphthalene assessment. As of September 30, 2011, however, the assessment remained in the draft development stage, even though EPA program offices had identified the naphthalene assessment as a high-priority need for the air toxics and Superfund programs. As of September 30, 2011, EPA expects to complete the naphthalene assessment in the third quarter of fiscal year 2013. Royal Demolition Explosive. This chemical, also called RDX or hexahydro-1,3,5-trinitro-1,3,5-triazine, is a highly powerful explosive used by the U.S. military in thousands of munitions. Currently classified by EPA as a “possible human carcinogen,” this chemical is known to leach from soil to groundwater. RDX can cause seizures in humans and animals when large amounts are inhaled or ingested, but the effects of long-term, low-level exposure on the nervous system are unknown. As we reported in March 2008, as is the case with naphthalene, the IRIS assessment of RDX could require DOD to undertake a number of actions, including steps to protect its employees from the effects of this chemical and to clean up many contaminated sites. We reported at that time that EPA had started an IRIS assessment of RDX in 2000, but it had made minimal progress on the assessment because EPA had agreed to a request by DOD to wait for the results of DOD-sponsored research on this chemical. In 2007, EPA resumed work on the assessment, although some of the DOD-sponsored research was still outstanding at the time. 
EPA decided to suspend work on the assessment in 2009 in order to focus on assessments that were further along in the IRIS process. According to EPA’s project plan for RDX, in March 2010, EPA received a letter from DOD requesting that EPA complete the assessment. In addition, in 2010, EPA’s Superfund Program labeled the RDX assessment as a high priority because of the presence of the chemical at federal facilities. In June 2010, EPA renewed work on the RDX assessment, but as of September 30, 2011, it remained in the draft development stage (step 1). An EPA official told us in October 2011 that EPA plans to contact DOD officials to confirm that the draft assessment of RDX adequately captures the findings of the DOD- sponsored research. In addition, the EPA official said that the agency plans to contact officials at HHS’s Agency for Toxic Substances and Disease Registry to ensure that the two agencies have coordinated research efforts on this chemical. EPA projects that it will finalize the assessment of RDX in the first quarter of fiscal year 2013. In addition to the individual named above, Diane LoFaro, Assistant Director; Christine Fishkin, Assistant Director; Summer Lingard; Mark Braza; Jennifer Cheung; Nancy Crothers; Lorraine Ettaro; Robert Grace; Gary Guggolz; Richard P. Johnson; Michael Kniss; Nadia Rhazi; and Kiki Theodoropoulos made key contributions to this report. Also contributing to the report were Tim Bober, Michelle Cooper, Anthony Pordes, Benjamin Shouse, Jena Sinkfield, and Nicolas Sloss.
The Environmental Protection Agency's (EPA) Integrated Risk Information System (IRIS) Program supports EPA's mission to protect human health and the environment by providing the agency's scientific position on the potential human health effects from exposure to various chemicals in the environment. The IRIS database contains quantitative toxicity assessments of more than 550 chemicals and provides fundamental scientific components of human health risk assessments. In response to a March 2008 GAO report on the IRIS program, EPA revised its IRIS assessment process in May 2009. GAO was asked to evaluate (1) EPA's progress in completing IRIS assessments under the May 2009 process and (2) the challenges, if any, that EPA faces in implementing the IRIS program. To do this work, GAO reviewed and analyzed EPA productivity data, among other things, and interviewed EPA officials. EPA's May 2009 revisions to the IRIS process have restored EPA's control of the process, increased its transparency, and established a new 23-month time frame for its less challenging assessments. Notably, EPA has addressed concerns GAO raised in its March 2008 report and now makes the determination of when to move an assessment to external peer review and issuance--decisions that were made by the Office of Management and Budget (OMB) under the prior IRIS process. In addition, EPA has increased the transparency of the IRIS process by making comments provided by other federal agencies during the interagency science consultation and discussion steps of the IRIS process available to the public. Progress in other areas, however, has been limited. EPA's initial gains in productivity under the revised process have not been sustained. After completing 16 assessments within the first year and a half of implementing the revised process, EPA completed 4 assessments in fiscal year 2011. 
Further, the increase in productivity does not appear to be entirely attributable to the revised IRIS assessment process and instead came largely from (1) clearing the backlog of IRIS assessments that had undergone work under the previous IRIS process and (2) issuing assessments that were less challenging to complete. EPA has taken longer than the established time frames for completing steps in the revised process for most of its less challenging assessments. However, EPA has not analyzed its established time frames to assess the feasibility of the time frame for each step or the overall 23-month process. The agency's progress has also been limited in completing assessments that it classifies as exceptionally complex and reducing its ongoing assessments workload. Beyond the 55 ongoing IRIS assessments, the backlog of demand for additional IRIS assessments is unclear. With existing resources devoted to addressing its current workload of ongoing assessments, EPA has not been in a position to routinely start new assessments. EPA faces both long-standing and new challenges in implementing the IRIS program. First, EPA has not fully addressed recurring issues concerning the clarity and transparency of its development and presentation of draft IRIS assessments. For example, as part of its independent scientific review of EPA's draft IRIS assessment of formaldehyde, the National Academies provided suggestions for improving EPA's development and presentation of draft IRIS assessments in general, including that EPA use a standardized approach to evaluate and describe study strengths and weaknesses and the weight of evidence. EPA announced that it planned to respond to the National Academies' suggestions by implementing changes to the way it develops draft IRIS assessments. 
Given that many of the issues raised by the National Academies have been long-standing, it is unclear whether any entity with scientific and technical credibility, such as an EPA advisory committee, will have a role in conducting an independent review of EPA's planned response to the suggestions. In addition, EPA has not addressed other long-standing issues regarding the availability and accuracy of current information to users of IRIS information, such as EPA program offices, on the status of IRIS assessments, including when an assessment will be started, which assessments are ongoing, and when an assessment is projected to be completed. GAO recommends, among other things, that EPA assess the feasibility of the established time frames for each step in the IRIS assessment process and make changes if necessary, submit for independent review to an entity with scientific and technical credibility a plan for how EPA will implement the National Academies' suggestions, and ensure that current and accurate information on chemicals that EPA plans to assess through IRIS is available to IRIS users. EPA agreed with GAO's recommendations and noted specific actions it will take to implement them.
The Sempra and Intergen plants are located in close proximity to each other near Mexicali, Mexico—an area 3 miles south of the U.S.-Mexican border and Imperial County, California (see fig. 1). Final permitting and construction for both of the plants and the associated transmission lines to the United States began in 2001, and commercial operations commenced in July 2003. Fuel for the plants is provided by a 145-mile cross-border natural gas pipeline built by Sempra Energy, which began operating in September 2002. The Sempra plant, known as Termoeléctrica de Mexicali, consists of one natural gas-fired, combined-cycle power-generating unit with a total capacity of 650 megawatts. In this type of plant, electricity is produced by a combination of gas turbines and steam turbines. Heat from the gas turbine exhaust, which would otherwise be released to the atmosphere with exhaust gases, is captured and used by a heat recovery steam generator to produce steam, which in turn is used by the steam turbine to generate additional electricity. The Sempra plant operates with an export permit from the Mexican government and produces electricity exclusively for export to the United States. The facility is equipped with the latest pollution control technologies, including selective catalytic reduction systems to reduce NOX emissions and an oxidizing catalyst system to reduce carbon monoxide (CO) emissions. The Intergen plant, which consists of two natural gas-fired combined-cycle units (collectively known as the La Rosita Power Complex), has a total capacity of 1,060 megawatts. The first unit provides two-thirds of its 750 megawatt capacity to Mexico, with the remaining one-third available for export to the United States. The second unit has a generating capacity of 310 megawatts, all of which is designated for export to the U.S. market. (See fig. 2.)
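The combined-cycle arrangement described above is what makes such plants relatively efficient: the steam turbine recovers work from heat the gas turbine would otherwise exhaust. A minimal sketch of the overall efficiency arithmetic follows; the efficiency figures used are illustrative assumptions, not specifications of the Mexicali units.

```python
def combined_cycle_efficiency(gas_turbine_eff: float,
                              hrsg_eff: float,
                              steam_cycle_eff: float) -> float:
    """Rough overall efficiency of a combined-cycle plant.

    The gas turbine converts a fraction of the fuel energy to
    electricity; the heat recovery steam generator (HRSG) captures
    part of the remaining exhaust heat, and the steam cycle converts
    part of that captured heat to additional electricity.
    """
    exhaust_fraction = 1.0 - gas_turbine_eff
    steam_output = exhaust_fraction * hrsg_eff * steam_cycle_eff
    return gas_turbine_eff + steam_output

# Illustrative (assumed) values: a 38%-efficient gas turbine, an
# 85%-effective HRSG, and a 35%-efficient steam cycle.
overall = combined_cycle_efficiency(0.38, 0.85, 0.35)
print(f"Combined-cycle efficiency: {overall:.1%}")
```

Under these assumed values the combined cycle reaches roughly 56 percent, well above the gas turbine alone, which is why the heat recovery step matters.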
Originally, only the second unit was designed to include a selective catalytic reduction system, but as of April 7, 2005, all four of the combustion turbines within the two units have been equipped with these systems to control NOX emissions. In these systems, ammonia is injected into the exhaust stream, where it reacts with nitrogen oxides over a catalyst to produce ordinary nitrogen and water vapor. An oxidizing catalyst is similar in concept to catalytic converters used in automobiles. The catalyst, normally coated with a metal, such as platinum, is used to promote a chemical reaction with the oxygen present to convert carbon monoxide into carbon dioxide and water vapor. Although no U.S. emissions requirements apply to these plants, Sempra and Intergen required a presidential permit to construct and connect the new transmission lines needed at the U.S.-Mexican border to export electricity into the United States. Because of the similarities of the proposals submitted by the companies, DOE decided to consider them together in a single environmental assessment, required as part of the permitting process. In December 2001, DOE completed the environmental assessment and issued a finding of no significant impact and presidential permits for both of the proposed projects. Following these decisions, Sempra and Intergen constructed the transmission lines and began commercial operations. However, as a result of subsequent litigation, on July 8, 2003, the U.S. District Court for the Southern District of California instructed DOE to prepare a more comprehensive environmental review, which included an assessment of the health impacts from the power plants as part of its analysis. DOE’s environmental impact statement was issued in final form in December 2004. DOE found that the proposed power plants presented a low potential for environmental impacts and published a record of decision in the Federal Register on April 25, 2005, authorizing presidential permits to be granted for both transmission lines to the respective power plants as presently designed.
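For reference, the two control technologies described above rely on well-known catalytic reactions; the equations below are a textbook summary, not plant-specific chemistry. In selective catalytic reduction, injected ammonia reduces nitrogen oxides to nitrogen and water vapor; over the oxidizing catalyst, carbon monoxide is converted to carbon dioxide:

```latex
% Selective catalytic reduction (one common overall reaction):
4\,\mathrm{NO} + 4\,\mathrm{NH_3} + \mathrm{O_2} \;\rightarrow\; 4\,\mathrm{N_2} + 6\,\mathrm{H_2O}
% Oxidation catalyst:
2\,\mathrm{CO} + \mathrm{O_2} \;\rightarrow\; 2\,\mathrm{CO_2}
```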
The use of natural gas at the Sempra and Intergen facilities greatly reduces sulfur dioxide emissions compared with other fuels such as coal or oil. For example, U.S. coal contains an average of 1.6 percent sulfur, and oil burned at electric utility power plants ranges from 0.5 percent to 1.4 percent sulfur; comparatively, natural gas has less than 0.0005 percent sulfur. The emissions from the Sempra and Intergen power plants in Mexicali are comparable to emissions from similar plants recently permitted in California and are low relative to emissions from the primary sources of pollution in Imperial County, which are various forms of dust and motor vehicles. However, if the plants were located in Imperial County, they would be required, among other things, to offset their emissions by reducing emissions from other pollution sources in the region. Power plants in Mexico are not required to report to federal agencies in the United States on actual emissions of key pollutants generated during plant operations. Therefore, we believe that the best data available to estimate emissions from the Sempra and Intergen power plants come from emission performance tests conducted by independent third-party contractors hired by the power plants. The average emissions from the Sempra and Intergen plants based on the results of the third-party testing are presented in table 2. (Table 2 reports average NOX and ammonia concentrations in parts per million (ppm) and PM, CO, and VOC rates in pounds per hour (lbs/hr), with Intergen averages shown separately for its U.S.-export and Mexico units; emissions levels are computed based on a 1-, 3-, or 24-hour average, and VOC emissions were undetectable during tests conducted at the plant.) As shown in table 3, the average NOX emissions from the Intergen power plant were the only emissions that exceeded the range of emissions from recently permitted plants in California. This was the case, in part, because the plant was not originally designed to meet California requirements.
However, as of April 7, 2005, all combustion turbines at the Intergen plant had been equipped with the selective catalytic reduction control technology for nitrogen oxide that is common in the newer California plants. With the exception of one turbine, which will continue to operate at a maximum NOX limit of 3.5 ppm, all other turbines are expected to emit NOX at a level below 2.5 ppm. According to Intergen plant officials, the last two turbines to be equipped with selective catalytic reduction systems have been meeting these levels since the systems became operational in March and April 2005, respectively. Data provided by plant officials, based on continuous monitoring of all emissions from these turbines over a 1-week period, also indicate that both turbines are achieving the expected NOX reductions. One way to assess the environmental impact of emissions from power plants is to examine the tons of pollutants they emit on an annual basis. The third-party performance tests discussed above provide the best available data to estimate annual emissions likely to occur during actual operations at the Sempra and Intergen plants because the data are based on observations of the actual equipment in operation. Other options for estimating annual emissions from these plants include using (1) the maximum allowable emissions levels for similar plants in California and (2) the emissions estimates that DOE developed during its environmental impact assessment of the Sempra and Intergen plants. Table 4 presents annual emissions estimates based on each of these three alternative operating assumptions. Under the first scenario, annual emissions levels were estimated using the values determined by third-party contractors during turbine performance tests at the Sempra and Intergen plants.
For the two Intergen turbines that did not have selective catalytic reduction systems installed when the testing was conducted, we estimated annual NOX emissions using the testing values recorded for the similar turbine that was operating with such equipment. We did so because these two turbines are now equipped with selective catalytic reduction systems and their future emissions are likely to be similar to those from the turbine that was using this technology during the tests. These estimates do not take into account start-up and shutdown operations of the plant, which may contribute to increased plant emissions for approximately 1 to 2 hours. However, the total annual estimate is based on the conservative assumption that the plants are operating at maximum emission levels, 24 hours a day, 365 days a year. The actual operation of the plants, and the resulting emissions, would be less than this because of scheduled maintenance, forced outages, and varying electrical demand in California. The second set of annual emissions estimates was based on maximum allowable emissions determined during the permitting process for similar California plants. These maximum allowable emissions are higher than the estimates based on third-party testing data. California grants permits to construct power plants on a case-by-case basis. As a condition of receiving a permit, the state places limits on emissions of individual pollutants. These limits are based on the use of best available control technology and take into consideration energy, environmental, and economic impacts. Under this estimating scenario, the annual estimates also account for short-term variations in emissions levels that may occur during start-up and shutdown operations and are based on the conservative assumption that the plants are operating at maximum emissions levels, 24 hours a day, 365 days a year.
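The arithmetic behind these annual tonnage estimates can be sketched in a few lines. The sketch below uses the report's conservative continuous-operation assumption; the 10 lbs/hr input is a hypothetical tested rate for illustration, not an actual plant value from the performance tests.

```python
# Sketch of the annual tonnage conversion used for the estimates above.
# Assumes continuous operation at maximum emission levels, 24 hours a
# day, 365 days a year, as the report does; the 10 lbs/hr input is a
# hypothetical tested rate, not an actual plant value.
HOURS_PER_YEAR = 24 * 365  # 8,760 hours
LBS_PER_TON = 2000

def annual_tons(lbs_per_hour):
    """Convert a tested emission rate (lbs/hr) to an estimated tons/year."""
    return lbs_per_hour * HOURS_PER_YEAR / LBS_PER_TON

print(annual_tons(10))  # 43.8 tons per year
```

Because actual operations include maintenance outages and varying electrical demand, figures computed this way overstate likely emissions, as the report notes.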
Another way to examine the environmental impact of a power plant is to evaluate the amount of pollution emitted per unit of electricity produced. This calculation has been used within the energy industry to measure how efficiently power plants produce electricity. As illustrated in table 6, the Sempra and Intergen plants produce much lower emissions of NOX for each megawatt of energy generated than do other power plants operating in Imperial County and the border region of Baja California, Mexico. For example, Sempra's estimated emission rate for NOX of 0.04 pounds per megawatt of electricity is over 35 times lower than the rate at El Centro, the only major fuel-fired plant operating in Imperial County in 2002. If the Sempra and Intergen plants were located in Imperial County, to help improve air quality, California regulations would require, among other things, offsets for all emissions from the plants that contribute to nonattainment of the PM and ozone standards in the county. Under the specific offsetting rules established by the Imperial County Air Pollution Control District, the operators of each plant would be required to reduce emissions from other pollution sources in Imperial County by at least 1.2 tons for every ton of emissions the plants released. In addition to offsetting emissions of PM, Intergen would also be required to offset all emissions of VOC, which, in combination with NOX, contribute to the formation of ozone. As shown in table 7, potential offsets identified by the Imperial County Air Pollution Control District in DOE's environmental impact statement include (1) paving roads, (2) retrofitting emission controls on existing power plants in Imperial County, (3) funding projects designed to increase the use of natural gas in motor vehicles, (4) controlling Imperial County airport dust, and (5) retrofitting diesel engines for off-road heavy-duty vehicles.
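The two quantitative comparisons above can be sketched as follows. The Sempra rate and the 1.2:1 offset ratio come from this report; the El Centro rate shown is back-computed from the report's "over 35 times lower" statement and is therefore only approximate, and the 100-ton input is a hypothetical figure.

```python
# Sketch of the emissions-intensity and offset-ratio comparisons above.
# The Sempra NOx rate (0.04 lbs per megawatt) and the 1.2:1 offset
# ratio are from the report; the El Centro rate is an approximation
# back-computed from the "over 35 times lower" comparison.
SEMPRA_NOX_RATE = 0.04                 # lbs NOx per megawatt
el_centro_rate = SEMPRA_NOX_RATE * 35  # roughly 1.4 lbs per megawatt

def required_offset_tons(emitted_tons, ratio=1.2):
    """Tons of emissions reductions the Imperial County rules would require."""
    return emitted_tons * ratio

print(round(el_centro_rate, 2))   # 1.4
print(required_offset_tons(100))  # 120.0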
According to the Air Pollution Control District, repaving approximately 23 miles of roads could reduce PM emissions in Imperial County by about 650 tons per year—more than the estimated annual PMIn addition to the potential offsets identified above for Imperial County, according to DOE, mitigation measures may be even more abundant and cost-effective if applied on the Mexican side of the border. Some potential projects include paving roads in Mexicali, Mexico; replacing older automobiles and buses with newer, less polluting ones; and converting brick kilns to run on natural gas. However, according to DOE, it does not have the authority to impose or enforce offsets in Mexico. Finally, if the power plants were located in California, the Intergen plant would likely be required to make additional equipment modifications to be consistent with other plants recently constructed in California. These modifications would include installing additional carbon monoxide control equipment and achieving a small reduction in NOX emissions in one of the plant’s four combustion turbines. Although emissions testing data indicate that Intergen’s carbon monoxide levels are generally comparable to those of California plants without this equipment, nearly all of the plants recently permitted in California have installed oxidizing catalyst systems to control carbon monoxide emissions. In addition, the Intergen plant would likely be required to lower the maximum NOX emissions in one turbine by 1.0 ppm—from 3.5 ppm to 2.5 ppm. Although this turbine is currently equipped with a selective catalytic reduction system to control NOX emissions, Intergen has stated that certain technical aspects of the design of the turbine prevent it from attaining emissions levels of 2.5 ppm. Emissions from the Sempra and Intergen power plants may contribute to adverse health impacts in Imperial County, but the extent of those impacts is unknown for several reasons. 
First, in its environmental impact statement, DOE did not calculate the total health impacts in the county because it did not analyze all the likely asthma-related or other health impacts from the increased pollution caused by the Sempra and Intergen plants. Second, DOE did not analyze the health impacts from increased power plant emissions on particularly susceptible populations, such as asthmatic children and low-income populations. Finally, because of uncertainty in DOE’s modeling of ozone increases due to emissions from the power plants, the health impacts related to ozone may be larger than DOE estimated. calculated that emissions from the power plants would be expected to increase asthma hospitalizations in the county by less than one case per year. However, DOE’s analysis did not quantify all of the health impacts from the increase in PM emissions. Health experts told us that the potential impact on asthmatics would be broader than the minimal increase in hospitalizations described in the environmental impact statement because hospitalization occurs only in the most acute asthma cases. According to these experts, an increase in PM Approximately 13,000 of these asthmatics experienced asthma symptoms during the previous year. The health experts we spoke with agreed that hospitalizations and other adverse health effects are part of a pyramid of potential adverse health effects. While the number of hospitalizations is represented at the top of the pyramid, other adverse health impacts such as emergency room visits, physician visits, asthma medication use, and increased asthma symptoms are layered vertically downward, with the number of people increasing in each subsequent group as you move to the bottom of the pyramid (see fig. 4). In addition, the DOE study did not address the extent to which increased emissions of particulate matter would cause other adverse health impacts, such as other respiratory or cardiovascular problems. 
These impacts could include chronic obstructive pulmonary disease, pneumonia, cardiovascular disease, as well as increased symptoms of upper and lower respiratory disease, decreased lung function, or premature death. According to the project manager of DOE’s analysis, the expected incidence of other adverse health effects resulting from PM exposure has not been quantified because of a lack of data. Studies funded by EPA, the Health Effects Institute, and others have concluded that certain groups are likely to be more susceptible to particulate matter than others, and therefore experience more adverse health effects. For example, these studies identified asthmatics, especially children, as a potentially susceptible subpopulation. According to data from the 2003 California Health Interview Survey, approximately 19 percent of Imperial County children ages 1 through 17 have been diagnosed with asthma, or about 9,000 children. In addition, the relationship between socioeconomic factors and asthma exacerbation has been documented in various studies. Imperial County is ranked as one of the poorest counties in California, with some of the highest poverty and unemployment rates in the state. An estimated 22 percent of the overall population lives below the national poverty level, in comparison with 13 percent statewide. Results from a 2001 California asthma report indicate that asthmatic adults with family incomes below the national poverty level are nearly twice as likely to experience symptoms every day or every week as those with incomes three times the poverty level, in part, because they have less access to health care. Finally, residents of Imperial County are currently exposed to airborne particulate pollution exceeding the Clean Air Act’s health-based National Ambient Air Quality Standard for PM that was 30 micrograms, or 63 percent higher, than the national standard of 50 micrograms per cubic meter of air, as shown in table 8. 
As a result, Imperial County residents can be expected to have higher incidence of adverse health effects caused by airborne particulate pollution than residents living in areas with less of that contaminant. . However, DOE did not fully explore these conditions to determine their potential health impacts. DOE believes that because the increases in emissions from the plants are below EPA’s significant impact levels any health impacts will be negligible. However, some health studies have found that even the smallest incremental increase in particulate matter air pollution increases the incidence of adverse health effects. DOE conducted air dispersion ozone modeling for the Imperial Valley- Mexicali air basin to determine what impact emissions from the Sempra and Intergen plants would have on the formation of ozone. DOE concluded, based on its modeling, that there would be no meaningful change in ozone levels as a result of the operation of the Sempra and Intergen power plants. Consequently, DOE concluded that the health impacts from ozone formation as a result of plant emissions would be minimal. However, if the modeling is not accurate, then the health impacts could be larger than DOE estimated. EPA officials have raised concerns about the accuracy of DOE’s modeling of estimated ozone increases. In its comments on DOE’s draft environmental impact statement, EPA stated that it is difficult to quantify the impact of a small number of facilities (i.e., the two power plants) on the maximum ozone concentration in an air basin. The lack of area- specific information, such as temperature, relative humidity, and levels of volatile organic compounds (an ozone precursor), in the Imperial County-Mexicali air basin makes modeling ozone formation particularly difficult. Because these data were not available, DOE used surrogate values from Phoenix, Arizona. 
Furthermore, DOE’s analysis relied on air monitoring data and the EPA ozone model to determine the potential influence of NO emissions—the primary pollutant emitted from the Sempra and Intergen power plants—on ozone concentrations in Imperial Valley. DOE concluded that increased NOX emissions from the plants could produce a decrease in ozone concentrations. In its comments on the draft environmental impact statement, EPA stated that peak ozone concentrations generally occur in areas away from sources of high NOX emissions, not at the monitor where high NOensure that mitigation projects are completed satisfactorily. Finally, DOE acknowledged in the environmental impact statement that mitigation measures may be more abundant and cost-effective in Mexico. However, DOE told us that while it has the authority to require the plants to take mitigation measures in the United States, it does not have the authority to require or enforce such measures in Mexico. Because the Sempra and Intergen power plants are not subject to either the federal Clean Air Act or the California Clean Air Act, they are not required to provide offsets for their emissions. In addition, relevant agreements among the United States, Canada, and Mexico may not provide adequate mechanisms to address adverse health impacts resulting from emissions from these plants. As a result, policymakers have limited options to ensure that emissions from these plants do not adversely affect the health of residents in Imperial County. Existing U.S. law provides few options to ensure that emissions from the Intergen and Sempra plants do not adversely affect the health of residents in Imperial County. Because the Intergen and Sempra plants are not located in the United States, federal and California environmental agencies do not have authority over the plants. The federal Clean Air Act contains no language extending the statute’s coverage to pollution sources that are located outside of the United States. 
Similarly, the text of the California Clean Air Act limits its application to pollution sources that are located in California. Because neither of these laws applies to the Sempra or Intergen plants, U.S. environmental agencies have no authority under existing law to require the plants to implement pollution control measures. Similarly, existing international agreements provide few options to ensure that emissions from the Sempra and Intergen plants do not adversely affect the health of residents of Imperial County. The governments of the United States and Mexico have ratified two agreements that are of particular importance to environmental conditions in the border region. The first was signed at La Paz, Mexico, in 1983. The La Paz Agreement creates a framework for promoting cooperation between the United States and Mexico on issues of environmental protection in the border region. For example, the agreement states that the United States and Mexico will “cooperate in the solution of the environmental problems of mutual concern in the border area,” and that high officials from the two countries will meet annually to review the agreement’s implementation. The agreement does not require either government to implement specific pollution control requirements or provide a course of action for either country to pursue if a particular project in the border region harms the health of border region inhabitants. The other environmental agreement, the North American Agreement on Environmental Cooperation (NAAEC), also provides few options to ensure that emissions from the Sempra and Intergen plants do not adversely affect the health of residents of Imperial County. The United States, Canada, and Mexico signed the NAAEC in 1993 to supplement the provisions of the North American Free Trade Agreement. 
The NAAEC provides a dispute resolution procedure under which the United States, Mexico, or Canada may request consultation with another party to the agreement regarding whether there has been a persistent pattern of failure by that other party to effectively enforce its environmental law. The parties must make every attempt to resolve the matter through the consultative process. However, if consultation fails to lead to a satisfactory resolution, then either party may take a series of steps that may culminate in the meeting of an impartial, five-member arbitration panel. This panel can determine whether the party complained against has persistently failed to enforce its environmental law. If the panel issues a decision finding such a persistent failure, it may formulate an action plan to remedy the enforcement failure and may ultimately impose monetary penalties if the enforcement failure persists. Thus, the NAAEC dispute resolution procedure provides an option for U.S. policymakers, but only if the Mexican government persistently fails to enforce the Mexican environmental laws that apply to the two plants. The NAAEC dispute resolution procedure does not provide a useful option for U.S. policymakers if the Intergen and Sempra plants comply with Mexican law, even if the plants adversely affect the health of the residents of Imperial County. There are some actions policymakers could take to protect Imperial County from increased emissions from the Sempra and Intergen power plants. For example, the Congress could enact legislation restricting the importation of electricity generated by power plants whose electrical output is dedicated exclusively to the United States if they do not meet certain U.S. emission and offset requirements. While this action would have benefits to air quality and health, it would also have costs, such as possibly reducing energy supplies available to southern California. 
In 2003 a bill was introduced in the Senate and House that would have prohibited the exportation of natural gas from the United States to Mexico for use in power plants near the U.S. border if the plants do not provide air quality protection that is at least equivalent to the protection provided by air quality requirements applicable in the United States. Each chamber referred the bill to committee; neither the Senate nor the House committee reported the bill to the full chamber for consideration. Similarly, DOE could modify its regulations that apply to applicants for presidential permit seeking to build new international transmission lines to import electricity into the United States from Mexico. The modified regulations could require that the lines connect to plants that employ specified emissions controls and obtain offsets in the United States. However, limiting the import of electricity from Mexico into California could jeopardize some electricity supplies for parts of southern California, which could be problematic especially during peak consumption periods. According to the California Independent System Operator, the demand for energy in California is growing at nearly 4 percent annually. During the summer of 2004, the peak demand record set in 1999 was broken seven times, and the California Independent System Operator believes that the record will likely fall again during the summer of 2005. Moreover, both of the above options would need to be assessed to determine if they are compliant with the North American Free Trade Agreement (NAFTA). The agreement allows either the U.S. or Mexico to restrict energy imports for a range of reasons, including protection of human life or health. However, such import restrictions must meet a variety of conditions. 
For example, though NAFTA recognizes a country’s right to license imports and exports of energy, any such licensing system must be consistent with NAFTA and not frustrate its overall objectives of eliminating trade barriers, promoting fair competition, and increasing investment opportunities. NAFTA also requires energy regulatory agencies to minimize disruptions to contractual relationships in applying their regulations. A third option would be for the United States and Mexico to expand cooperation under the existing binational initiative to address transboundary air pollution in the U.S.-Mexico border region by providing economic incentives, such as emissions trading, to reduce pollution. Such programs have proven successful in the United States in reducing emissions that contribute to acid rain. At the international level, the United States and Canada have developed an air pollution agreement that could possibly serve as a model for a similar agreement between the United States and Mexico. However, based on the Canadian example, developing the legal and regulatory framework needed to create a binational emissions trading program with Mexico is likely to take a significant amount of time. The United States and Canada initiated cooperative efforts in 1980 through a memorandum of understanding, and 11 years later the 1991 U.S.-Canada Air Quality Agreement identified market-based mechanisms, including emissions trading, as areas for further discussion. In April 1997, the United States and Canada agreed on a joint plan of action for addressing transboundary air pollution, expanding the initial focus on acid rain to also examine ground-level ozone and particulate matter. In 2000, the Air Quality Agreement was formally expanded to address transboundary ground-level ozone issues. In 2004, the United States and Canada initiated a joint study to examine the feasibility of establishing a binational emissions trading program. 
Issues being addressed include the legal authority and the air pollution monitoring, assessment, and reporting system that would be needed to implement such a program. At the state level, Texas passed legislation in 2001 that authorized the state’s environmental agency to accept reductions in emissions from brick kilns in Ciudad Juarez, Mexico, to satisfy new state emission control requirements passed by the Texas Legislature in 1999. In return for air emission allowances under Texas law, the local utility, El Paso Electric, arranged the destruction of older, high-polluting, open-top kilns and replaced them with less polluting closed-top kilns. This emission control project serves the Paso Del Norte air basin, which is officially recognized in the La Paz Agreement, and includes El Paso, Texas, and Ciudad Juarez, Mexico. Finally, another potential option is the development of a binational clean air trust fund that could provide grants and loans to support projects that would improve the air quality of U.S. and Mexican cities that share air basins in the border region. Implementing such a program could help offset emissions generated by a variety of sources, including power plants in Mexico that are not required to offset their emissions. Funds from a variety of sources, such as appropriations from both nations’ legislatures, fast-lane fees for cars and trucks at ports of entry, and fees from airports and railroads operating along the border, could be held in a joint U.S.-Mexican trust fund for distribution to states, counties, cities, or local air pollution control districts along the shared border. The binational clean air trust fund could also potentially obtain funds from power plants located in the U.S.-Mexico border region that are looking for opportunities to offset their emissions, although they are not required to do so by law. Both Intergen and Sempra have shown an interest in supporting projects aimed at improving the air quality in the border region. 
For example, Intergen supports an applied research grant program to improve air quality in the California–Mexico border region, and Sempra is developing a fund to support the implementation of environmental projects, such as road paving, in the border city of Mexicali, Mexico, that it expects to implement before the end of 2005. The Sempra and Intergen plants near Mexicali, Mexico, are modern power plants that use advanced air pollution control technologies. As a result, the pollution they emit is comparable to that emitted by similar plants that have recently received permits to operate in California and is low relative to dust and emissions from vehicles, the primary sources of pollution in Imperial County. Nevertheless, the plants emit some pollutants into an air basin that already does not meet some air quality standards and is home to many asthmatic children and a low-income population that may be particularly susceptible to adverse health consequences from any level of pollution increase. DOE concluded in its environmental impact statement that pollution from the plants would not result in significant health impacts in Imperial County and therefore did not require the plants to offset their emissions. However, the DOE analysis did not fully examine several issues that could have led to an assessment of a larger adverse health impact in Imperial County. In addition, if the plants were located 3 miles north in Imperial County, California, they would be required to fund projects to reduce pollution from other sources to offset their emissions regardless of whether there was a documented adverse health impact. However, now that DOE has determined that no offsets are required, options available to U.S. policymakers in the short term to directly address the existing health concerns are limited. 
In the long term, the United States and Mexico could implement an emissions trading program or a clean air trust fund to address pollution in the border area, but such programs are likely to take years, and require significant binational effort to develop. We provided draft copies of this report to the Department of Energy (DOE) and to the Environmental Protection Agency (EPA) for their review and comment. We received a written response from DOE’s Director, Office of Electricity Delivery and Energy Reliability. EPA provided technical comments which we incorporated in the report. DOE disagreed with our assertion that it did not analyze all of the likely asthma-related and other health impacts of increased pollution from the Sempra and Intergen power plants. Specifically, DOE stated that the environmental impact statement for the two plants (1) notes the full range of respiratory effects associated with exposure to airborne particulate matter (PM. While DOE’s environmental impact statement acknowledges that increases in PMimpact. Moreover, DOE’s analysis does not differentiate among different population subgroups in terms of their susceptibility to the effects of air pollution but instead characterizes potential adverse health effects for the population as a whole. Consequently, we continue to believe that DOE’s environmental impact statement did not address the full range of potential health impacts on susceptible populations in Imperial County. Finally, DOE does not agree that the health impacts from ozone formation may be larger than it estimated in the environmental impact statement. DOE said that it addressed EPA’s concerns regarding the uncertainty in the ozone modeling in the final environmental impact statement. 
However, while EPA acknowledged in its comments on the final environmental impact statement that the document clarified the limitations of the ozone modeling analysis, it also reiterated its support for off-site mitigation efforts to address these limitations to ensure that there is no net increase in air pollution in Imperial County. As a result, we continue to believe that ozone formation may have larger health impacts than estimated in the final environmental impact statement. DOE’s specific comments and our detailed responses are presented in appendix II of this report. We are sending copies of this report to the Secretary of Energy, and the Administrator of the Environmental Protection Agency, and appropriate congressional committees. We will also provide copies to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. The objectives of this report were to determine (1) how emissions from the Sempra and Intergen power plants compare to emissions from recently permitted plants in California and emissions from sources in Imperial County, and what emissions standards the plants would be subject to if they were located in Imperial County; (2) the health impacts of emissions from the power plants on Imperial County residents; and (3) what options exist for U.S. policymakers to ensure that emissions from these power plants do not adversely affect the health of Imperial County residents. To address all three of these objectives we visited the Sempra and Intergen plants in Mexicali, Mexico; interviewed plant representatives, various U.S. 
federal, state, and local air quality officials, and other stakeholders; and reviewed relevant documents and studies. To determine emissions from the Sempra and Intergen plants we obtained data from emissions performance tests conducted at the plants by third party contractors (GE Mostardi Platt and Air Hygiene). These tests were designed to document the average emissions of selected pollutants (nitrogen oxide, particulate matter, carbon monoxide, volatile organic compounds, and ammonia) from the combustion turbines at each of these plants. The results of these tests were reported in standard units of measurement, namely parts per million or pounds per hour. According to the contractors, they completed the tests according to Environmental Protection Agency and California-approved methods and conducted quality assurance activities related to their test results. We assessed the reliability of the data by (1) reviewing documentation of test objectives and quality control procedures provided by the third party contractors, (2) conducting interviews with plant officials to determine the scope and generalizability of the tests, and (3) reviewing reports of actual NOX emissions submitted to the Mexican government to ensure consistency with the test results. Based on this assessment, we determined that the data were sufficiently reliable for the purposes of this report. To determine annual emissions estimates from these plants, we used the results from the emissions tests to calculate the annual tonnage that these plants would be likely to emit. We computed these values based on the conservative assumption that these plants would be operating 24 hours a day, 365 days a year. In addition to the estimates obtained from the testing results, we also used the maximum allowable emission limits of comparable plants in California to develop a more conservative estimate of annual emissions from these plants. 
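The annual-tonnage calculation described above can be sketched in a few lines of Python. This is illustrative only; the 10 lb/hr emission rate below is a hypothetical figure, not one of the actual test results.

```python
# Annualize an hourly emission rate under the report's conservative
# assumption of continuous operation (24 hours a day, 365 days a year).

HOURS_PER_YEAR = 24 * 365      # 8,760 hours
LBS_PER_SHORT_TON = 2000

def annual_tons(lbs_per_hour):
    """Annual emissions in short tons, assuming round-the-clock operation."""
    return lbs_per_hour * HOURS_PER_YEAR / LBS_PER_SHORT_TON

# A turbine emitting a hypothetical 10 lb/hr would emit 43.8 tons/year.
print(annual_tons(10.0))  # 43.8
```

The same conversion applies to any pollutant reported in pounds per hour; rates reported in parts per million must first be converted to a mass flow using stack conditions, which the sketch does not attempt.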
For Sempra, we utilized the Elk Hills power plant as the primary basis for developing comparative estimates. This natural gas-fired power plant is partially owned by Sempra Energy and utilizes equipment and pollution control technology very similar to that of the Mexicali plant. For the Intergen plant, we used a combination of comparable estimates because no similar, Intergen-owned facilities were recently constructed in California. Nitrogen oxide (NOX) and ammonia (NH3) emissions were estimated using the average allowable emissions limit from all comparable plants in California permitted between 2000 and 2004. Because the Intergen plant is not equipped with an oxidation catalyst, carbon monoxide (CO) was estimated using a specific plant in California, permitted in 2000, that was the only one licensed without such control equipment. Finally, because some California permits establish volatile organic compound (VOC) limits in parts per million and others do so in pounds per hour, we were not able to develop an average for all recently permitted plants in California. For this reason, we used emissions limits from the Elk Hills power plant to estimate annual emission levels of VOC at the Intergen plant. To determine how estimated emissions from the Sempra and Intergen plants compare to recently permitted plants in California, we developed a range of maximum allowable emission limits for all natural gas-fired power plants in California with similar specifications, licensed between 2000 and 2004. This time frame was chosen because it corresponded to the dates that the Sempra and Intergen plants in Mexicali were designed, permitted, and began commercial operations. Because all California power plants are permitted on a case-by-case basis, emissions limits may vary with each project. Therefore, we used the entire range of emission limits for the 23 plants that were identified during our selection process. 
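The averaging step used for the NOX and ammonia estimates might look like the following sketch; the plant names and limit values are invented for illustration and are not from the actual California permits.

```python
# Average the maximum allowable emission limits (lb/hr) across a set of
# comparable permitted plants, as described for the NOx and ammonia
# estimates. All names and values here are hypothetical.

comparable_nox_limits = {
    "Plant A": 12.0,
    "Plant B": 10.5,
    "Plant C": 13.5,
}

def average_limit(limits):
    """Mean of the allowable limits across comparable plants."""
    return sum(limits.values()) / len(limits)

print(average_limit(comparable_nox_limits))  # 12.0
```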
We then compared the range of emission limits from the 23 plants that we identified with the third party testing results we obtained from the Sempra and Intergen plants. To determine how the emissions from these plants compare to emissions from sources in Imperial County we utilized the 2004 estimated annual average emissions inventory for Imperial County developed by the California Air Resources Board. We also met with officials from the California Air Resources Board and reviewed emissions reports for stationary sources obtained from the Imperial County Air Pollution Control District. To determine the levels of nitrogen oxide emissions from the Sempra and Intergen plants in relation to existing plants in Imperial County and Baja California, Mexico, we obtained reports developed for the Mexican government that included annual emissions of nitrogen oxides based on data from the continuous emissions monitoring system on each turbine. Comparable data for the El Centro plant in Imperial County and the two Baja California plants were obtained from a report produced by the Commission for Environmental Cooperation of North America. To assess the reliability of these data sources we (1) spoke with officials at the California Air Resources Board and reviewed documentation related to data collection and quality control procedures used to develop the annual emissions inventory, and (2) corroborated the emissions data related to the El Centro plant with the EPA Clean Air Markets database. Based on these assessments, we determined that the data were sufficiently reliable for the purposes of this report. To determine what emissions standards the plants would be subject to if located in Imperial County we reviewed the principal federal regulations applicable to new power plants located in the United States and the emission limits of similar plants recently permitted in California. 
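The comparison itself reduces to checking where a tested emission rate falls within the range of permitted limits. The sketch below assumes hypothetical values, not the actual limits of the 23 identified plants.

```python
# Compare a plant's tested emission rate against the range of maximum
# allowable limits from recently permitted California plants.
# The limits and tested rates here are illustrative only.

permitted_limits_lb_hr = [8.0, 9.5, 11.0, 12.5, 14.0]

def within_permitted_range(tested, limits):
    """True if the tested rate falls inside the permitted min-max range."""
    return min(limits) <= tested <= max(limits)

print(within_permitted_range(10.0, permitted_limits_lb_hr))  # True
print(within_permitted_range(20.0, permitted_limits_lb_hr))  # False
```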
The primary federal regulations we reviewed were those established under EPA’s New Source Review program for new or modified major pollution sources. We reviewed selected state and local air pollution regulations because state and local agencies have responsibility for implementing specific permitting activities as part of the federal program. The state and local regulations we reviewed included the permitting conditions of several power plants licensed by the California Energy Commission to determine the standard permitting criteria and the air quality rules established by the Imperial County Air Pollution Control District for sources located in Imperial County. To identify the potential health impacts from emissions generated by the Sempra and Intergen power plants, we reviewed the health assessment in DOE’s environmental impact statement. We met with the project manager of DOE’s health assessment to gather additional information about the assessment methodology. We reviewed EPA’s comments on the environmental impact statement, and interviewed EPA officials and health experts regarding DOE’s health assessment methodology. In addition, we reviewed relevant EPA reports, and other health studies regarding the impacts of particulate matter and ozone on human health. Finally, we reviewed a recent California health survey to obtain current information on asthmatic populations in Imperial County and other California counties. To determine the policy options available to ensure that emissions from the Sempra and Intergen plants do not adversely affect the health of Imperial County residents, we reviewed the federal Clean Air Act, the California Clean Air Act, key provisions of the North American Free Trade Agreement, as well as environmental agreements between the United States and Mexico, such as the La Paz agreement, and a trilateral agreement between the United States, Mexico, and Canada—the North American Agreement on Environmental Cooperation; and academic research. 
We also participated in a transboundary air quality management conference where officials from various federal, state, and local agencies in the United States and Mexico met to discuss strategies to address binational air pollution. We conducted our work between September 2004 and August 2005 in accordance with generally accepted government auditing standards. The following are GAO’s comments on DOE’s written comments provided in its letter dated July 29, 2004. 1. While DOE’s environmental impact statement acknowledges that increases in PM and ozone may affect air quality, Imperial County is particularly vulnerable to such increases because it does not meet air quality standards for these two pollutants. 2. Asthma hospitalizations are just one measure of potential adverse health impacts from increased emissions of particulate matter. While asthma hospitalizations are more severe and likely to occur less often than doctor visits or increased medication use for asthma, they cannot be considered representative of the “full range” of potential adverse health impacts associated with asthma in Imperial County. In addition to asthma-related adverse health effects, numerous studies have linked increased exposure to particulate matter to other non-asthma-related adverse health effects, such as chronic bronchitis, chronic lung disease, pneumonia, and cardiovascular disease. 3. We disagree with DOE that hospitalizations are the best parameter for representing impacts on asthma. While asthma hospitalizations in Imperial County may be well documented, the 2003 California Health Survey provides information on other asthma-related health impacts in Imperial County. For example, the survey contains information on the number of Imperial County residents who take medication to control asthma. In addition, the survey presents information on the number of Imperial County residents who had asthma symptoms within a specified time frame and who visited an emergency room or urgent care facility for asthma-related health problems during that time frame. 
Such information could have been used, along with information on hospitalizations, to create a more complete estimate of the potential asthma-related health effects from increases in pollution from the power plants. 4. The report does not use the health effects pyramid, or suggest it should be used, to compute instances of potential health effects from air pollution. However, we believe that the health effects pyramid is useful for understanding the variety of ways in which increased pollution can aggravate asthma suffering in Imperial County. In so doing, it also highlights the full range of potentially quantifiable effects related to asthma. 5. During our review of the health effects literature, we identified a number of studies that support a linear relationship between increases in particulate matter pollution and increased incidence of cardiovascular diseases. 6. Asthmatic children are not the only susceptible population mentioned in our report, and asthma hospitalization is not the only potential health impact. Consequently, we continue to believe that DOE’s environmental impact statement did not address the full range of potential health impacts on susceptible populations in Imperial County. DOE’s quantification of just one adverse health impact for the entire population of Imperial County masks the differential effects that can beset more susceptible subpopulations in the County. 7. In commenting on the final environmental impact statement, EPA acknowledged that DOE had clarified the limitations and uncertainties of the ozone modeling analysis. However, in its comments EPA said it continues to support and encourage off-site mitigation efforts to address the limitations in the ozone modeling to ensure that there is no net increase of air pollution in Imperial County. 8. We believe that EPA’s comment regarding peak ozone concentrations is relevant because it is presented in the context of EPA’s comments on the draft environmental impact statement. 
We also note in the report that DOE took action in response to EPA’s comment. In addition to the contact named above, Leo G. Acosta, Charles Bausell, Nancy Crothers, Brandon Haller, Ryan Lambert, Omari Norman, Kim Raheb, and Stephen Secrist made key contributions to this report.
|
Power plants emit pollutants that have been linked to various negative health effects. In 2003, two new power plants, owned by Sempra Energy and Intergen, began operations 3 miles south of the U.S.-Mexico border near Imperial County, California. The county does not meet some federal and state air quality standards and may be further impacted by the emissions from these plants. Although these plants export most of the electricity they produce to the United States, they are not currently required to meet any U.S. or California emissions standards. GAO was asked to determine (1) how emissions from the two plants compare with emissions from recently permitted plants in California and emissions from sources in Imperial County, and what emissions standards they would be subject to if they were located in Imperial County; (2) the health impacts of emissions from the plants on Imperial County residents; and (3) options available to U.S. policymakers to ensure that emissions from these plants do not adversely affect the health of Imperial County residents. In commenting on a draft of this report, DOE disagreed with our characterization of the limitations of its assessment of the health impact of pollution from the Sempra and Intergen power plants. We believe we have portrayed the limitations of this assessment accurately. The estimated emissions from the Sempra and Intergen power plants near Mexicali are comparable with those of similar plants recently permitted in California and are low relative to emissions from the primary sources of pollution in Imperial County, California, which are dust and vehicles. However, if the plants were located in Imperial County, they would be required to take steps, such as paving dirt roads, to reduce emissions from other pollution sources in the region, because the county is not meeting certain U.S. air quality standards. 
Although emissions generated from the Sempra and Intergen plants may contribute to various adverse health impacts in Imperial County, the extent of such impacts is unknown. The Department of Energy (DOE) estimated that emissions from these plants may increase asthma hospitalizations by less than one per year. However, DOE did not quantify any other asthma-related impacts, such as emergency room visits or increased use of medications, which, although less severe, are likely to occur more often. In addition, DOE did not determine whether increased emissions would cause other respiratory or cardiovascular problems, or assess the impact of particulate matter on particularly susceptible populations. Finally, the potential health impacts associated with ozone could be greater than DOE estimated because some important data needed for modeling were not available. Existing laws and international agreements may not provide adequate mechanisms to address adverse health impacts resulting from power plant emissions. Policymakers could take some actions, such as requiring plants that seek to export electricity to the United States to use specified emission controls. While this action would have benefits, it would also have costs, such as possibly reducing energy supplies available to Southern California. Long-term policy options include the development of a binational pollution reduction program or a trust fund to provide grants and loans to support air quality improvement projects. However, substantial efforts on both sides of the U.S.-Mexico border would be required to establish the legal and management framework necessary for such programs to be effective.
|
The transportation of large amounts of spent fuel to an interim storage or permanent disposal location is inherently complex and the planning and implementation may take decades to accomplish. The actual time it would take depends on a number of variables including distance, quantity of material, mode of transport, rate of shipment, level of security, and coordination with state and local authorities. For example, according to officials from a state regional organization we interviewed and the Blue Ribbon Commission report, transportation planning could take about 10 years, in part because routes have to be agreed upon, first responders have to be trained, and critical elements of infrastructure and equipment need to be designed and deployed. As we previously reported, DOE does not have clear legislative authority for either consolidated interim storage or for permanent disposal at a site other than Yucca Mountain and, as such, there is no facility to which DOE can transport commercial spent nuclear fuel. Without clear authority, DOE cannot make the transportation decisions necessary regarding commercial spent nuclear fuel. Specifically, as we reported in November 2009, August 2012, and October 2014, provisions in NWPA that authorize DOE to arrange for consolidated interim storage have either expired or are unusable because they are tied to milestones in the development of a repository at Yucca Mountain that have not been met. DOE officials and experts from industry we interviewed in October 2014 agreed with this assessment, and noted that the federal government’s ability to site, license, construct, and operate a consolidated interim storage facility not tied to Yucca Mountain depends on new legislative authority. 
For permanent disposal, we reported in April 2011 (GAO-11-229) that developing a permanent repository other than Yucca Mountain will restart the likely time-consuming and costly process of siting, licensing, and developing such a repository, and it is uncertain what legislative changes, if any, might be needed to develop a new repository. In part, this is because NWPA, as amended, directs DOE to terminate all site-specific activities at candidate sites other than Yucca Mountain. As we reported in October 2014, experts identified technical challenges that could affect the transportation of spent nuclear fuel, and these challenges could be resolved with sufficient time. The three technical challenges the experts described were (1) uncertainties related to the safety of high burn-up fuel during transportation, (2) readiness of spent nuclear fuel to be transported under current guidelines, and (3) sufficiency of the infrastructure to support transportation. Before 2000, most fuel discharged from U.S. nuclear power reactors was considered low burn-up fuel and, consequently, the industry has had decades of experience in transporting it. As we reported in October 2014, various reports from DOE, NRC, the Electric Power Research Institute, and the Nuclear Waste Technical Review Board, as well as experts we interviewed, agreed that uncertainties exist on how long high burn-up fuel—used for about 10 years—can be stored and then still be safely transported. Once sealed in a canister, the spent fuel cannot easily be inspected for degradation. We reported that as of August 2014, NRC officials told us that they had analyzed laboratory tests and models developed to predict the changes that occur during dry storage and that the results indicate that high burn-up fuel will maintain its integrity over very long periods of storage and can eventually be safely transported. 
However, NRC officials said they continued to seek additional evidence to confirm their position that long-term storage and transportation of high burn-up spent nuclear fuel is safe. We also reported that DOE and the Electric Power Research Institute have planned a joint development project to test high burn-up fuel for degradation, but those results will not be available for about a decade. As we reported in October 2014, because the guidelines governing dry storage of spent nuclear fuel allow higher temperatures and external radiation levels than guidelines for transporting the fuel, some of the spent nuclear fuel in dry storage may not be readily transportable. For example, according to the Nuclear Energy Institute, as of 2012, only about 30 percent of spent nuclear fuel currently in dry storage is cool enough to be directly transportable. For safety reasons, transportation guidelines do not allow the surface of the transportation cask to exceed 185 degrees Fahrenheit (85 degrees Celsius) because the spent nuclear fuel is traveling through public areas using the nation’s public transportation infrastructure. NRC’s guidelines on spent nuclear fuel dry storage limit spent nuclear fuel temperature to 752 degrees Fahrenheit (400 degrees Celsius). Scientists from the national laboratories and experts from industry we interviewed suggested three options for dealing with the stored spent nuclear fuel so it can be transported safely: (1) leave it to cool and decay at reactor sites, (2) repackage it into smaller canisters that reduce the heat and radiation, or (3) develop a special transportation “overpack” to safely transport the spent nuclear fuel in the current large canisters. 
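The two temperature guidelines cited above are straightforward unit conversions. The check below is a minimal sketch that only verifies the Celsius/Fahrenheit equivalence and illustrates a threshold comparison; it is not NRC's actual evaluation procedure.

```python
# Verify the Celsius/Fahrenheit equivalence of the two limits cited in
# the text, and sketch a simple comparison against the 85 °C (185 °F)
# cask-surface transport guideline. Illustrative only.

TRANSPORT_SURFACE_LIMIT_C = 85.0    # transportation cask surface limit
DRY_STORAGE_FUEL_LIMIT_C = 400.0    # dry storage fuel temperature limit

def c_to_f(celsius):
    """Convert degrees Celsius to degrees Fahrenheit."""
    return celsius * 9 / 5 + 32

def surface_temp_ok(surface_temp_c):
    """True if a cask surface temperature meets the transport guideline."""
    return surface_temp_c <= TRANSPORT_SURFACE_LIMIT_C

print(c_to_f(TRANSPORT_SURFACE_LIMIT_C))   # 185.0
print(c_to_f(DRY_STORAGE_FUEL_LIMIT_C))    # 752.0
print(surface_temp_ok(90.0))               # False
```

The wide gap between the storage limit (400 °C) and the transport surface limit (85 °C) is what drives the cooling, repackaging, and overpack options discussed above.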
However, as we reported in August 2012, spent nuclear fuel stored at reactor sites that had already shut down and dismantled their infrastructure may pose an even more difficult challenge because the ability to repackage the fuel or develop similar solutions may be limited without building additional infrastructure, such as a special transfer facility; otherwise, the spent fuel would need to be shipped to a site that had a transfer facility. See DOE, Office of Fuel Cycle and Research Development, A Project Concept for Nuclear Fuels Storage and Transportation, FCRD-NFST-2013-000132 Rev. 1 (June 15, 2013). In 2013, DOE completed a preliminary technical evaluation of options available and needed infrastructure for DOE or a new waste management and disposal organization to transport spent nuclear fuel from shut-down sites to a consolidated interim storage facility. According to DOE officials, there was no need to make a decision regarding how best to move forward with the study results because there was, at that time, no site and no authorization to site, license, construct, and operate a consolidated interim storage facility. We also reported in October 2014 that procuring qualified railcars may be a time-consuming process, in part because of the design, testing, and approval required for a railcar that meets specific Association of American Railroads standards for transporting spent nuclear fuel. As we found in October 2014, public acceptance is key to any aspect of a spent nuclear fuel management and disposition program, including transportation. Specifically, unless and until there is a broad understanding of the issues associated with management of spent nuclear fuel, specific stakeholders and the general public may be unlikely to support any spent nuclear fuel program. In particular, a program that has not yet been developed or for which a site has not been identified may have challenges in obtaining public acceptance. 
This finding is not new; in April 2011 and in October 2014, we found reports spanning several decades that identified societal and political opposition as the key obstacles to spent nuclear fuel management. For example, in 1982, the congressional Office of Technology Assessment reported that public and political opposition were key factors in siting and building a repository. The National Research Council of the National Academies reiterated this conclusion in a 2001 report, stating that the most significant challenge to siting and commencing operations at a repository is societal. Our analysis of stakeholder and expert comments indicates the societal and political factors opposing a repository are the same for a consolidated interim storage facility. Moreover, we reported in April 2011 and October 2014 that any spent nuclear fuel management program is going to take decades to develop and to implement and that maintaining public acceptance over that length of time will face significant challenges. We also reported in November 2009 that the nation could not be certain that future generations would have the willingness or ability to maintain decades-long programs we put into place today. Of particular concern is having to transport spent nuclear fuel more than once, which may be required if some spent nuclear fuel is moved to an interim storage facility prior to permanent disposal. Some stakeholders have voiced concerns that because of this opposition to multiple transport events, a consolidated interim storage site may become a de facto permanent storage site. In October 2014, we reported that, according to experts and stakeholders, social media has been used effectively to provide information to the public through coordinated outreach efforts by organizations with an interest in spent nuclear fuel policy. Some of these organizations oppose DOE’s strategy, and the information they distribute reflects their agendas. 
In contrast, we reported that DOE had no coordinated outreach strategy, including social media. We concluded that in the absence of a coordinated outreach strategy by DOE, specific stakeholders and the general public may not have complete or accurate information about the agency’s activities, making it more difficult for the federal government to move forward with any policy to manage spent nuclear fuel. We recommended that DOE develop and implement a coordinated outreach strategy for providing information to specific stakeholders and the general public on federal activities related to managing spent nuclear fuel—which would include transportation planning. DOE generally agreed with our recommendation. Chairman Shimkus, Ranking Member Tonko, and Members of the Subcommittee, this concludes my prepared statement. I would be pleased to respond to any questions you may have at this time. If you or your staff members have any questions about this testimony, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. Karla Springer (Assistant Director), and Antoinette Capaccio, Robert Sánchez, and Kiki Theodoropoulos also made key contributions to this testimony. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
|
Spent nuclear fuel—the used fuel removed from commercial nuclear power reactors—is an extremely harmful substance if not managed properly. The nation's inventory of spent nuclear fuel has grown to about 72,000 metric tons currently stored at 75 sites in 33 states, primarily where it was generated. Under the Nuclear Waste Policy Act of 1982, DOE was to investigate Yucca Mountain, a site about 100 miles northwest of Las Vegas, Nevada, for the disposal of spent nuclear fuel. DOE terminated its work at Yucca Mountain in 2010 and now plans to transport the spent nuclear fuel to interim storage sites beginning in 2021 and 2024, then to a permanent disposal site by 2048. Transportation of spent nuclear fuel is a major element of any policy adopted to manage and dispose of spent nuclear fuel. This testimony discusses three key challenges related to transporting spent nuclear fuel: legislative, technical, and societal. It is based on reports GAO issued from November 2009 to October 2014. Based on its prior work, GAO found three key challenges related to the transportation of spent nuclear fuel: legislative, technical, and societal. Societal challenges. As GAO reported in October 2014, public acceptance is key for any aspect of a spent nuclear fuel management and disposition program—including transporting it—and maintaining that acceptance over the decades needed to implement a spent fuel management program is challenging. In that regard, GAO reported that in order for stakeholders and the general public to support any spent nuclear fuel program—particularly one for which a site has not been identified—there must be a broad understanding of the issues associated with management of spent nuclear fuel. Also, GAO found that some organizations that oppose DOE have effectively used social media to promote their agendas to the public, but that DOE had no coordinated outreach strategy, including social media. 
GAO recommended that DOE develop and implement a coordinated outreach strategy for providing information to the public on its spent nuclear fuel program. DOE generally agreed with GAO's recommendation. GAO is making no new recommendations.
|
VA provides health care and other benefits to veterans in recognition of their service to our country. As of July 1, 1997, 26 percent of the nation’s population—approximately 70 million persons who are veterans, veterans’ dependents, or survivors of deceased veterans—was potentially eligible for VA benefits and services, such as health care delivery, benefit payments, life insurance protection, and home mortgage loan guarantees. VA operates the largest health care delivery system in the United States and guarantees loans on about 20 percent of the homes in the country. In fiscal year 1997, VA spent more than $17 billion on medical care and processed more than 40 million benefit payments totaling more than $20 billion. The department also provided life insurance protection through more than 2.5 million policies that represented about $24 billion in coverage at the end of fiscal year 1997. In providing these benefits and services, VA collects and maintains sensitive medical record and benefit payment information for millions of veterans and their dependents and survivors. VA also maintains medical information for both inpatient and outpatient care. For example, the department records admission, diagnosis, surgical procedure, and discharge information for each stay in a VA hospital, nursing home, or domiciliary. VA also stores information concerning health care provided to and compensation received by ex-prisoners of war. In addition, VA maintains information concerning each of the guaranteed or insured loans closed by VA since 1944, including about 3.5 million active loans. VA relies on a vast array of computer systems and telecommunication networks to support its operations and store the sensitive information it collects in carrying out its mission. 
Three centralized data centers—located in Austin, Texas; Hines, Illinois; and Philadelphia, Pennsylvania—maintain the department’s financial management systems; process compensation, pension, and other veteran benefit payments; and manage the veteran life insurance programs. In addition to the three centralized data centers, the Veterans Health Administration (VHA) operates 172 hospitals at locations across the country that operate local financial management and medical support systems on their own computer systems. The Austin Automation Center maintains VA’s departmentwide systems, including centralized accounting, payroll, vendor payment, debt collection, benefits delivery, and medical systems. In fiscal year 1997, VA’s payroll was almost $11 billion and the centralized accounting system generated more than $7 billion in additional payments. The Austin Automation Center also provides, for a fee, information technology services to other government agencies. The center currently processes a workers compensation computer application for other federal agencies and plans to expand the computing services it provides to federal agencies. The other two centralized data centers support VA’s Veterans Benefits Administration (VBA) programs. The Hines Benefits Delivery Center processes information from VA systems that support the compensation, pension, and education applications for VBA’s 58 regional offices. The Philadelphia Benefits Delivery Center is primarily responsible for supporting VA’s life insurance program. In addition, VHA hospitals operate local financial management and medical support systems on their own computer systems. 
The medical support systems manage information on veteran inpatient and outpatient care, as well as admission and discharge information, while the main medical financial system—the Integrated Funds Distribution, Control Point Activity, Accounting and Procurement (IFCAP) system—controls most of the $17 billion in funds that VA spent on medical care in fiscal year 1997. The IFCAP system also transmits financial and inventory information daily to the Financial Management System in Austin. The three VA data centers, as well as the 172 VHA hospitals, 58 VBA regional offices, and the VA headquarters office, are all interconnected through a wide area network. All together, VA’s network serves more than 40,000 on-line users. Our objective was to evaluate and test the effectiveness of general computer controls over the financial systems maintained and operated by VA at its Austin, Hines, and Philadelphia data centers as well as selected VA medical centers. General computer controls, however, also affect the security and reliability of nonfinancial information, such as veteran medical, loan, and insurance data, maintained at these processing centers. At the Austin Automation Center and VA medical centers in Dallas and Albuquerque, we evaluated controls intended to protect data and application programs from unauthorized access; prevent the introduction of unauthorized changes to application and system software; provide segregation of duties involving application programming, system programming, computer operations, security, and quality assurance; ensure recovery of computer processing operations in case of a disaster or other unexpected interruption; and ensure that an adequate computer security planning and management program is in place. 
The scope of our work at the Hines and Philadelphia benefits delivery centers was limited to (1) evaluating the appropriateness of access granted to selected individuals and computer resources, (2) assessing efforts to monitor access activities, and (3) examining the computer security administration structure. We restricted our evaluation at the Hines and Philadelphia benefits delivery centers because VA’s OIG was planning to perform a review of other general computer controls at these sites during fiscal year 1997. To evaluate computer controls, we identified and reviewed VA’s information system general control policies and procedures. Through this review and discussions with VA staff, including programming, operations, and security personnel, we determined how the general computer controls were intended to work and the extent to which center personnel considered them to be in place. We also reviewed the installation and implementation of VA’s operating system and security software. Further, we tested and observed the operation of general computer controls over VA’s information systems to determine whether they were in place, adequately designed, and operating effectively. To assist in our evaluation and testing of general computer controls, we contracted with Ernst & Young LLP. We determined the scope of our contractor’s audit work, monitored its progress, and reviewed the related work papers to ensure that the resulting findings were adequately supported. We performed our work at the VA data centers in Austin, Hines, and Philadelphia; the VA medical centers in Dallas and Albuquerque; and VA headquarters in Washington, D.C., from October 1997 through January 1998. Our work was performed in accordance with generally accepted government auditing standards. VA provided us with written comments on a draft of this report, which are discussed in the “Agency Comments” section and reprinted in appendix I. 
A basic management objective for any organization is to protect data supporting its critical operations from unauthorized access, which could lead to improper modification, disclosure, or deletion. Our review of VA’s general computer controls found that the department was not adequately protecting financial and sensitive veteran medical and benefit information. Specifically, VA did not adequately limit the access granted to authorized VA users, properly manage user IDs and passwords, or routinely monitor access activity. As a result, VA’s computer systems, programs, and data are at risk of inadvertent or deliberate misuse, fraudulent use, and unauthorized alteration or destruction occurring without detection. We also found that VA had not adequately protected its systems from unauthorized access from remote locations or through the VA network. The risks created by these security issues are serious because in VA’s interconnected environment, the failure to control access to any system connected to the network also exposes other systems and applications on the network. Due to the sensitive nature of the remote access and network control weaknesses we identified, these issues are described in a separate report with limited distribution issued to you today. A key weakness in VA’s internal controls was that the department was not adequately limiting the access of VA employees. Organizations can protect information from unauthorized changes or disclosures by granting employees authority to read or modify only those programs and data that are necessary to perform their duties. VA, however, allowed thousands of users to have broad authority to access financial and sensitive veteran medical and benefit information. At Austin, for example, the security software was implemented in a manner that provided all of the more than 13,000 users with the ability to access and change sensitive data files, read system audit information, and execute powerful system utilities. 
Such broad access authority increased the risk that users could circumvent the security software, and presented users with an opportunity to alter or delete any computer data or program. The director of the Austin Automation Center told us that his staff had restricted access to the sensitive data files, system audit information, and powerful system utilities that we identified. In addition, we found several other examples where VA did not adequately restrict the access of legitimate users, including the following. At both the Hines and Philadelphia centers, we found that system programmers had access to both system software and financial data. This access could allow the programmers to make changes to financial information without being detected. At the Hines center, we also identified 18 users in computer operations who could update sensitive computer libraries. Update access to these libraries could result in the security software being circumvented with the use of certain programs to alter or delete sensitive data. At the Dallas center, we determined that 12 computer support personnel had access to all financial and payroll programs and data. Although these support staff need access to certain programs, providing complete access weakens the organization’s ability to ensure that only authorized changes are allowed. At the Austin center, we found more than 100 users who had an access privilege that provided the ability to bypass security controls and enabled them to use any command or transaction. Access to this privilege should be limited to use in emergencies or for special purposes because it creates a potential security exposure. The director of the Austin Automation Center told us that the privilege that provided users the opportunity to bypass security controls had been removed from all individual user IDs. 
The VBA CIO also said that a task force established to address control weaknesses had evaluated the inappropriate access that we identified at the Hines and Philadelphia benefits delivery centers and made recommendations for corrective measures. We also found that VA was not promptly removing access authority for terminated or transferred employees or deleting unused or unneeded IDs. At the Dallas and Albuquerque centers, we found that IDs belonging to terminated and transferred employees were not being disabled. We identified over 90 active IDs belonging to terminated or transferred employees at Dallas and 50 at Albuquerque. If user IDs are not promptly disabled when employees are terminated, former employees are allowed the opportunity to sabotage or otherwise impair VA operations. At the Dallas center, we identified more than 800 IDs that had not been used for at least 90 days. We also identified inactive IDs at the Austin, Hines, and Albuquerque centers. For instance, at the Hines center, we found IDs that had been inactive for as long as 7 years. Allowing this situation to persist poses unnecessary risk that unneeded IDs will be compromised to gain unauthorized access to VA computer systems. In January 1998, the director of the Dallas Medical Center said that a program had been implemented to disable all user IDs for terminated employees and those IDs not used in the last 90 days. In addition, the director of the Austin Automation Center and the VBA CIO told us that IDs would be automatically suspended 30 days after the password expired at the Austin, Hines, and Philadelphia centers. One reason that VA’s user access problems existed was because user access authority was not being reviewed periodically. Such periodic reviews would have allowed VA to identify and correct inappropriate access. The directors of the Austin Automation Center and the Dallas Medical Center told us that they planned to periodically review system access. 
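The periodic access reviews that the center directors committed to lend themselves to automation. The following is a minimal sketch of such a review; the roster records, field layout, and user IDs are invented for illustration and do not reflect VA's actual systems:

```python
from datetime import date

# Hypothetical user records: (user_id, employment_status, last_login)
ROSTER = [
    ("jsmith", "active",     date(1998, 1, 10)),
    ("bdoe",   "terminated", date(1997, 11, 2)),
    ("mlee",   "active",     date(1997, 6, 1)),   # unused for well over 90 days
    ("rkim",   "active",     date(1998, 1, 20)),
]

def ids_to_disable(roster, today, inactivity_limit_days=90):
    """Flag IDs belonging to terminated employees and IDs unused
    past the inactivity limit, with the reason for each flag."""
    flagged = []
    for user_id, status, last_login in roster:
        if status == "terminated":
            flagged.append((user_id, "terminated employee"))
        elif (today - last_login).days > inactivity_limit_days:
            flagged.append((user_id, "inactive ID"))
    return flagged

flagged = ids_to_disable(ROSTER, today=date(1998, 1, 31))
```

Running such a report on a schedule, rather than relying on ad hoc cleanup, is what turns the one-time corrective actions described above into a continuing control.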
The VBA CIO also said that the Hines and Philadelphia benefits delivery centers will begin routinely reviewing user IDs and deleting individuals accordingly. In addition to overseeing user access authority, it is also important to actively manage user IDs and passwords to ensure that users can be identified and authenticated. To accomplish this objective, organizations should establish controls to maintain individual accountability and protect the confidentiality of passwords. These controls should include requirements to ensure that IDs uniquely identify users; passwords are changed periodically, contain a specified number of characters, and are not common words; default IDs and passwords are changed to prevent their use; and the number of invalid password attempts is limited. Organizations should also evaluate the effectiveness of these controls periodically to ensure that they are operating effectively. User IDs and passwords at the sites we visited were not being effectively managed to ensure individual accountability and reduce the risk of unauthorized access. VA had issued an updated security policy in January 1997 that addressed local area network user ID and password management. Specifically, this policy required users to have separate IDs; passwords to be changed periodically, be at least six characters in length, and be formed with other than common words; and IDs to be suspended after three invalid password attempts. Despite these requirements, we identified a pattern of network control weaknesses because VA did not periodically review local area network user IDs and passwords for compliance with this policy. At the Albuquerque center, we identified 119 network IDs that were allowed to circumvent password change controls, 15 IDs that did not have any passwords, and eight IDs that had passwords with less than six characters. 
At the Philadelphia center, we found that approximately half of the network user IDs, including the standard network administrator ID, were vulnerable to abuse because passwords were common words that could be easily guessed or found in a dictionary. At the Austin and Dallas centers, we found that network passwords were set to never expire. Not requiring passwords to be changed increases the risk that they will be uncovered, which could lead to unauthorized access. In February 1998, the VBA CIO told us that the Hines and Philadelphia benefits delivery centers plan to require that passwords not be common words. Additionally, the directors of both the Austin Automation Center and the Dallas Medical Center said that although their staffs did not control wide area network password management controls, they were working with VA technical staff to improve network password management by requiring passwords to be changed periodically. In addition, VA’s user ID and password management policy only applied to local area networks. VA did not have departmentwide policies governing user IDs and passwords for other computer platforms, such as mainframe computers or the wide area network. Although some organizations within VA had procedures in these areas, we identified a number of user ID and password management problems. At the Philadelphia center, we found that the security software was implemented in a manner that did not disable the master security administration ID after a specified number of invalid password attempts. Allowing unlimited password attempts to this ID, which has the highest level security authority, increases the risk of unauthorized access to or disclosure of sensitive information. At the Austin center, we determined that more than 100 mainframe IDs that did not require passwords, many of which had broad access authority, were not properly defined to prevent individuals from using them. 
Although system IDs without passwords are required to perform certain operational tasks, these IDs should not be available to individual users because IDs that do not require password validation are more susceptible to misuse. Twenty of these IDs were especially vulnerable to abuse because the account identifiers were common words, software product names, or derivations of words or products that could be easily guessed. At the Dallas and Albuquerque centers, we discovered that an ID established by a vendor to handle various support functions had remained active even though the vendor had recommended that this ID be suspended when not in use. The director of the Austin Automation Center told us that his staff had deleted nearly 50 of the mainframe IDs that did not require passwords and reduced the access authority for many of the remaining IDs that did not require passwords. In addition, the chief of the Information Resources Management Service at the Dallas Medical Center agreed to take steps to address the system maintenance ID problem we identified. We also found numerous instances where user IDs and passwords were being shared by staff. For example, as many as 16 users at the Albuquerque Medical Center and an undetermined number at the Dallas Medical Center were sharing IDs with privileges to all financial data and system software. At Austin, more than 10 IDs with high-level security access were being shared by several staff members. The use of shared IDs and passwords increases the risk of a password being compromised and undermines the effectiveness of monitoring because individual accountability is lost. The director of the Austin Automation Center told us that shared IDs had been eliminated and replaced with individually assigned user IDs. In addition, the chief of the Information Resources Management Service at the Dallas Medical Center agreed to take steps to address the shared ID problem we identified. 
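The ID and password weaknesses described above, such as missing passwords, short passwords, and common-word passwords, are exactly the conditions a periodic compliance scan can flag under the January 1997 policy. A sketch follows; the word list, account names, and passwords are invented for illustration:

```python
# A tiny stand-in for a dictionary of common words; a real scan would
# use a full word list plus software product names and derivations.
COMMON_WORDS = {"password", "welcome", "spring", "admin"}

def policy_violations(password):
    """Return the January 1997 policy violations for one password
    (at least six characters, not a common word)."""
    if not password:
        return ["no password"]
    problems = []
    if len(password) < 6:
        problems.append("shorter than six characters")
    if password.lower() in COMMON_WORDS:
        problems.append("common word")
    return problems

# Hypothetical accounts: user ID -> current password.
ACCOUNTS = {
    "vauser1": "x7Gq9pLt",
    "vauser2": "abc",
    "vauser3": "",
    "admin01": "Welcome",
}

# Report only the accounts that violate the policy.
scan = {uid: policy_violations(pw)
        for uid, pw in ACCOUNTS.items() if policy_violations(pw)}
```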
The risks created by these access control problems were also heightened significantly because the sites we visited were not adequately monitoring system and user access activity. Routinely monitoring the access activities of employees, especially those who have the ability to alter sensitive programs and data, can help identify significant problems and deter employees from inappropriate and unauthorized activities. Without these controls, VA had little assurance that unauthorized attempts to access sensitive information would be detected. Because of the volume of security information that must be reviewed, the most effective monitoring efforts are those that target specific actions. These monitoring efforts should include provisions to review unsuccessful attempts to gain entry to a system or access sensitive resources; deviations from access trends; successful attempts to access sensitive data and resources; highly sensitive privileged access; and access modifications made by security personnel. For VA, such an approach could be accomplished using a combination of the audit trail capabilities of its security software and developing computerized reports. This approach would require each facility to compile a list of sensitive system files, programs, and software so that access to these resources could be targeted. Access reports could then be developed for security staff to identify unusual or suspicious activities. For instance, the reports could provide information on browsing trends or summarizations based on selected criteria that would target specific activities, such as repeated attempts to access certain pay tables or sensitive medical and benefit information. Despite the thousands of employees who had legitimate access to VA computer systems containing financial and operational data, VA did not have any departmentwide guidance for monitoring successful and unsuccessful attempts to access system files containing key financial information or sensitive veteran data. 
As a result, VA’s monitoring efforts were not effective for detecting unauthorized access to or modification of sensitive information. The security staffs at the Philadelphia, Hines, Dallas, and Albuquerque centers were not actively monitoring access activities. At the Philadelphia center, available violation reports were not being reviewed, while at the Hines center, it was unclear who had specific responsibility for monitoring access. As a result, no monitoring was being performed at either the Hines or Philadelphia centers. In addition, neither the Dallas nor Albuquerque centers had programs to actively monitor access activities. Also, violation reports at the Austin Automation Center did not target most types of unusual or suspicious system activity, such as repeated attempts to access sensitive files or libraries or attempts to access certain accounts or pay tables. In addition, the Austin Automation Center had not developed any browsing trends or instituted a program to monitor staff access, particularly access by staff who had significant access authority to critical files, programs, and software. The director of the Austin Automation Center told us that he plans to establish a new security staff that will be responsible for establishing a targeted monitoring program to identify access violations, ensure that the most critical resources are properly audited, and periodically review highly privileged users, such as system programmers and security administrators. Also, the director of the Dallas Medical Center told us that his staff plan to periodically review user access. In addition, the chief of the Information Resources Management Service told us during follow-up discussions that the Dallas Medical Center will establish a targeted monitoring program to review access activities. Furthermore, none of the five sites we visited were monitoring network access activity. 
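A targeted monitoring report of the kind described above, built from a compiled list of sensitive resources and the security software's audit trail, can be sketched as a simple log summarization. The record format, resource names, and threshold here are invented for illustration:

```python
from collections import Counter

# Hypothetical list of sensitive resources compiled by the facility.
SENSITIVE = {"PAY.TABLE", "VET.MEDICAL", "SECURITY.AUDIT"}

# Hypothetical audit-trail records: (user_id, resource, outcome)
AUDIT_LOG = [
    ("u100", "PAY.TABLE",   "denied"),
    ("u100", "PAY.TABLE",   "denied"),
    ("u100", "PAY.TABLE",   "denied"),
    ("u200", "VET.MEDICAL", "allowed"),
    ("u300", "PRINT.QUEUE", "denied"),   # denied, but not a sensitive resource
]

def repeated_denied_attempts(log, sensitive, threshold=3):
    """Report (user, resource) pairs with repeated denied attempts
    against resources on the sensitive list."""
    counts = Counter(
        (user, resource)
        for user, resource, outcome in log
        if outcome == "denied" and resource in sensitive
    )
    return {pair: n for pair, n in counts.items() if n >= threshold}

report = repeated_denied_attempts(AUDIT_LOG, SENSITIVE)
```

Filtering on the sensitive-resource list first is what keeps the output small enough for security staff to review, which is the point of targeting rather than reviewing raw logs.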
Although logging events on the network is the primary means of identifying unauthorized users or unauthorized usage of the system by authorized users, two of the sites we reviewed were not logging network security events. Unauthorized network access activity would also go undetected at the sites that were logging network activity because the network security logs were not reviewed. The director of the Austin Automation Center told us that his staff planned to begin a proactive security monitoring program that would include identifying and investigating unauthorized attempts to gain access to Austin Automation Center computer systems and improper access to sensitive information on these systems. The director of the Dallas Medical Center also told us that his staff planned to implement an appropriate network monitoring program. In addition to these general access controls, there are other important controls that organizations should have in place to ensure the integrity and reliability of data. These general computer controls include policies, procedures, and control techniques to physically protect computer resources and restrict access to sensitive information, provide appropriate segregation of duties among computer personnel, prevent unauthorized changes to operating system software, and ensure the continuation of computer processing operations in case of an unexpected interruption. Although we did not review these general controls at the Hines and Philadelphia centers, we found weaknesses in these areas at the Albuquerque, Dallas, and Austin centers. Important general controls for protecting access to data are the physical security control measures, such as locks, guards, fences, and surveillance equipment that an organization has in place. At VA, such controls are critical to safeguarding critical financial and sensitive veteran information and computer operations from internal and external threats. 
We found weaknesses in physical security at each of the three facilities where these controls were reviewed. None of the three facilities that we visited adequately controlled access to the computer room. Excessive access to the computer rooms at these facilities was allowed because none of the sites had established policies and procedures for periodically reviewing access to the computer room to determine if it was still required. In addition, the Albuquerque Medical Center was not documenting access to the computer room by individuals who required escort, such as visitors, contractors, and maintenance staff. At the Austin Automation Center, for instance, we found that more than 500 people had access to the computer room, including more than 170 contractors. The director of the Austin Automation Center told us that since our review, access to the computer room had been reduced to 250 individuals and that new policies and procedures would be established to further scrutinize the number of staff who had access to the computer room. In addition, both the Dallas and Albuquerque medical centers gave personnel from the information resource management group unnecessary access to the computer room. At the Albuquerque Medical Center, 18 employees from the information resource management group had access to the computer room, while at the Dallas Medical Center, all information resource management staff were allowed access. At both medical centers, this access included personal computer maintenance staff and certain administrative employees who should not require access to the computer room. While it is appropriate for information resource management staff to have access to the computer room, care should be taken to limit access to only those employees who have a reasonable need. Our review also identified other physical security control weaknesses. 
For example, windows in the Dallas Medical Center computer room were not alarmed to detect potential intruders and sensitive cabling in this computer room was not protected to prevent disruptions to computer operations. In addition, chemicals that posed a potential hazard to employees and computer operations were stored inside the computer room in Austin. Furthermore, a telecommunication panel in the Austin Automation Center computer room was also not protected, increasing the risk that network communications could be inadvertently disrupted. The director of the Austin Automation Center told us that his staff had removed chemicals from the computer room and protected the telecommunications panel. In addition, the director of the Dallas Medical Center told us that his staff plan to address the physical security problems when the computer room is moved to a new facility. Another fundamental technique for safeguarding programs and data is to segregate the duties and responsibilities of computer personnel to reduce the risk that errors or fraud will occur and go undetected. Duties that should be separated include application and system programming, quality assurance, computer operations, and data security. At the Austin Automation Center, we found three system programmers who had been assigned to assist in the security administration function. Under normal circumstances, backup security staff should report to the security administrator and have no programming duties. Because these individuals had both system and security administrator privileges, they had the ability to eliminate any evidence of their activity in the system. At the time of our review, Austin’s security software administrator also reported to the application programming division director. The security software administrator, therefore, had application programming responsibility, which is not compatible with the duties associated with system security. 
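Checks for incompatible duty assignments like those described above can be run automatically against an access-control database. A minimal sketch follows; the privilege names, conflict pairs, and user IDs are invented for illustration:

```python
# Hypothetical privilege assignments per user.
ASSIGNMENTS = {
    "prog1":  {"system_programming", "security_administration"},
    "prog2":  {"application_programming"},
    "sec1":   {"security_administration"},
    "clerk1": {"invoice_preparation", "payment_approval"},
}

# Pairs of duties that a single user should not hold together.
INCOMPATIBLE_PAIRS = [
    ("system_programming", "security_administration"),
    ("application_programming", "security_administration"),
    ("invoice_preparation", "payment_approval"),
]

def duty_conflicts(assignments, incompatible_pairs):
    """Return (user, duty_a, duty_b) for every incompatible duty
    pair held by the same user."""
    conflicts = []
    for user, duties in sorted(assignments.items()):
        for a, b in incompatible_pairs:
            if a in duties and b in duties:
                conflicts.append((user, a, b))
    return conflicts

conflicts = duty_conflicts(ASSIGNMENTS, INCOMPATIBLE_PAIRS)
```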
The director of the Austin Automation Center told us that actions had been taken to address the reported weaknesses. These actions included removing the master security administration user ID and password from system programmers and establishing a new security group to consolidate security software administration. During a follow-up discussion, the director also said that an emergency ID had been established to provide system programmers with additional access when required. This approach should not only improve access controls but also provide a means to determine if system programmer access authorities need to be expanded. We also found instances where access controls did not enforce segregation of duties principles. For example, we found nine users in the information resource management group at the Albuquerque Medical Center who had both unrestricted user access to all financial data and electronic signature key authority. These privileges would allow the users to prepare invoices and then approve them for payment without creating an audit trail. A standard computer control practice is to ensure that only authorized and fully tested operating system software is placed in operation. To ensure that changes to the operating system software are needed, work as intended, and do not result in the loss of data and program integrity, these changes should be documented, authorized, tested, independently reviewed, and implemented by a third party. We found weaknesses in operating system software change control at the Austin Automation Center. Although the Austin Automation Center security policy required operating system software changes to be approved and reviewed, the center had not established detailed written procedures or formal guidance for modifying operating system software. There were no formal guidelines for approving and testing operating system software changes. In addition, there were no detailed procedures for implementing these changes. 
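Formal change procedures of the kind called for above can include an automated completeness check on each change record before implementation. A sketch, with the record fields and names invented for illustration:

```python
# Evidence a change record should carry before an operating system
# software change is implemented (field names are hypothetical).
REQUIRED_EVIDENCE = [
    "authorized_by", "test_results", "independent_review", "implemented_by",
]

def missing_evidence(change_record):
    """Return the required evidence fields that are absent or empty."""
    return [f for f in REQUIRED_EVIDENCE if not change_record.get(f)]

def ready_to_implement(change_record):
    """A change is ready only if all evidence is present and the
    implementer is a party independent of the change's author."""
    if missing_evidence(change_record):
        return False
    return change_record["implemented_by"] != change_record.get("author")

record = {
    "author": "sysprog1",
    "authorized_by": "div_chief",
    "test_results": "passed 1998-01-15",
    "independent_review": "",            # review evidence missing
    "implemented_by": "control_group",
}
```

A gate like this does not replace supervisory review, but it makes it impossible for a change lacking documented approval, testing, or independent implementation to slip through unnoticed.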
During fiscal year 1997, the Austin Automation Center made more than 100 system software changes. However, none of these changes included evidence of testing, independent review, or acceptance. In addition, the Austin Automation Center did not provide any evidence of review by technical management. Furthermore, operating system software changes were not implemented by an independent control group. The director of the Austin Automation Center told us that his staff planned to document and implement operating system software change control procedures that require independent supervisory review and approval. In addition, the director said that management approval will be required for each phase of the software change process. An organization must take steps to ensure that it is adequately prepared to cope with a loss of operational capability due to earthquakes, fires, accidents, sabotage, or any other disruption. An essential element in preparing for such catastrophes is an up-to-date, detailed, and fully tested disaster recovery plan. Such a plan is critical for helping to ensure that information systems can promptly restore operations and data, such as payroll processing and related records, in the event of disaster. The disaster recovery plan for the Austin Automation Center consisted of 17 individual plans covering various segments of the organization. However, there was no overall document that integrated the 17 individual plans and set forth the roles and responsibilities of each disaster recovery team, defined the reporting lines between each team, and identified who had overall responsibility for the coordination of all 17 teams. We also found that although the Austin Automation Center had tested its disaster recovery plan, it had only performed limited testing of network communications. This testing included the Austin Finance Center, but did not involve other types of users, such as VHA medical centers or VBA regional offices. 
In addition, the Austin Automation Center had not conducted unannounced tests of its disaster recovery plan, a scenario more likely to be encountered in the event of an actual disaster. Finally, a copy of the disaster recovery plan was not maintained at the off-site storage facility. In the event of a disaster, it is a good practice to keep at least one current copy of the disaster recovery plan at this location to ensure that it is not destroyed by the same events that made the primary data processing facility unavailable. The director of the Austin Automation Center told us that he was in the process of correcting each of the deficiencies we identified. Actions he identified included (1) expanding network communication testing to include an outpatient clinic and a regional office, (2) conducting unannounced tests of the disaster recovery plan, (3) incorporating the 17 individual recovery plans into an executive plan, and (4) maintaining a copy of the disaster recovery plan at the off-site storage facility. We found deficiencies in the disaster recovery planning at the Dallas and Albuquerque medical centers as well. At both locations (1) tests of the disaster recovery plans had not been conducted, (2) copies of the plans were not maintained off-site, (3) backup files for programs, data, and software were not stored off-site, and (4) periodic reviews of the disaster recovery plans were not required to keep them current. The director of the Dallas Medical Center told us that he intends to review the disaster recovery plan semiannually, develop procedures to test the plan, and identify an off-site storage facility for both the disaster recovery plan and backup files. The general computer control weaknesses that we identified are similar to computer security problems that have been previously identified in evaluations conducted by VA’s OIG and in contractor studies. 
For example, in a July 1996 report evaluating computer security at the Austin Automation Center, the OIG stated that the center’s security function was fragmented, user IDs for terminated employees were still active and being used, monitoring of access activities was not being performed routinely, over 600 individuals were authorized access to the computer room, and telecommunication connections were not fully tested during disaster recovery plan testing. Similar findings were also identified by contractors hired by the Austin Automation Center to review the effectiveness of certain aspects of its general computer controls. Specifically, Austin brought in outside contractors to evaluate security software implementation in November 1995 and network security in April 1997. The security software review determined that key operating system libraries, security software files, and sensitive programs were not adequately restricted, that more than 90 IDs did not require passwords, and that access activity was not consistently monitored. In addition, the network security review found that the center had not established a comprehensive system security policy that included network security. The OIG also reported comparable access control and security management problems at the Hines Benefits Delivery Center in May 1997. For example, the OIG determined that access to sensitive data and programs had not been appropriately restricted and that system access activity was not reviewed regularly to identify unauthorized access attempts. The OIG also found that security efforts at the Hines Benefits Delivery Center needed to be more focused to meet the demands of the center. In addition, the OIG identified general computer control weaknesses at seven VA medical centers as part of a review of the IFCAP system conducted from January 1994 to November 1995. Problems identified at a majority of these medical centers were reported in March 1997. 
These issues included problems with restricting access to the production environment, monitoring access activity, managing user IDs and passwords, testing disaster recovery plans, and reviewing user access privileges periodically. Furthermore, the OIG included information system security controls as a material weakness in its report on VA’s consolidated financial statements for fiscal year 1997. The OIG concluded that VA assets and financial data were vulnerable to error or fraud because of significant weaknesses in computer controls. Although the Federal Managers’ Financial Integrity Act (FMFIA) of 1982 requires agencies to establish controls that reasonably ensure that assets are safeguarded against waste, loss, or unauthorized use, these information system integrity weaknesses were not included in the department’s FMFIA report as a material internal control weakness in fiscal year 1997. A key reason for VA’s general computer control problems was that the department did not have a comprehensive computer security planning and management program in place to ensure that effective controls were established and maintained and that computer security received adequate attention. To assist agencies in developing more comprehensive and effective information security programs, we studied the security management practices of eight nonfederal organizations with reputations as having superior information security programs. We found that these organizations successfully managed their information security risks through an ongoing cycle of risk management activities. As shown in figure 1, each of these activities is linked in a cycle to help ensure that business risks are continually monitored, policies and procedures are regularly updated, and controls are in effect. The risk management cycle begins with an assessment of risks and a determination of needs. This assessment includes selecting cost-effective policies and related controls. 
Once policies and controls are selected, they must be implemented. Next, the policies and controls, as well as the risks that prompted their adoption, must be communicated to those responsible for complying with them. Finally, and perhaps most important, there must be procedures for evaluating the effectiveness of policies and related controls and reporting the resulting conclusions to those who can take appropriate corrective action. In addition, our study found that a strong central security management focal point can help ensure that the major elements of the risk management cycle are carried out and can serve as a communications link among organizational units. In contrast, VA had not instituted a framework for assessing and managing risks or monitoring the effectiveness of general computer controls. Specifically, VA’s computer security efforts lacked clearly delineated security roles and responsibilities; regular, periodic assessments of risk; security policies and procedures that addressed all aspects of VA’s interconnected environment; an ongoing security monitoring program to identify and investigate unauthorized, unusual, or suspicious access activity; and a process to measure, test, and report on the continued effectiveness of computer system, network, and process controls. The first key problem at the locations we reviewed was that security roles and responsibilities were not clearly assigned and security management was not given adequate attention. For example, the computer security administration function at the Austin Automation Center was fragmented between computer security administration staff and other computer security components. Specifically, computer security administration staff reported to the application programming division while other computer security staff reported to a staff function within the center’s management directorate.
Furthermore, the computer security administration staff was responsible for application programming in addition to supporting security administration. The director of the Austin Automation Center told us that a new security group would be formed to consolidate staff performing the security software administration and physical security functions into one group. As part of this effort, roles and responsibilities for security administration were to be explicitly assigned. The roles and responsibilities for managing computer security at the other facilities we reviewed were also weak. For instance, computer security administration at the Philadelphia Benefits Delivery Center was limited to adding and removing users from the system, while at the Hines Benefits Delivery Center the responsibility for day-to-day security monitoring and reviewing the overall effectiveness of the security program was unclear. And at both the Dallas and Albuquerque medical centers, security administration was assigned only as a collateral responsibility. The security administrators at these medical centers reported spending less than a fifth of their time on security-related matters, which was not sufficient to actively manage and monitor access to critical medical and financial systems. A second key aspect of computer security planning and management is periodically assessing risk. Regular risk assessments assist management in making decisions on necessary controls by helping to ensure that security resources are effectively distributed to minimize potential loss. These assessments also increase the awareness of risks and, thus, generate support for adopted policies and controls, which helps ensure that the policies and controls operate as intended. VA’s policy requires that risk assessments be performed every 3 years or when significant changes are made to a facility or its computer systems. 
However, none of the three facilities where risk assessments were reviewed—Albuquerque, Dallas, and Austin—had completed risk assessments on a periodic basis or updated these assessments when significant changes occurred. For example, there was no indication that a risk assessment had ever been performed at the Albuquerque Medical Center. The Dallas Medical Center risk assessment had not been updated since 1994, even though its processing environment had changed significantly since then. The Dallas Medical Center has upgraded its computer hardware and added network capabilities since 1994. Furthermore, the Austin Automation Center did not conduct a risk assessment from 1991 through 1996, even though the center implemented a new financial management computer system during this period. The director of the Austin Automation Center told us that his staff planned to begin assessing risk on a regular basis. A third key element of effective security planning and management is having established policies and procedures governing a complete computer security program. Such policies and procedures should integrate all security aspects of an organization’s interconnected environment, including local area network, wide area network, and mainframe security. The integration of network and mainframe security is particularly important as computer systems become more and more interconnected. VA’s CIO, through the Deputy Assistant Secretary for Information Resources Management (DAS/IRM), is responsible for developing departmentwide security policies and periodically reviewing organizational compliance with the security policies. On January 30, 1997, DAS/IRM issued an updated security policy. However, this policy is still evolving and does not yet adequately establish a framework for developing and implementing effective security techniques or monitoring the effectiveness of these techniques within VA’s interconnected environment. 
For example, the updated security policy addressed local area networks but did not provide guidance for other computer platforms, such as mainframe computer security. A fourth key area of an overall computer security management program is an ongoing security monitoring program that helps to ensure that facilities are monitoring both successful and unsuccessful access activities. As noted above, VA did not have overall guidance on monitoring and evaluating access activities at VA processing facilities. Security administration staff at the VA facilities we visited were not actively monitoring successful or unsuccessful attempts to access sensitive computer system files. In addition, although VA has procedures for reporting computer security incidents, these procedures will not be effective until each facility establishes a mechanism for identifying computer security incidents. A fifth key element of effective security planning and management is a process for periodically monitoring, measuring, testing, and reporting on the continued effectiveness of computer system, network, and process controls. This type of security oversight is an essential aspect of an overall security planning and management framework because it helps the organization take responsibility for its own security program and can help identify and correct problems before they become major concerns. Although VA had taken some measures to evaluate controls periodically, the department had not established a coordinated program that provided for ongoing local oversight and periodic external evaluations. In addition, VA had not provided technical standards for implementing security software, maintaining operating system integrity, or controlling sensitive utilities. Such standards would not only help ensure that appropriate computer controls were established consistently throughout the department, but also facilitate periodic reviews of these controls. 
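To make the access-monitoring idea above concrete, the sketch below flags user IDs with repeated failed access attempts in a security log. This is a minimal illustration only: the log format, the sample IDs, and the failure threshold are hypothetical assumptions, not VA's actual systems, data, or policy.

```python
from collections import Counter

# Hypothetical access-log records; a real program would parse the output
# of the facility's security software rather than an in-memory list.
def flag_suspicious(records, fail_threshold=3):
    """Return user IDs whose failed access attempts meet the threshold.

    records: iterable of (user_id, succeeded) tuples.
    """
    failures = Counter(user for user, ok in records if not ok)
    return sorted(user for user, n in failures.items() if n >= fail_threshold)

# Sample log: one ID with repeated failures, one with a single failure.
log = [
    ("VBA017", False), ("VBA017", False), ("VBA017", False),
    ("VHA042", True), ("VHA042", False),
]
print(flag_suspicious(log))  # ['VBA017']
```

A routine like this only identifies candidates for investigation; the report's point is that someone must be assigned to run such reviews regularly and follow up on what they find.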
The Austin Automation Center was the only facility we visited that had attempted to evaluate the effectiveness of its computer controls. For the last 3 years, the Austin Automation Center has brought in either OIG or contractor personnel to evaluate certain aspects of its computer security, including mainframe security software implementation, the network security environment, and physical access controls. In addition, the director of the Austin Automation Center told us that the center’s client-server environment and security controls would be reviewed during calendar year 1998. However, the Austin Automation Center had not established an ongoing security oversight program to ensure that controls continued to work as intended. In addition, both the DAS/IRM security group and the VHA Medical Information Security Service (MISS) had performed security reviews, but these reviews focused on compliance rather than on the effectiveness of controls. The DAS/IRM security group evaluated disaster recovery on a departmentwide basis in fiscal year 1997; MISS reviews computer security at VHA processing facilities on a 3-year rotational basis. Despite these efforts, we found control weaknesses due to noncompliance with VA policies and procedures. Furthermore, until VA establishes a program to periodically evaluate the effectiveness of controls, it will not be able to ensure that its computer systems and data are adequately protected from unauthorized access. In April 1998, DAS/IRM officials told us that VA is in the process of developing a comprehensive security plan and management program that will incorporate a risk management cycle and include requirements for monitoring access activity, reporting security incidents, and reviewing compliance with policies and procedures. The director of VHA MISS also told us in April 1998 that the VHA information security program office is addressing all of the security issues identified.
As part of this effort, MISS plans to change its on-site security review procedures and VHA plans to expand current security policies and guidance. VA’s access control problems, as well as other general computer control weaknesses, are placing sensitive veteran medical and benefit information at risk of disclosure, critical financial and benefit delivery operations at risk of disruption, and assets at risk of loss. The general computer control weaknesses we identified could also adversely affect other agencies that depend on the Austin Automation Center for computer processing support. Especially disturbing is the fact that many similar weaknesses had been reported in previous years, indicating that VA’s past actions have not been effective on a departmentwide basis. Implementing more effective and lasting controls that protect sensitive veteran information and establish an effective general computer control environment requires that the department establish a comprehensive computer security planning and management program. This program should provide for periodically assessing risks, implementing effective controls for restricting access based on job requirements and proactively reviewing access activities, clearly defining security roles and responsibilities, and, perhaps most important, monitoring and evaluating the effectiveness of controls and policies to ensure that they remain effective. 
We recommend that you direct the VA CIO to work in conjunction with the VBA and VHA CIOs and the facility directors, as appropriate, to

- limit access authority to only those computer programs and data needed to perform job responsibilities and review access authority periodically to identify and correct inappropriate access;
- implement ID and password management controls across all computer platforms to maintain individual accountability and protect password confidentiality and test these controls periodically to ensure that they are operating effectively;
- develop targeted monitoring programs to routinely identify and investigate unusual or suspicious system and user access activity;
- restrict access to computer rooms based on job responsibility and periodically review this access to determine if it is still appropriate;
- separate incompatible computer responsibilities, such as system programming and security administration, and ensure that access controls enforce segregation of duties principles;
- require operating system software changes to be documented, authorized, tested, independently reviewed, and implemented by a third party; and
- establish controls to ensure that disaster recovery plans are comprehensive, current, fully tested, and maintained at the off-site storage facility.

We also recommend that you develop and implement a comprehensive departmentwide computer security planning and management program.
Included in this program should be procedures for ensuring that

- security roles and responsibilities are clearly assigned and security management is given adequate attention;
- risks are assessed periodically to ensure that controls are appropriate;
- security policies and procedures comprehensively address all aspects of VA’s interconnected environment;
- attempts (both successful and unsuccessful) to gain access to VA computer systems and the sensitive data files and critical production programs stored on these systems are identified, reported, and reviewed on a regular basis; and
- a security oversight function, including both ongoing local oversight and periodic external evaluations, is implemented to measure, test, and report on the effectiveness of controls.

In addition, we recommend that you direct the VA CIO to review and assess computer control weaknesses that have been identified throughout the department and establish a process to ensure that these weaknesses are addressed. Furthermore, we recommend that you direct the VA CIO to monitor and periodically report on the status of actions taken to improve computer security throughout the department. Finally, we recommend that you report the information system security weaknesses we identified as material internal control weaknesses in the department’s FMFIA report until these weaknesses are corrected. In commenting on a draft of this report, VA agreed with our recommendations and stated that it is taking immediate action to correct computer control weaknesses and implement oversight mechanisms to ensure that these problems do not recur. VA stated that it is also preparing a comprehensive security plan and management program that will incorporate a risk management cycle and include requirements and guidance for monitoring access activity at VA facilities.
In addition, the VA stated that its CIO is working closely with the VBA and VHA CIOs to identify computer control weaknesses previously reported in OIG reviews and other internal evaluations and develop a plan to correct these deficiencies. VA also informed us that the CIO will report periodically to the OIG on VA’s progress in correcting computer control weaknesses throughout the department. Finally, VA agreed to consider outstanding computer control weaknesses for reporting as material weaknesses in the department’s fiscal year 1998 FMFIA report when the department’s top management council meets in the first quarter of fiscal year 1999. This report contains recommendations to you. The head of a federal agency is required by 31 U.S.C. 720 to submit a written statement on actions taken on these recommendations to the Senate Committee on Governmental Affairs and the House Committee on Government Reform and Oversight not later than 60 days after the date of this report. A written statement also must be sent to the House and Senate Committees on Appropriations with the agency’s first request for appropriations made more than 60 days after the date of this report. We are sending copies of the report to the Chairmen and Ranking Minority Members of the House and Senate Committees on Veterans Affairs and to the Director of the Office of Management and Budget. Copies will also be made available to others upon request. Please contact me at (202) 512-3317 if you or your staff have any questions. Major contributors to this report are listed in appendix II. The following is GAO’s comment on the Department of Veterans Affairs’ letter dated July 16, 1998. 1. Although VA only concurred in principle with our recommendation to report the information system security weaknesses we identified as material internal control weaknesses in the department’s FMFIA report, the department’s plans for evaluating computer control weaknesses for reporting as material weaknesses appear reasonable. 
VA has committed to presenting outstanding control weaknesses to the top management council when it meets in the first quarter of fiscal year 1999 to determine material FMFIA weaknesses for fiscal year 1998. David W. Irvin, Assistant Director; Debra M. Conner, Senior Auditor; Shannon Q. Cross, Senior Evaluator; Charles M. Vrabel, Senior Auditor.
|
Pursuant to a legislative requirement, GAO provided information on weaknesses in general computer controls that support key financial management and benefit delivery operations of the Department of Veterans Affairs (VA). GAO noted that: (1) general computer control weaknesses place critical VA operations, such as financial management, health care delivery, benefit payments, life insurance services, and home mortgage loan guarantees, and the assets associated with these operations, at risk of misuse and disruption; (2) sensitive information contained in VA's systems, including financial transaction data and personal information on veteran medical records and benefit payments, is vulnerable to inadvertent or deliberate misuse, fraudulent use, improper disclosure, or destruction, possibly occurring without detection; (3) the general control weaknesses GAO identified could also diminish the reliability of the department's financial statements and other management information derived from VA's systems; (4) GAO found significant problems related to the department's control and oversight of access to its systems; (5) VA did not adequately limit the access of authorized users or effectively manage user identifications (ID) and passwords; (6) VA also had not established effective controls to prevent individuals, both internal and external, from gaining unauthorized access to VA systems; (7) VA's access control weaknesses were further compounded by ineffective procedures for overseeing and monitoring systems for unusual or suspicious access activities; (8) VA was not providing adequate physical security for its computer facilities, assigning duties in such a way as to segregate incompatible functions, controlling changes to powerful operating system software, or updating and testing disaster recovery plans to prepare its computer operations to maintain or regain critical functions in emergency situations; (9) a primary reason for VA's continuing general computer control
problems is that it does not have a comprehensive computer security planning and management program; (10) the VA facilities that GAO visited plan to address all of the specific computer control weaknesses identified; (11) the director of the Dallas Medical Center and the Veterans Benefits Administration (VBA) Chief Information Officer (CIO) also said that specific actions had been taken to correct the computer control weaknesses that GAO identified at the Dallas Medical Center and the Hines and Philadelphia benefits delivery centers; and (12) VA plans to develop a comprehensive security plan and management program.
|
School buses are used to transport students to and from home and school and extracurricular activities like field trips and athletic events. The industry defines four basic types of school buses. Types A and B are comparatively small in size, while types C and D are comparatively large in size, as shown in figure 1. In general, the capacity of a school bus increases from type A to type D buses, and type D school buses can have a capacity up to 90 students. Type C school buses are most common, representing 70 percent of school bus sales in 2014 (23,715 of 34,021 buses) according to figures reported by School Bus Fleet. School districts or private contractors can operate school buses transporting public school students. School districts have a spectrum of contracting options from which to choose school bus transportation. When a school district contracts with a private company, the contractor could manage and provide all or some aspects of student transportation, depending on the school district’s needs and preferences. In full-service or “turnkey” contracts, the contractor takes on all aspects of pupil transportation services, such as hiring and training drivers and managing school bus routes. In other contracts, the district may retain ownership of school buses but have the contractor operate the buses, or the district may only have the contractor provide particular operations, such as special needs transportation. Whether district-operated or contracted, oversight of school bus transportation occurs across all levels of government and can involve multiple agencies at each level of government. At the federal level, NHTSA sets vehicle safety standards for new motor vehicles and administers grant programs as part of its mission to reduce deaths, injuries, and economic losses resulting from motor vehicle crashes. 
NHTSA also collects and analyzes crash data for a variety of purposes, such as to determine the extent of a safety problem and steps it should take to develop countermeasures. Two data sets are used to generate national statistics: FARS is a census of fatal crashes, and the General Estimates System (GES) is a sample of fatal, injury, and property-damage crashes. In addition to these activities that apply broadly to motor vehicle safety, including school buses, NHTSA provides guidance and holds workshops specific to school bus safety. For example, in December 2016 NHTSA hosted a day-long meeting on school transportation safety that included panels on school transportation risks and school bus vehicle technology, among other topics. Also at the federal level, FMCSA’s mission is to reduce crashes and fatalities involving commercial motor vehicles. FMCSA is responsible for setting and enforcing federal safety regulations that apply to large commercial truck and bus operators. For school bus transportation, FMCSA’s safety regulations for commercial motor vehicle operations do not apply to home-to-school and school-to-home transportation. These regulations apply in very limited circumstances, such as for contractors hired by schools who provide transportation for extracurricular activities across state lines. The primary exception to this, however, is commercial driver’s licensing; school bus drivers must have a commercial driver’s license with a school bus endorsement, which requires a driver to pass additional knowledge and skills tests specific to operating a school bus, and are subject to drug and alcohol testing. FMCSA collects data on motor vehicle crashes but focuses on crashes involving large trucks and commercial buses, given its mission and jurisdiction. These crash data are collected in the Motor Carrier Management Information System.
At the state level, multiple agencies are often responsible for setting or enforcing state-specific requirements for school-bus driver qualifications and training, vehicles, inspections, and other operational aspects. Some states require school districts to provide students with transportation to and from home and school, while other states allow school districts to decide whether to provide such transportation. Finally, local school districts are responsible for implementing and supervising school bus operations. This includes managing and establishing routes and policies, operating and maintaining school buses, and training and assigning staff, sometimes in conjunction with contractors. At the federal level, both NHTSA and FMCSA collect data on motor vehicle crashes that include crashes involving school buses. Since these data cover crashes involving a range of vehicles, information that is specific to school buses, like the specific type of school bus or whether it was operated by a school district or contractor, is not included in these national data. States may have richer data on school bus crashes, and we found that a small number of states collect some school-bus-specific information in their crash data, such as the type of operator. However, state data on school bus crashes vary because states determine what specific data elements to collect in crash data. NHTSA collects basic data on motor vehicle crashes, including school-bus-related crashes, but not any in-depth data specific to school buses, such as whether a school bus was district or contractor operated or the type of school bus. NHTSA’s FARS and GES crash data both include a variable to identify school-bus-related crashes. Therefore, NHTSA uses this variable to isolate school-bus-related crashes in FARS data and generates an annual report describing the number and some characteristics of fatal school-bus-related crashes.
For example, the report describes characteristics of fatal school-bus crashes such as the time of day and whether the fatality(ies) was an occupant or non-occupant of the school bus or other involved vehicles. NHTSA can also isolate school bus crashes from GES data; however, because GES is a sample of crashes and school bus crashes are such rare events, GES data cannot be used to reliably examine year-to-year trends, according to NHTSA. NHTSA’s crash data aim to cover all types of traffic accidents, and as such, FARS and GES do not include additional variables tailored to accidents involving school buses. Moreover, the source for FARS and GES—police accident reports—varies from state to state and may not contain school-bus-specific information, such as type of bus and type of operator, for NHTSA to aggregate across states. FMCSA’s crash data for large truck and bus crashes do not include a variable to identify school-bus-related crashes. FMCSA’s crash data, which come from police accident reports, identify the vehicle involved as a bus or truck but do not further delineate the type of bus (e.g., transit bus or school bus). In addition, they do not define or collect data on the type of school bus operator. States collect crash data to help implement and evaluate highway safety policies, but the specific crash data that states collect vary. Some states have richer data on all types of school bus crashes, including fatal, injury, and property-damage-only crashes, than NHTSA and FMCSA, based on our review of federal and selected states’ crash data collection processes. However, since states have discretion in determining what specific data elements to collect, state data on school bus crashes vary. Each state has its own crash data system, and police accident reports—a key source of crash data—are unique to each state.
For example, on California’s police accident report, officers enter codes to identify the involved vehicles, and specific codes classify the type of school bus and operator (e.g., public or contractor). Other states’ police accident reports may not collect as detailed information on a school bus involved in a crash. For example, our review of police accident reports in our selected states found that one state’s police accident report had a field to indicate whether a crash involved a school bus, and another state’s report used the narrative section to note a school bus’s involvement. We surveyed states to determine whether they track the type of school bus operator in crash data, or other state data such as inspection or funding data, since information states collect on school bus crashes and operations differs. In our survey of states, about half of states that responded (22/47) reported that they track whether school buses are district operated or contracted, though least often in crash data. States most commonly reported tracking the type of operator in funding or reimbursement data (15), followed by inspection data (10), and statewide crash data (7). We asked these states why they tracked the type of operator, and states reported doing so most often for funding purposes (18), followed by compliance with state contracting laws (10), and educational or training purposes (7). For example, in its inspection data, New York state officials said they track the type of school bus operator along with several other variables that allow the state to analyze data on inspection outcomes, such as the number of buses passing inspection or being placed out of service, to see if there are different outcomes across these variables. For the 25 states that reported they do not track whether school buses are school-district or contractor operated, states nearly always indicated there was no need or requirement to track such data. 
For example, 17 states said there is no distinction made in state law or regulation on the type of operator. Three states also reported that there are no contractors operating school buses in the state because school districts choose not to use them, so there is no need to track such information. The Transportation Research Board noted in 2002 that fatalities and injuries involving students make up a relatively small proportion of all fatalities and injuries, so the benefits of additional data collection efforts that focus solely on school travel should be carefully considered before being recommended or implemented. Stakeholders we interviewed had mixed views on whether data improvements should be a priority for the federal government. For example, one stakeholder said that the federal government could create a repository for national school-bus data that would require standardized methods of data collection by the states and that the resultant data would help illuminate key areas of school bus safety, such as illegal passing of stopped school buses, that are not currently being highlighted. However, another stakeholder we interviewed, who believes national data is lacking, said examining and collecting additional crash data for other modes of transportation may be more revealing than it would be for school buses, given the safety record of school buses. School buses continue to have a strong safety record relative to other types of motor vehicles based on more recent fatal crash data, which we discuss in more detail below. According to NHTSA, only 8 percent of fatalities in crashes involving a school bus from 2005 to 2014 were school bus occupants (i.e., drivers or passengers). We also found that school bus crashes constituted less than 1 percent of all crashes in 6 of our selected states for which annual crash reports included a section on school bus crashes. 
NHTSA and FMCSA officials we interviewed said the agencies have no plans to change their data collection processes specific to school bus crashes, but both have efforts under way to improve the overall quality of crash data. For example, FMCSA is in the process of establishing a working group, as required in the FAST Act, to examine the information collected in police accident reports on commercial motor vehicles, a process that could lead to improvements in the data collected on crashes involving large trucks and buses. Also, NHTSA officials said that, if funding is available, the agency plans to analyze states’ reporting requirements for school bus crashes in fiscal year 2017. This analysis would identify sources of crash data and whether these sources provide reliable information that could be used to determine causative factors and examine potential countermeasures for all reported school bus crashes. Additionally, states play a primary role in overseeing school bus safety, as described later in this report, and states have their own mechanisms to use state crash data to identify highway safety issues in their state and to use federal grant programs to address them. NHTSA and FMCSA have grant programs whereby each state identifies its priorities for highway and motor carrier safety, respectively. For example, for NHTSA’s Highway Safety Grant Program, each state must develop a Highway Safety Plan based on an evaluation of highway safety data, including crash data, to identify safety problems within the state. Therefore, if a state identifies a need for initiatives to improve school bus safety and has jurisdiction, the state could include it as a priority in its grant application and target federal and state spending for related initiatives. NHTSA and FMCSA said that, at present, no states have identified school bus safety as a priority area in applications for the State and Community Highway Safety Grant Program or Motor Carrier Safety Assistance Program. 
While national data on school bus crashes are limited, from 1999 to 2010 UMTRI collected BIFA data, with support from FMCSA; however, these data are no longer collected. BIFA data supplemented FARS data with detailed information collected through interviews and from police accident reports about the physical configuration and operating authority of each bus involved in a fatal crash, including the type of operator. For this report, we analyzed BIFA data for 2000 to 2010 to describe characteristics of fatal crashes involving school buses during that time period. However, for variables included in BIFA data that originated in FARS data, we also analyzed data from FARS for fatal crashes involving school buses for 2011 to 2014 to provide more recent information, as BIFA data were collected only through 2010. Since this analysis examined data on fatal crashes involving school buses, it is not generalizable to all types of crashes involving school buses. Further, we did not have exposure data (e.g., vehicle miles traveled by school buses of different types or used for different types of trips) to allow us to report rates of crashes for the characteristics we examined. We found that from 2000 to 2010, an average of 118 fatal crashes involving a school bus occurred each year. The total number per year ranged from 93 (2009) to 128 (2008). When we extended our analysis to include 2011 to 2014, the average fell slightly to 115 fatal crashes each year, which is 0.3 percent of the 34,835 fatal motor-vehicle crashes that occurred on average each year during this time. Most fatal crashes involved local travel and occurred during times that would indicate the buses were traveling to and from school, according to our analysis. For 2000 to 2010, most fatal crashes (89 percent) were considered local, meaning the total trip distance was less than 50 miles. Seventy-four percent of fatal crashes from 2000 to 2010 occurred during home-to-school and school-to-home travel times. 
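The 0.3 percent share reported above can be checked with simple arithmetic; a minimal sketch, using only the average annual counts stated in this report (not the underlying FARS records):

```python
# Average annual fatal crashes involving a school bus, 2000-2014 (reported above)
school_bus_fatal_avg = 115
# Average annual fatal motor-vehicle crashes of all types over the same period
all_fatal_avg = 34_835

share = school_bus_fatal_avg / all_fatal_avg * 100
print(f"{share:.1f} percent")  # prints "0.3 percent", matching the report
```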
From 2011 to 2014, this percentage fell to 65 percent of fatal crashes, with the remainder of fatal crashes occurring during other times (see fig. 2). Our analysis of BIFA and FARS data also examined driver and vehicle factors that may have contributed to the fatal school bus crashes and found such factors were not prevalent. Driver-related factors: The data on fatal school bus crashes from 2000 to 2014 identified a driver-related factor involving the school bus driver for 27 percent of these crashes (see fig. 3). The most common type of driver-related factor was miscellaneous (e.g., leaving the vehicle unattended with the engine running, failing to keep in the proper lane), at 68 percent of all driver-related factors. The next most common category, identified for 12 percent of driver-related factors, was physical or mental condition (e.g., careless driving, reaction to or failure to take drugs or medications). In 8 percent of fatal crashes from both 2000 to 2010 and 2011 to 2014, the school bus driver was charged with a violation. The most common type of violation fell under either the “rules of the road” turning, yielding, and signaling category (e.g., failure to signal for a turn or stop) or the reckless/careless/hit-and-run category (e.g., inattentive, careless, improper driving), representing 36 and 32 percent of all violations, respectively. Vehicle-related factors: Vehicle-related factors involving the school bus were rarely cited in fatal crashes involving school buses. Of the 1,731 total fatal crashes from 2000 to 2014, only 5 crashes had an identified vehicle factor for the school bus—3 for brakes, 1 for tires and wheels, and 1 for other components. Examining other vehicle and crash characteristics, we found that about 80 percent of fatal crashes involved large school buses (type C or D)—which account for most bus sales, according to recent sales data—and 6 percent involved small school buses (type A or B), with the remainder unknown or of another body type. 
Seven percent of buses involved in fatal crashes during this time were classified as special needs school buses. In addition, the average age of the school buses in fatal crashes from 2000 to 2010 was 7 years; the average age for school buses in fatal crashes rose slightly to 8 years for 2011 to 2014. Most of these crashes occurred on dry roads (81 percent) and in clear weather conditions (85 percent) for both the 2000 to 2010 and 2011 to 2014 time periods. We found that school districts operated 67 percent of school buses involved in fatal crashes from 2000 to 2010, and contractors working for school districts operated 25 percent of school buses involved in these fatal crashes, which is roughly proportional to the operations conducted by districts and contractors. We found no definitive national data on the number of each type of operator or the miles driven by each type of operator, so we cannot directly compare the rates of fatal crashes for each type of operator. However, the percentage of fatal crashes involving buses operated by school districts and contractors roughly aligns with industry association estimates of operations conducted by each type of operator. One association estimates that contractors provide one-third of pupil transportation services in the United States. An official from another association estimated the extent of contracting in two ways: first, by number of buses (contractors operate about one-third of school buses); second, by the number of operations (contractors conduct about one-fourth of school bus operations). We also examined the share of fatal school bus crashes with driver- or vehicle-related factors, by type of operator, and did not find any major deviations from the overall percentage of fatal crashes involving school-district- and contractor-operated school buses. Federal laws and regulations establish minimum requirements for school bus safety. 
Building on federal requirements, states establish more comprehensive safety requirements for school bus vehicles and operations. We found that all 50 states require school bus inspections, and most states also require driver training. However, fewer states require a specific maximum vehicle age or seating capacity for school buses. While state requirements build on federal laws and regulations, the specific requirements states set for school bus safety vary. The same two federal agencies that collect crash data set minimum safety regulations for school bus vehicles and operations. NHTSA sets Federal Motor Vehicle Safety Standards (vehicle safety standards) that create a baseline for school bus standards. Forty-eight out of 62 vehicle safety standards apply to new school buses, according to NHTSA. For example, Federal Motor Vehicle Safety Standard No. 217 establishes standards for emergency exits and window retention and release, and Federal Motor Vehicle Safety Standard No. 221 specifies requirements for the strength of the body panel joints in the bodies of school buses. NHTSA has reported that new school buses have to meet more vehicle safety standards than any other type of new motor vehicle. All manufacturers of new motor vehicles and equipment must certify compliance with vehicle safety standards; therefore, school buses operated by school districts and contractors all must meet these federal standards. FMCSA is responsible for setting and enforcing Federal Motor Carrier Safety Regulations that apply to large commercial truck and bus operations. However, FMCSA’s safety oversight of school bus operations is limited because most school bus transportation is exempt from its safety regulations. In particular, all school bus transportation to and from home and school is exempt. 
Beyond home-to-school and school-to-home transportation, the type of operator—whether it is a private contractor or a school district or other governmental entity—and the type of trip—including whether the trip will cross state lines—determine whether all Federal Motor Carrier Safety Regulations apply. For example, contractors hired to provide interstate transportation for extracurricular activities, such as field trips or sporting competitions, are required to comply with other Federal Motor Carrier Safety Regulations such as limits on driving and on-duty time. School district employees are exempt from these requirements. However, even with these exemptions, federal regulations for commercial drivers’ licenses and drug and alcohol testing for commercial driver’s license holders apply to all school bus drivers and operators. Figure 4 provides examples of federal regulations for school bus safety. Within federal laws and regulations for school bus operations and vehicles, we specifically examined what federal requirements exist for school bus inspections, driver training, and maximum vehicle age and seating capacity. While many federal requirements, like vehicle standards for school buses, apply to both school districts and contractors, some federal requirements apply to only certain types of school transportation. FMCSA’s safety regulations require inspections of commercial motor vehicles. However, most school bus operations are exempt from this requirement, as noted above. Federal Motor Carrier Safety Regulations require other types of commercial operators to systematically inspect, repair, and maintain vehicles under their control, requirements that include inspecting service brakes, the steering mechanism, lighting, and tires, among other components. For inspections, commercial operators must conduct periodic (at least annual) vehicle inspections, which could be conducted in-house, at a commercial business, or through a state-run inspection program. 
Therefore, a contractor’s school-bus operations may be subject to this federal inspection requirement if, for instance, the contractor is hired by the school district to transport students across state lines for school-sponsored extracurricular activities; a school district’s school-bus operations would not be subject to the federal inspection requirement if the district provides the transportation for this type of trip. Representatives of contractors we spoke with stated that in practice, most contractors usually comply with Federal Motor Carrier Safety Regulations, even when they are not using the school buses for interstate activities, as contractors want the flexibility and maximum ability to operate buses under different circumstances, such as chartered services on the weekend. NHTSA does not have an oversight role in school bus operations but recommends that states establish procedures for regularly scheduled inspections of school buses in accordance with FMCSA’s requirements, as described above. FMCSA recently established minimum training regulations for entry-level school bus drivers. In December 2016, FMCSA issued a final rule requiring all drivers—employed by school districts and contractors—to complete entry-level driver training when applying for a commercial driver’s license, including those seeking a school bus endorsement. As part of this final rule, FMCSA established a training curriculum to address the specific training needs of school bus drivers. Training providers are required to cover all topics in the curriculum, including loading and unloading, railroad-highway grade crossings, and emergency exit and evacuation, but FMCSA set no minimum hours for the knowledge and behind-the-wheel training for the school bus endorsement. Additionally, NHTSA developed a series of refresher (i.e., in-service) training modules for school bus drivers in 2011. 
NHTSA officials told us they developed this refresher training because school bus stakeholders often asked NHTSA for guidance and assistance on training experienced school-bus drivers. Stakeholders we interviewed, including selected state officials, told us that they widely use NHTSA’s refresher training materials for school bus drivers. As previously noted, 48 federal vehicle safety standards apply to school buses. However, federal vehicle safety standards do not stipulate a maximum vehicle age or maximum seating capacity for school buses because, according to NHTSA, it does not have regulatory authority regarding how school buses are used. Nevertheless, NHTSA has made recommendations and issued guidance related to both of these items. In its pupil transportation guideline, issued in March 2009, NHTSA recommended replacing school buses manufactured before April 1, 1977, with school buses that meet current vehicle safety standards for buses and recommended prohibiting schools from purchasing school buses built prior to April 1, 1977, for school transportation. For capacity, NHTSA has reported in information posted on its website that school bus manufacturers determine the maximum number of persons who can sit on a school bus seat, which is based on seating three small elementary-school-age children or two high-school-age persons on a typical 39-inch school-bus seat. In this same posting, NHTSA also reported that states and school bus operators are responsible for determining the number of persons who can safely fit into a school bus seat, and NHTSA recommended that all passengers be seated entirely within the confines of the school bus seats while the bus is in operation. NHTSA sets vehicle safety standards, and FMCSA does not have a role in setting vehicle standards for school buses. 
States build upon federal laws and regulations and usually set additional, state-specific requirements for school bus safety that generally apply to both school districts and contractors, according to stakeholders we spoke with. We found that multiple state agencies often play a role overseeing school bus vehicles and drivers. For example, in Illinois, the State Board of Education and Secretary of State oversee school bus driver training, and the Department of Transportation oversees school bus inspections, while in Pennsylvania the Department of Transportation oversees school bus driver training and the State Police oversees school bus inspections. In addition, state laws and regulations vary widely across states. For example, three school bus manufacturers we spoke with told us that no two states have the same vehicle standards for school buses, with varying requirements for eight-way flashing signal lights, content and location of first aid kits, and location of switch panels, among other things. Figure 5 describes examples of state requirements for school bus transportation. Upon examination of state laws and regulations, we found that states set requirements for inspections, driver training, and vehicle standards that supplement the baseline federal requirements. States’ school-bus safety requirements vary widely across states but tend not to differ based on the type of operator, according to all eight selected state officials we spoke with, as described below. Four other stakeholders we interviewed affirmed that there are no differences in state requirements for school bus transportation for different types of operators. However, for state requirements for commercial motor vehicles, which can apply to but are not specific to school buses, six stakeholders we interviewed, including manufacturers and contractors, said there are some differences in requirements for contractors and school districts. 
For example, two stakeholders commented that states vary in the extent to which they exempt school bus operations from state requirements for commercial motor vehicles, requirements that are not school-bus-specific but apply to a wider range of vehicles and that are similar to Federal Motor Carrier Safety Regulations. See appendix II for additional descriptions of state requirements for school bus inspection, driver training, and maximum vehicle age and seating capacity in the eight selected states. Based on our review of laws and regulations in the 50 states, we found that all 50 states require school bus inspections to check for defects and safety compliance with state rules at the state or local level. We also found that the frequency of these inspections and the agencies conducting or overseeing them vary across states. For 41 states, we found that the state required periodic inspections of school buses to be conducted by state inspectors or third-party inspectors. For example, California requires its state highway patrol to inspect school buses at least once every year, while the Illinois state transportation agency requires certified, private inspection stations to inspect school buses at least twice a year. In the other nine states, we found the state requires local school districts to conduct inspections and/or authorizes the state to conduct spot-check inspections of school buses without any set frequency. For example, Nebraska requires local school districts to conduct an inspection of each school bus before the start of the school year and then every 80 days during the school year. According to a Nebraska state official, the state discontinued its state school bus inspection program due to resource constraints and delegated responsibility for inspections to local school districts. State officials in all eight selected states told us that all school bus operators, including contractors, are subject to the state’s school-bus inspection requirements. 
States may also require additional inspections to supplement the periodic inspections, including conducting random or unannounced inspections. Officials from four of the eight states we interviewed—Illinois, Washington, Tennessee, and Pennsylvania—stated that they complement annual or biannual inspections with unannounced or random school-bus inspections. For example, a Tennessee state official told us that the state conducts random inspections of school buses for at least 10 percent of the statewide school-bus fleet annually to ensure that all operators maintain their buses safely and appropriately. States may also require even more frequent inspections, sometimes on a daily basis. For example, California requires all school bus operators to inspect their school buses regularly—every 45 days or 3,000 miles, whichever occurs first—as part of a preventive maintenance program. To provide context for how states implement these requirements and for the results of inspections, we asked the selected states about the data they collect on inspection outcomes. Just as inspection frequency varies, the selected states vary in how they collect and maintain inspection data and in the extent to which results are accessible to the public. Officials from selected states told us there are different methods of collecting and compiling inspection results. For example, a Tennessee state official told us that the state uses electronic devices (e.g., tablets, laptops) to collect data during inspections and maintain results in a central database. Illinois state officials told us that private, certified inspection stations can use an electronic or paper form to document inspection results, and all completed forms are maintained by the state. Given these differences, states vary in their ability to easily search and summarize inspection results for school buses in the state. 
We found school bus inspection results are generally accessible to the public, but how the public can access results varies. For instance, Washington posts the number of school buses inspected and the number and percentage of buses placed out of service by school district each school year on its state agency website, while Pennsylvania and Tennessee state officials told us that school bus inspection results are accessible only through a formal request. Additionally, in our review of selected states’ school-bus inspection results, we found that a relatively small number of school buses were placed out of service after an inspection because they were determined to be unsafe to operate without repairs. Specifically, 3 to 5 percent of inspected school buses in a given year were put out of service for violations, based on data from four of our selected states, as shown in table 1 below. By contrast, the out-of-service rate for all types of buses nationwide is about 7 percent, according to FMCSA. Problems that could put a school bus out of service in one state we interviewed include any leaks in the exhaust system, an exterior brake or stop-arm light that does not work, or a bus alarm not sounding when the emergency door is opened. In three states, the most common problems identified during inspection involved brakes, lights or signals, and exhaust systems. Officials we spoke with in the six selected states that had state inspection programs stated that out-of-service school buses cannot be operated until the identified problem has been fixed and the bus passes another inspection. We found that a majority of states require training for all school bus drivers. Specifically, we found that 44 states require entry-level (i.e., pre-service) training and 44 states require refresher training for all school bus drivers. However, as with inspection requirements, we found that the frequency, length, and other attributes of the required training vary across states. 
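The out-of-service rates discussed above with table 1 are simple proportions of inspected buses; a minimal sketch, where the bus and violation counts are hypothetical illustrations, not figures from any state’s data:

```python
# Hypothetical inspection tallies for one state in one school year
buses_inspected = 2_000
buses_out_of_service = 80  # failed for violations serious enough to bar operation

rate = buses_out_of_service / buses_inspected * 100
# prints "4.0 percent placed out of service", within the 3-5 percent range reported
print(f"{rate:.1f} percent placed out of service")
```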
For example, Pennsylvania requires a minimum of 20 hours of school-bus-specific training for all entry-level drivers and a minimum of 10 hours of refresher training for drivers every 4 years. Tennessee requires all school bus drivers to receive at least 4 hours of annual refresher training on various topics, including operational safety of school buses, loading and unloading of students, and managing student behavior, but the state does not require entry-level school-bus driver training, according to a Tennessee state official. While the training requirements vary across states, officials from all eight of our selected states stated that all school bus drivers must meet state training requirements, whether they are employed by a school district or contractor. States administer school bus driver training in different ways, and additional training requirements may exist at the local level. For example, in Virginia and New York, the state departments of education oversee school bus driver training programs and train and certify instructors, who can be school district or contractor employees, to provide training to drivers. In Nebraska, the state department of education contracts with the Nebraska Safety Center at the University of Nebraska to develop training curriculum and train instructors to provide training. Beyond state requirements, local school districts and contractors may have additional training programs and requirements for school bus drivers. Contractors we spoke with told us that they also require entry-level and refresher training for their drivers that meets or exceeds state requirements. State officials in California, Pennsylvania, and Tennessee also told us that local school districts may require additional, supplemental training for drivers. For example, a state official told us that one large school district requires drivers to complete a minimum of 40 hours of gang awareness training. 
Additionally, all eight selected states require school bus drivers to receive training on transporting students with special needs. Drivers in these states typically receive training on transporting special needs students as part of the training curriculum for entry-level or refresher training for school bus drivers. For example, in New York, under state law, entry-level school bus drivers are required to take a minimum of two hours of instruction related to transporting special needs students during the first year of employment, and all school bus drivers are required to take one hour of annual training related to transporting special needs students. State officials in a few of our selected states said additional training on special needs transportation is provided to drivers at the local level. A Washington state official told us that the state trains all instructors on special needs transportation topics so the instructors can in turn provide more targeted training to drivers, such as how to secure wheelchairs on a particular bus model. In our search of state laws and regulations, we found six states that set a requirement for the maximum vehicle age for when a school bus must be replaced or no longer used. The requirements in these six states varied. For example, Tennessee sets a maximum age for school buses that applies to school districts and contractors; specifically, type A and B buses can be used for up to 15 years, and type C and D buses can be used for up to 18 years with unlimited miles, or up to 19 years for buses with less than 200,000 miles that have passed inspections twice a year. According to a state official, Tennessee has a maximum vehicle age requirement because older school buses may not be cost-effective to maintain, as older vehicles have more mechanical and maintenance issues. 
In other states, all types of school buses were subject to the same maximum age, such as stating that school buses used to transport students cannot be more than 20 or 25 years old. In addition to these six states with specific requirements, we also found instances where states provide funding or set a school bus depreciation schedule to replace school buses. Although these programs do not necessarily prohibit school bus operators from operating school buses that exceed the parameters of a state’s funding program, they encourage school districts to regularly replace school buses. For example, according to a state official, Washington provides replacement funding for school buses to school districts and contractors, and the state established a useful life cycle for each type of school bus, but the state does not require school districts and contractors to retire or stop using a bus at the end of the established life cycle. Washington sets an 8-year life cycle for type A buses and a 13-year life cycle for type C and D buses owned by the school district. In Virginia, the state has a 15-year life cycle for all school bus types, but according to state officials, a school bus older than 15 years can continue to be used as long as it passes inspections. While states do not typically set maximum school bus age requirements, local school bus operators usually make decisions on when to replace a school bus, according to stakeholders we interviewed. In particular, according to seven stakeholders we interviewed—3 manufacturers, 3 state agencies that conduct inspections, and 1 contractor—local operators make these decisions based on a business case that includes factors such as maintenance costs and environmental conditions. 
Representatives from two school bus manufacturers we interviewed told us that most states do not have a maximum vehicle age requirement and that many school districts will continue to use buses as long as they pass inspections and maintenance costs are not too high. State officials from Washington and Virginia said school bus operators need to maintain any school buses that are used for longer than the state-established life cycle and that these buses must pass the state inspection. With regard to school bus seating capacity, we found eight states that set a specific maximum seating capacity for school buses, or a parameter that would yield a specific maximum capacity. For example, New York has a maximum seating capacity of 84 student passengers in type C and D school buses. We also found that about half of the states (23) had other types of seating capacity requirements, such as explicitly restricting school buses from transporting more student passengers than the manufacturer’s rated seating capacity. For example, Illinois does not allow a school bus to be operated with more passengers than recommended by the manufacturer’s rated seating capacity. Stakeholders from the school bus industry most commonly cited the National Congress on School Transportation (NCST) and its National School Transportation Specifications and Procedures as a source of leading practices for safe school bus transportation. Seventeen of the 30 stakeholders we interviewed, including state directors of student transportation and manufacturers, identified NCST and the voluntary document as a national standard for school bus safety. An NCST official told us that the National School Transportation Specifications and Procedures is meant to build on federal laws and regulations and is intended for states to consider when establishing their standards, specifications, regulations, and guidelines for school transportation. 
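NHTSA’s per-seat guidance noted earlier (three small elementary-age children, or two high-school-age passengers, per typical 39-inch seat) combines with a seat count to give a rated capacity; a minimal sketch, where the 28-seat bus is a hypothetical example chosen because 28 seats at three children each equals New York’s 84-passenger cap for type C and D buses:

```python
# Hypothetical large school bus with 28 bench seats
seats = 28
small_children_per_seat = 3   # NHTSA planning figure for a 39-inch seat
high_schoolers_per_seat = 2

print(seats * small_children_per_seat)   # prints 84 (small children)
print(seats * high_schoolers_per_seat)   # prints 56 (high-school-age passengers)
```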
NCST holds a congress roughly every 5 years. The primary purpose and product of the congresses is the specifications and procedures document that contains recommendations across different aspects of student transportation, including school-bus body and chassis specifications, procedures for conducting school bus inspections, and selecting and training drivers. As the congress meets regularly, NCST has discussed new safety concerns or needed guidance and amended its specifications and procedures document accordingly. For example, one stakeholder we spoke with said a relatively recent change in the document was the inclusion of criteria, based largely on federal regulations, to use in a school-bus inspection program to determine when a school bus should be placed out of service. Stakeholders also cited federal and state requirements and industry associations and experts as sources of leading practices. Eleven of 30 stakeholders we spoke with identified state laws and regulations, while 10 stakeholders identified federal laws and regulations and industry associations as sources they turn to for leading practices. Eight stakeholders also mentioned federal guidance as sources of best practices for school bus operations and inspections. For federal guidance, two stakeholders mentioned they look to NHTSA’s Highway Safety Program Guideline No. 17, Pupil Transportation Safety, which recommends strategies for a school bus safety program at the state level. For example, this guideline recommends developing a training plan for drivers and establishing a systematic preventive-maintenance program for school buses that includes periodic vehicle inspections. Our literature review identified these same sources and also provided general practices for states and local school districts and contractors to follow. For example, we found NCST’s specifications and procedures document, NHTSA’s Highway Safety Program Guideline No. 
17, and textbooks that often cited federal vehicle safety standards and FMCSA’s safety regulations. In our review of these sources, including NCST’s specifications and procedures document, we found recommended practices for maintaining school buses, including establishing an inspection program with uniform criteria for placing school buses out of service and analyzing the intended life cycle of school buses against the efficiencies associated with vehicle replacement. A few stakeholders we spoke with indicated that specific, national leading practices for certain aspects of school bus transportation may not always be appropriate, as school bus operations are driven by local or regional factors such as available funding and environmental and geographic conditions. For example, stakeholders we spoke with said that different factors, like weather and road conditions, can contribute to how long a school bus should remain in use. Three stakeholders, including a manufacturer, noted that school buses operating in adverse road and weather conditions in some states may need to be replaced more frequently due to higher maintenance costs. A 2002 National Association of State Directors of Pupil Transportation Services (NASDPTS) report noted that accurate and thorough records on operating and maintenance costs of a school bus fleet provide data needed to analyze and understand costs and said that establishing school bus replacement policies is important. As noted earlier, states and local districts largely oversee school bus safety, and as such, school bus transportation is subject to local district decisions, practices, and differences in operations. When we asked stakeholders what additional federal research and guidance would benefit the school bus industry, there was no consensus among the stakeholders. Seven of 30 stakeholders said current federal research and guidance is sufficient and did not cite a need for additional guidance.
For the two areas stakeholders mentioned most often, federal agencies have related efforts under way. Five stakeholders said data on or guidance to combat illegal passing of school buses would be useful. NASDPTS conducts an annual survey on illegal passing, whereby school bus drivers voluntarily count the number of vehicles that pass them when they stop to load and unload students. For each of the 5 years NASDPTS has collected these data, participating school bus drivers have observed more than 74,000 instances of illegal passing on a single day. In 2000, NHTSA issued a best practices guide on reducing the illegal passing of school buses. Further, NHTSA officials told us that research on the effectiveness of using cameras to enforce laws on passing school buses is currently under way with data collected at multiple locations, to be completed in early 2018. Based on the results of this research, NHTSA officials said they may update the content of the best practices guide on reducing the illegal passing of school buses. Four stakeholders said that additional federal guidance on school bus driver training on various topics, including loading and unloading students and technology distraction, would be helpful. As previously mentioned, FMCSA recently established minimum entry-level training regulations for school bus drivers applying for a commercial driver’s license, and two school bus associations—NASDPTS and the National School Transportation Association—were part of the negotiated rulemaking committee that helped develop the training regulations. Additionally, NHTSA’s 2011 refresher training for school bus drivers covers several topics, including loading and unloading students. NHTSA officials said they plan to update the content, if needed, after consulting with school-bus industry stakeholders in fiscal year 2017.
Finally, NHTSA officials and stakeholders commented that the school bus industry is a close-knit community that keeps one another informed with conferences and networks across all levels of government. Stakeholders we spoke with said that much of the school bus industry’s awareness comes from annual forums and conferences at the state and national level. For example, the annual NASDPTS conference held in November 2015 included sessions on incidents of dragging students in bus doors and FMCSA’s then proposed rule on entry-level driver training. Another stakeholder told us that they confer with state school transportation associations—state organizations of school bus drivers and transportation managers—to identify and address any school-bus safety issues in the state. In addition, NHTSA and FMCSA officials and one stakeholder told us that three of the national school bus associations meet annually with FMCSA and NHTSA to discuss various school-bus safety issues. We provided a draft of this product to the Department of Transportation for comment. The Department of Transportation provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretary of the Department of Transportation, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. 
The Fixing America’s Surface Transportation Act included a provision for GAO to conduct a review of school bus safety, including examining any differences in the safety performance of different types of school bus operators—that is, school districts and contractors—and what safety requirements apply to them. We examined: (1) data federal and state agencies collect on school bus crashes and the number and characteristics of fatal school-bus crashes that have occurred since 2000; (2) federal and state laws and regulations pertaining to school bus inspections, vehicles, and drivers, as well as state data on inspections’ outcomes; and (3) sources for leading practices for safe school-bus transportation, as identified by stakeholders and literature, as well as any areas where further federal guidance could be useful. As part of our work, we also examined whether there were differences for school-district and contractor-operated school buses in any of the above areas. Overall, we focused our review on the transportation of public K-12 students traveling to and from home and school and for extracurricular activities and not transportation of private school students. To describe what data federal and state agencies collect on school bus crashes, we reviewed agency documents that describe or use National Highway Traffic Safety Administration (NHTSA) and Federal Motor Carrier Safety Administration (FMCSA) crash datasets, including the 2014 FARS/NASS GES Coding and Validation Manual and Large Truck and Bus Crash Facts 2014. We interviewed NHTSA and FMCSA officials to understand what data each agency collects on school bus crashes and whether they track the type of operator involved in school bus crashes. We also asked about any planned or ongoing efforts to change or improve the data collected on school bus crashes. To understand crash data collected by states, we reviewed NHTSA guidance on crash data systems, primarily the Traffic Records Program Assessment Advisory. 
We also interviewed school-bus industry associations, the Association of Transportation Safety Information Professionals, and other stakeholders to identify national and state data on school bus crashes and to discuss the strengths and limitations of existing datasets. We also administered an e-mail survey to the 50 state pupil-transportation directors to gather information on school bus data. Specifically, the survey asked whether states systematically collect data on the type of school bus operator—that is, school district or contractor—in crash or other data, and the reasons why these data were or were not collected. We obtained contact information for the survey recipients from the National Association of State Directors of Pupil Transportation Services (NASDPTS) and administered the survey between June 20, 2016, and August 8, 2016. Because this was not a sample survey, there are no sampling errors. However, the practical difficulties of conducting any survey may introduce errors, commonly referred to as nonsampling errors. For example, difficulties in how a particular question is interpreted can introduce unwanted variability into the survey results. We took steps in the development of the questionnaire, the data collection, and the data analysis to minimize these nonsampling errors. For example, we pretested the survey with the pupil transportation directors in three states and NASDPTS to ensure that questions were clear and unbiased and to minimize the burden the survey placed on respondents. Based on feedback from the pretests, we made minor changes to the content and format of survey questions. We received completed surveys from 47 respondents for an overall response rate of 94 percent. To describe the number and characteristics of fatal school-bus crashes since 2000, we analyzed data from two data sets. 
First, we analyzed Buses Involved in Fatal Accidents (BIFA) data from the University of Michigan Transportation Research Institute (UMTRI) for calendar years 2000 to 2010 to describe the attributes of crashes involving school buses. BIFA includes data on fatal traffic crashes in the United States involving a bus. We used BIFA data as they were the only source of national crash data we identified that included bus-specific variables like type of operator and bus, and 2010 was the last year for which BIFA data were collected. Cases for BIFA are selected from NHTSA’s Fatality Analysis Reporting System (FARS) file. BIFA supplements the FARS data; UMTRI collected police reports for each crash and trained interviewers to contact owners, operators, or drivers of the buses to collect detailed information on the bus, operator, and driver. Our analysis of BIFA data included variables collected by UMTRI, such as the type of bus, type of operator (school district or contractor), and length of trip, as well as FARS variables, such as driver- and vehicle-related factors, model year of the vehicle, and road and atmospheric conditions. Since the BIFA data were last collected for calendar year 2010, we reviewed NHTSA’s school-transportation-related analysis for 2000 through 2014 to compare the overall number of fatal school-bus crashes during and after BIFA data collection and examine whether there were any trends or changes after 2010. We also examined whether there were any changes to federal rules for school bus vehicles and operators that would substantially change the regulatory landscape for school bus operations after 2010. In reviewing the data and federal rule changes, we found no substantial changes that would raise concerns about using the BIFA data from 2000 to 2010 for our review. 
For BIFA, we identified crashes using the included variable for “school-bus-related crashes.” Second, we analyzed FARS data for calendar years 2011 to 2014, the latest year for which data were available, to examine this more recent FARS data to extend our analysis for certain variables like atmospheric and road conditions and time of day of the crash. For FARS, we implemented guidance NHTSA provided to use four variables from the accident and vehicle data files to identify school-bus-related crashes. Based on interviews with NHTSA and UMTRI officials, as well as reviewing system documentation and electronic data testing, we determined that the data were sufficiently reliable for the purpose of describing the number and type of fatal school-bus crashes. While these data sets allow us to describe the attributes of fatal crashes, the descriptive information is not generalizable to crashes with non-fatal injuries or with property damage only. Moreover, we did not have exposure data, such as the total miles traveled by different types of buses or operators, so we could not calculate crash rates that would allow for directly comparing different types of crashes. To describe federal school bus safety requirements, we reviewed federal laws and regulations on school bus inspections, driver training, and vehicle standards—specifically, vehicle age and seating capacity of school buses. We primarily focused our review on these three areas based on our initial research into school bus safety requirements and the content of the mandate. We reviewed inspection requirements in the Federal Motor Carrier Safety Regulations that would apply to school bus operators, but the scope of our review did not include all other aspects of these regulations, such as hours-of-service requirements for drivers and driver qualifications. We did not examine seat belts as part of our review due in part to NHTSA’s current effort to further research seat belts on all school buses. 
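The variable-based screening of crash records described above can be sketched in code. This is a minimal illustration only: the record layout and field names below are hypothetical stand-ins, not NHTSA’s actual FARS variables, and the real screen applies NHTSA’s guidance on four variables from the accident and vehicle data files.

```python
# Minimal sketch of flag-based screening of crash records.
# NOTE: field names ("school_bus_flag", "body_type", "vehicle_use")
# are hypothetical stand-ins, not NHTSA's actual FARS coding.

def is_school_bus_related(record):
    """Flag a crash record if any screening variable indicates
    school bus involvement."""
    return (
        record.get("school_bus_flag") == 1
        or record.get("body_type") == "school_bus"
        or record.get("vehicle_use") == "pupil_transport"
    )

def screen_crashes(records):
    """Return the subset of crash records that meet the screen."""
    return [r for r in records if is_school_bus_related(r)]
```

In practice, a record matching any one of the screening variables is retained, which mirrors how a crash can be identified as school-bus-related through either the accident file or the vehicle file.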
We also reviewed and analyzed guidance and reports from NHTSA, FMCSA, and the National Transportation Safety Board, including NHTSA’s Highway Safety Program Guideline No. 17, Pupil Transportation Safety; the National Transportation Safety Board’s accident investigation reports involving school buses; and FMCSA’s March 2016 Notice of Proposed Rulemaking and December 2016 Final Rule on entry-level driver training. We also interviewed officials from those agencies to understand the scope and applicability of federal laws and regulations for school bus vehicles and operators. To describe state laws and regulations, we systematically searched laws and regulations for all 50 states to determine the extent to which states set requirements for school bus inspections, driver training, and vehicle standards. Specifically, we searched for state requirements for: (1) school bus inspections; (2) entry-level or refresher training for school bus drivers; (3) maximum age, mileage, or use limits that require retiring or no longer using school buses; and (4) maximum seating capacity for school buses. We conducted this search on state statutes and administrative codes in a legal database. In consultation with GAO’s Office of General Counsel and our librarian, we developed search terms and protocols and used a data collection instrument for each of the requirements to ensure consistent collection of information. For example, for our searches on state requirements for school bus inspections, we used the search term “school bus w/10 inspect!” and increased the proximity of the key words from within 10 words to within 15 and 20 words. When our search returned no results for a state, we then searched the websites of the state’s education, transportation, motor vehicle, and/or police agencies and used any information found from these searches, such as a legal citation or terminology, to direct additional searches in a legal database.
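The proximity-search protocol described above can be approximated in code. This is a rough sketch, not a reproduction of the legal database’s matching rules: it treats the “w/10” connector as a simple word-distance check and the “!” root expander as a prefix match, both of which are simplifying assumptions.

```python
import re

def within_proximity(text, phrase, stem, max_words=10):
    """Return True if `phrase` and any word beginning with `stem`
    appear within `max_words` words of each other in `text`.
    A rough stand-in for a legal database's "w/10" connector and
    "!" root expander (e.g., inspect! matching inspects, inspection)."""
    tokens = re.findall(r"[A-Za-z']+", text.lower())
    phrase_tokens = phrase.lower().split()
    n = len(phrase_tokens)
    # Positions where the full phrase occurs.
    phrase_positions = [
        i for i in range(len(tokens) - n + 1)
        if tokens[i:i + n] == phrase_tokens
    ]
    # Positions of any word sharing the searched root.
    stem_positions = [
        i for i, tok in enumerate(tokens) if tok.startswith(stem.lower())
    ]
    return any(
        abs(p - s) <= max_words
        for p in phrase_positions
        for s in stem_positions
    )
```

Widening the window, as the protocol did from 10 to 15 and then 20 words, simply means calling the function again with a larger `max_words` value.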
We also consulted with our Office of General Counsel on coding the results of our searches in the data collection instrument. After completing our searches, we compared the results of our search on states’ school-bus inspection requirements with the 2011 survey results from South Carolina and NASDPTS on school bus inspection practices to verify our research. We also compared the results of our research on vehicle age with a list provided by the National Conference of State Legislatures and the results of our research on vehicle age and seating capacity with a stakeholder’s compiled list of state requirements and practices on school bus vehicle age/life cycle use and seating capacity. We took steps to reconcile any identified differences, including conducting further research in a legal database and state agency websites and contacting state officials to clarify and verify the information we found in our legal search. We also validated our search results with eight selected states as part of our in-depth review on how selected states implement school-bus safety requirements, as further described below. Finally, our Office of General Counsel reviewed and verified the search results for all 50 states. The scope of our research did not include local requirements, and thus we did not include any local requirements for school bus inspections, driver training, or maximum vehicle age or seating capacity that may be applicable to school bus operators. In addition, our search terms and protocols aimed to identify states with requirements, but due to the nature of keyword searches, we may not have identified all relevant school bus requirements. Further, for states for which we did not identify requirements, we attempted several types of searches to try to find state inspection, driver training, or maximum age or capacity requirements. However, we cannot definitively conclude that there are no requirements in these categories for these states.
To better understand implementation of federal and state rules and whether public and private bus operators face different safety requirements, we performed additional research on and conducted in-depth interviews with state officials from eight selected states. Using School Bus Fleet’s 2013–2014 school year school transportation data, we selected states to include those with the highest number of students transported daily by school bus, the highest annual route miles traveled per student, and variation in the number of school buses owned by states/school districts and contractors. We also selected states that vary geographically and that maintained available data on school-bus-involved accidents and school bus inspections. We selected eight states: California, Illinois, Nebraska, New York, Pennsylvania, Tennessee, Virginia, and Washington. These eight states account for about 28 percent of public K-12 students transported daily on school buses. We conducted semi-structured interviews with state officials from the eight selected states and, when available, collected data on the outcomes of school bus inspections and the age of school buses. These eight selected states are a non-probability sample of states, and thus the information we obtained is used for illustrative purposes and is not generalizable. To identify sources of leading practices, we conducted a literature search to identify leading practices on school bus inspections, driver training, and maximum vehicle age and seating capacity. We reviewed literature for the last 15 years for pertinent studies in peer-reviewed journals, trade publications, and conferences, among others, to identify sources and leading practices. We also interviewed school bus industry stakeholders, including officials from school-bus industry associations, federal agencies, select state agencies, school bus manufacturers, and school bus contractors, to identify sources of leading practices.
We selected stakeholders to represent a range of roles in the school bus industry and the federal and state levels of government. A full list of stakeholders interviewed for this review is provided in table 2 below. In these interviews, we asked stakeholders an open-ended question, allowing them to generate sources of leading practices rather than offering them a list of possible sources. Therefore, not every stakeholder we interviewed commented on whether a particular document or organization represented a source of leading practices; we can only report counts of stakeholders that identified a particular document or organization. We also asked school bus industry stakeholders what areas of additional federal guidance and research, if any, are needed. In identifying sources of leading practices and areas of further federal guidance and research, our questions did not apply to 4 of the 30 stakeholders in both cases. For example, we did not ask federal agencies what additional federal research or guidance would be useful, as we instead asked them about current or future research on school bus safety. The views of these school bus stakeholders are not generalizable to the entire school bus community, but they provide us with valuable insights. We analyzed the content of interviews with stakeholders and identified sources of leading practices from our literature review in the areas of inspections, driver training, and vehicle standards. In our review of eight selected states, we found variation in state requirements for and implementation of school bus inspection, driver training, and vehicle standards for maximum age and seating capacity, as shown in table 3 below. In addition to the contact named above, Susan Zimmerman (Assistant Director), Joanie Lofgren (Analyst in Charge), Carl Barden, Pamela Daum, Leia Dickerson, H.
Brandon Haller, David Hooper, Jennifer Kim, Avani Locke, Grant Mallie, Janet Mascia, SaraAnn Moessbauer, Malika Rice, Amy Rosewarne, and Carter Stevens made key contributions to this report.
School buses transport over 26 million students to school and other activities every day. While school buses have a strong safety record, crashes with fatalities and injuries do occur. Since school buses transport precious cargo—our children—government and industry strive to further improve their safety. Federal and state agencies both oversee school bus safety, and locally, school buses can be operated by school districts or private contractors, working on behalf of school districts. The Fixing America's Surface Transportation Act included a provision for GAO to review school bus safety. GAO examined (1) fatal crashes involving school buses for 2000 to 2014 and (2) federal and state school-bus-related laws and regulations, among other objectives. GAO analyzed two sets of data from the National Highway Traffic Safety Administration and the University of Michigan Transportation Research Institute on fatal school bus crashes for 2000 to 2014, the latest year for which data were available; reviewed federal laws and regulations; and systematically searched state laws and regulations on school-bus inspections, driver training, and maximum vehicle age and capacity in all 50 states. GAO also interviewed federal officials from the Department of Transportation (DOT), school bus industry associations and manufacturers, and other stakeholders. DOT reviewed a draft of this report and provided technical comments that GAO incorporated as appropriate. Based on GAO's analysis of data for 2000 to 2014, 115 fatal crashes involved a school bus on average each year—which is 0.3 percent of the 34,835 total fatal motor-vehicle crashes on average each year. The school-bus driver and school-bus vehicle (e.g., a defect) were cited as contributing factors in 27 percent and less than 1 percent of fatal school-bus crashes, respectively. Seventy-two percent of fatal crashes occurred during home-to-school and school-to-home travel times. 
Limited national data on school bus crashes exist beyond data on fatal school-bus crashes, but some states have richer data—for example, on the type of bus or whether the operator was a school district or private contractor. Federal laws and regulations set requirements for certain aspects of school bus safety, and state laws and regulations in many cases go beyond the federal requirements. Federal regulations for school-bus vehicle standards and driver licensing apply to both school districts and contractors. DOT has reported that new school buses must meet more Federal Motor Vehicle Safety Standards than any other type of new motor vehicle. Federal safety regulations for commercial motor-vehicle operations apply in certain cases, such as for contractors hired by schools to provide transportation for extracurricular activities across state lines. Based on a systematic search of state laws and regulations, GAO found that all 50 states require school bus inspections while most states—GAO found 44—require refresher training for school bus drivers. However, GAO found that less than a quarter of states set specific requirements for the maximum age and seating capacity of school buses. Overall, according to stakeholders GAO interviewed, states' requirements vary by state for school bus inspections, driver training, and vehicles but tend not to differ based on the type of operator.
DHS was created in response to the terrorist attacks on September 11, 2001. Not since the creation of the Department of Defense in 1947 has the federal government undertaken an organizational merger of this magnitude. Enacted on November 25, 2002, the Homeland Security Act established DHS by merging 22 distinct agencies and organizations with multiple missions, values, and cultures. The 22 agencies whose powers were absorbed or in part assumed by DHS came from eight different departments (Agriculture, Commerce, Defense, Energy, Health and Human Services, Justice, Transportation, and the Treasury) and two independent offices (the Federal Emergency Management Agency and the General Services Administration). In addition, DHS merged responsibilities from former agencies to create some new agencies, such as Customs and Border Protection. On March 1, 2003, DHS officially began operations as a new department. DHS is among the largest federal government agencies, with approximately 180,000 employees and an estimated budget of $43.6 billion for fiscal year 2007. DHS’s mission is to lead the unified national effort to secure America, prevent and deter terrorist attacks, protect against and respond to threats and hazards to the nation, ensure safe and secure borders, welcome lawful immigrants and visitors, and promote the free flow of commerce. Six of the seven primary operational agencies, and the Operations Directorate of the department, have identified the need to conduct activities in support of the homeland security mission 24 hours a day, 7 days a week, 365 days a year. The department’s July 2006 organizational chart, as illustrated in figure 1, highlights these six agencies and the Operations Directorate. The three components of DHS that have overall responsibility for the four multi-agency 24/7/365 operations centers were created in response to the events of September 11, 2001, and the subsequent establishment of DHS. 
CBP was created as part of DHS in 2003 by merging portions of the Immigration and Naturalization Service and the U.S. Department of Agriculture with elements of U.S. Customs, in order to protect the nation’s borders by preventing terrorists and terrorist weapons from entering or exiting the United States while facilitating the flow of legitimate trade and travel. CBP sponsors two 24/7/365 multi-agency operations centers: the Air and Marine Operations Center and the National Targeting Center. TSA, established in 2001 (as part of the Department of Transportation), and incorporated into DHS in 2003, protects the nation’s transportation systems to ensure freedom of movement for people and commerce and sponsors the Transportation Security Operations Center. DHS established the Office of Operations Coordination (referred to as the Operations Directorate) after a broad internal review in 2005. The Operations Directorate, which sponsors the National Operations Center (which includes the previous Homeland Security Operations Center), is responsible for coordinating internal and external operational issues throughout the department, conducting incident management, and facilitating rapid staff planning and execution. The three sponsoring components provide overall direction and management for their respective centers. We have previously reported that establishing the new DHS is an enormous undertaking and that a successful transformation requires the new department to do the following: instill the organization with important management principles; rapidly implement a phased-in transition plan; leverage the new department and other agencies in executing the national homeland security strategy; and build collaborative partnerships with federal, state, local, and private-sector organizations. DHS faces significant management and organizational transformation challenges as it works to protect the nation from terrorism and simultaneously establish itself.
For these reasons, in January 2005, we continued to designate the implementation and transformation of the department as high risk. DHS’s Inspector General reported, in December 2004, that integrating DHS’s many separate components into a single, effective, efficient, and economical department remains one of its biggest challenges. We also reported in 2005 that agencies can enhance and sustain their collaborative efforts by engaging in eight key management practices: defining and articulating a common outcome; establishing mutually reinforcing or joint strategies; identifying and addressing needs by leveraging resources; agreeing on roles and responsibilities; establishing compatible policies, procedures, and other means to operate across agency boundaries; developing mechanisms to monitor, evaluate, and report on results of collaborative efforts; reinforcing agency accountability for collaborative efforts through agency plans and reports; and reinforcing individual accountability for collaborative efforts through performance management systems. Although there is no commonly accepted definition for collaboration, in our previous assessment of collaborative efforts among federal agencies we defined it as any joint activity by two or more organizations that is intended to produce more public value than could be produced when the organizations act alone. This report focuses on the actions DHS and its components have taken to make collaboration at multi-agency operations centers as effective as possible. Joint activities take place at operations centers where multiple components staff watchstander positions and provide liaison, expertise, and access to information that would not otherwise be on hand. For this report, we selected the first seven of the eight key practices listed above and assessed the first two key practices together, thereby reducing our focus to six areas. 
We did not address the eighth practice—reinforcing individual accountability for collaborative efforts through performance management systems—because an in-depth examination of component agencies’ performance management systems was beyond the scope of this review. The four multi-agency operations centers each have their own mission and generate different products while performing similar functions and sharing a number of customers. The missions of the AMOC, NTC, and TSOC are tactical, including such activities as monitoring the nation’s airspace, the movement of potential terrorists, and the passengers on commercial flights, respectively. NOC-Watch’s mission is more strategic in that it collects information gathered by the other multi-agency operations centers and provides a national perspective on situational awareness. The products of the four multi-agency operations centers reflect their different missions and range from reports on suspect individuals traveling on commercial flights to reports on suspicious private air and marine craft. The multi-agency operations centers all share some common functions: maintaining situational awareness and information sharing and communications; coordinating internal operations and coordinating among federal, state, local, tribal, and private-sector entities; and managing incidents and making decisions. While all the multi-agency operations centers share common customers, such as foreign, federal, state, and local governments, the NOC-Watch has a larger number of customers, given its role as a hub for overall situational awareness. Of the four multi-agency operations centers, three—AMOC, NTC and TSOC—have tactical yet different missions and provide different products that reflect their respective missions. The NOC-Watch has a more strategic mission in providing an overall assessment of situational awareness. 
The AMOC’s primary mission is to detect, sort, track, and facilitate the interdiction of criminal entities throughout the Western Hemisphere by utilizing integrated air and marine forces, the latest technology, and tactical intelligence. AMOC maintains day-to-day, around-the-clock airspace situational awareness of the nation’s borders through identification and detection of foreign and domestic threats. Created in 1988 by the U.S. Customs Service and located in Southern California, the AMOC was established as the Air and Marine Operations Center on March 1, 2003. In addition to CBP and U.S. Coast Guard personnel, the AMOC is staffed by the Federal Aviation Administration and the Department of Defense National Guard Bureau-Air National Guard, as well as a representative of the government of Mexico. AMOC staff use surveillance systems and databases to detect, identify, and track potential threats and to coordinate the apprehension of criminals using law enforcement air, marine, and ground interdiction forces. Staff utilize a surveillance system that includes an extensive network of over 200 ground-based radar and satellite tracking systems throughout North America and the Caribbean. Staff also use numerous law enforcement and Federal Aviation Administration databases to ensure that U.S. entry policy and procedures are followed. Figure 2 shows the variety of information and data sources employed by the AMOC. Staff can conduct detailed research from a transnational and criminal threat perspective to identify suspect persons, aircraft, and marine vessels. AMOC staff use the resulting information to coordinate air and marine law enforcement activity with various agencies such as the U.S. Coast Guard and Immigration and Customs Enforcement; federal, state, and local law enforcement; the Department of Defense; U.S. and foreign air traffic control facilities; and foreign government coordination centers. 
The AMOC Daily Intelligence Report focuses on suspicious private air and marine craft at the nation’s borders that are detected by radar, eyewitnesses, or surveillance aircraft. The NTC’s mission is to coordinate and support all agency field-level anti-terrorism activities by providing tactical targeting and analytical research and to be a single point of reference for all agency anti-terrorism efforts. NTC monitors the movement of potential terrorists and works to prevent them and any weapons of terror from entering or exiting the country through land, air, and sea ports. Established on October 22, 2001, under the U.S. Customs Service, the NTC, located in Northern Virginia, began 24/7/365 operations on November 10, 2001. In addition to CBP personnel, the NTC is staffed by the U.S. Coast Guard, Immigration and Customs Enforcement, Federal Air Marshal Service, and the Transportation Security Administration. NTC staff use sophisticated information-gathering techniques and analytical tools to examine data containing passenger and flight information. These data include lists of known terrorists, foreign visitors whose official authorization permitting entry into and travel within the United States has elapsed (visa overstays), passport information, and cargo listings to seek potential matches. Any inconsistency identified in the data can trigger additional analysis. Figure 3 shows the variety of information and data sources employed by the NTC. NTC works with a variety of federal stakeholders. For example, the NTC works with the Federal Bureau of Investigation’s Terrorist Screening Center to identify persons on the National Terrorist Watch List. NTC staff also provide information from CBP’s Advance Passenger Information System so that TSA can perform a risk assessment on crewmembers on international flights. Federal Air Marshals use information developed by the NTC to determine if they need to put resources on a specific flight. 
Using NTC capabilities to screen crew, vessel, and cargo data, along with other information, the U.S. Coast Guard determines which vessels and crewmembers warrant further surveillance or assessment and can prioritize its inspection efforts. NTC also helps in implementing the pilot Immigration Advisory Program by reviewing advance information on travelers forwarded by program teams to identify travelers at foreign airports who may present a risk or warrant more intensive examination before they board aircraft bound for the United States. (Passengers whose travel documents are invalid, expired, or appear to have been altered, counterfeited, or obtained through fraud are advised, as is the airline, before they leave their foreign location that they will likely be deemed inadmissible and denied entry upon arrival in the United States.) The NTC reports we reviewed primarily identified individuals at and between domestic ports of entry and certain critical foreign ports. The TSOC’s mission is to provide situational awareness and information sharing in day-to-day coordination and incident management for all transportation security-related operations and issues worldwide by monitoring, responding to, and investigating security incidents involving all transportation sectors. TSOC maintains situational awareness of passengers on commercial flights and works to minimize and mitigate security vulnerabilities of the National Capital Region and critical infrastructure such as commercial airports, rail stations, and pipelines. The TSOC, located in Northern Virginia, began 24/7/365 operations in August 2003. The National Capital Region Command Center constitutes the multi-agency element of the TSOC because it is staffed by other DHS component agencies—specifically the U.S. Secret Service and Customs and Border Protection. 
In addition, representatives of organizations outside of DHS, such as the Federal Bureau of Investigation, District of Columbia Metro Police, Federal Aviation Administration, U.S. Capitol Police, and the U.S. Air Force (Northeast Air Defense Sector), provide watchstanders for the TSOC. As part of its mission, TSOC staff coordinate with federal, state, and local homeland security entities to identify activities that might indicate a threat to national security and isolate indications of impending attack while assessing their potential impact. The TSOC also furnishes alerts and reports to field security organizations while combining intelligence with operational information across all modes of transportation. Last, it monitors incidents and crises, including national special events such as presidential inaugurations and the Super Bowl, for TSA headquarters and makes recommendations to DHS leadership. Figure 4 shows the modes of transportation monitored by the TSOC. The TSOC reports we reviewed provided information on incidents across all modes of transportation, including National Capital Region security incidents, critical infrastructure, and individuals of interest related to the No-Fly List. The NOC-Watch is designed to perform a more strategic mission than the other three multi-agency operations centers. NOC-Watch acts as the primary national-level coordination point for awareness of events that may affect national security or safety. The center is responsible for information sharing, communications, and operations coordination pertaining to the prevention of terrorist attacks and domestic incident management; it fulfills this role by facilitating information sharing with other federal, state, local, tribal, and nongovernmental entities and by fusing law enforcement, national intelligence, emergency response, and private-sector reporting. 
Created as the Homeland Security Operations Center and located in Northwest Washington, D.C., the center was established on February 19, 2003, and redesignated the National Operations Center on May 25, 2006. The NOC-Watch is the 24/7/365 element of the center. In addition to staff from the Operations Directorate, the NOC-Watch includes other DHS staff from 20 components and offices, such as representatives from the U.S. Secret Service, Federal Protective Service, Federal Air Marshal Service, Transportation Security Administration, Customs and Border Protection, U.S. Coast Guard, Federal Emergency Management Agency, U.S. Border Patrol, U.S. Citizenship and Immigration Services, National Biological Surveillance Group, U.S. Computer Emergency Readiness Team, Domestic Nuclear Detection Office, and other DHS directorates. The NOC-Watch also includes representatives from 35 other federal, state, and local agencies, such as the Central Intelligence Agency; Defense Intelligence Agency; National Security Agency; National Geospatial-Intelligence Agency; Federal Bureau of Investigation; Department of the Interior (U.S. Park Police); Drug Enforcement Administration; Bureau of Alcohol, Tobacco, Firearms and Explosives; Virginia State Police; Fairfax County Police; the New York, Boston, and Los Angeles police departments; and a number of other organizations. NOC-Watch staff use information gathered and communicated by the three tactical centers; other DHS operations centers; other federal, state, and local entities; and a wide variety of other information sources to provide overall national situational awareness related to homeland security. The NOC-Watch reports, via the DHS Director of Operations, to the Secretary of Homeland Security and coordinates directly with the White House. It focuses on two goals: (1) the detection, prevention, and deterrence of terrorist attacks and (2) domestic incident management during crises, disasters, or national special events. 
Figure 5 shows some of the sources of information and agencies with which that information is shared. Situation reports prepared by the Operations Directorate’s NOC-Watch that we reviewed contained information reported from other DHS subcomponents and operations centers, such as the TSOC, NTC, and AMOC, as well as external intelligence and law enforcement agencies and the private sector. The NOC-Watch also prepares a Homeland Security Operations Morning Brief that provides information to federal, state, and local law enforcement agencies on the national picture at the sensitive but unclassified level. All four centers conduct common functions to maintain situational awareness and communicate and coordinate with other federal, state, and local governments, as well as private-sector entities. The centers do so to support both the mission of the sponsoring component organization and the underlying homeland security mission of DHS. On the basis of our discussions with center officials and our assessment of documents they provided, we summarized these functions and found that all DHS multi-agency operations centers perform 9 of 11 functions identified in table 2. (According to TSOC officials, the TSOC does not coordinate with foreign governments, and NTC and TSOC officials said they do not exercise command and control functions.) Multi-agency operations centers’ customers include federal, state, and local governments and private-sector entities, along with foreign governments. The NOC-Watch has a larger number of overall customers; as the national-level multi-agency hub for situational awareness and a common operating picture, the NOC-Watch provides information to a wider range of government customers, including federal executive leadership and intelligence and law enforcement agencies at the federal, state, and local levels. 
DHS has leveraged its resources—one key collaborative practice—by having staff from multiple agencies work together at the four operations centers. However, opportunities exist to further implement this and the other relevant practices that our previous work has identified as important to enhancing and sustaining collaboration among federal agencies. For example, not all of the components responsible for managing the operations centers had established goals to define and articulate a common outcome and mutually reinforcing or joint strategies for collaboration (related to two of our key practices); assessed staffing needs to leverage resources; defined roles and responsibilities of watchstanders from agencies other than the managing one; applied standards, policies, and procedures for DHS’s information sharing network to provide a means to operate across agency boundaries; prepared mechanisms to monitor, evaluate, and report on results of the operations centers to reinforce collaborative efforts; and reinforced agency accountability for collaborative efforts through agency plans and reports. The Operations Directorate, established in November 2005 to improve operational efficiency and coordination, provides DHS with an opportunity to more consistently implement these practices that can enhance and sustain collaboration among federal agencies at multi-agency operations centers. The three DHS components responsible for the four multi-agency centers have not developed and documented common goals or joint strategies for their operation that our work has shown could enhance collaboration among the agencies. Officials at the four multi-agency operations centers we visited said they did consider formally documenting working agreements but concluded it was not essential since all of the agencies involved were part of DHS. 
Officials from the NOC said that the lack of formal agreements is a reflection of the speed with which the center was established and the inherent flexibility offered to DHS agencies in order to get them to staff the operations center positions. Nonetheless, as the DHS Office of Inspector General has reported, memorandums of understanding are valuable tools for establishing protocols for managing a national-level program between two organizations. Within DHS, external and internal memorandums of agreement and other interagency joint operating plans are often used to document common organizational goals and how agencies will work together. For example: The National Interdiction Command and Control Plan among the Department of Defense, Office of National Drug Control Policy, and the AMOC highlights an agreement between a DHS component and other federal agencies. The Joint Field Office Activation and Operations Interagency Integrated Standard Operating Procedure describes how a temporary federal multi-agency coordination center should be established locally to facilitate field-level domestic incident management activities related to prevention, preparedness, response, and recovery and addresses the roles and responsibilities of multiple DHS components such as the Federal Emergency Management Agency and Immigration and Customs Enforcement and other federal agencies such as the Federal Bureau of Investigation. Guidelines Governing Interaction Between ICE’s Office of Investigations and CBP’s Office of Border Patrol documents a memorandum of understanding between the Office of Investigations at Immigration and Customs Enforcement and CBP’s Border Patrol, entered into in November 2004, that governs the interaction between the two components and formalizes roles and responsibilities in order to further enhance information sharing. 
Thus, although some DHS components have established a variety of internal and external working agreements, memorandums, and in the case of the Joint Field Offices, standard operating procedures, DHS’s Operations Directorate, which is responsible for coordinating operations, has not provided guidance on how and when such agreements should be used to improve collaboration among the sponsoring and participating components at the operations centers we reviewed. Nor have any of these centers documented goals or joint strategies using these or other types of agreements. Our previous work has shown that memorandums of agreement or understanding and strategic plans can both be used to establish common goals and define joint strategies for how agencies will work together. According to our work, collaborative efforts are further enhanced when staff working across agency lines define and articulate a common federal outcome, or purpose, that is consistent with their respective agency goals and missions. Joint strategies or mutual agreements also contribute to another key area when they are used as a vehicle for identifying and defining more specific expectations of the roles and responsibilities of staff provided by collaborating agencies. The extent to which officials responsible for managing the four multi-agency operations centers had conducted needs assessments to determine the staffing requirements of each center varied. For example, CBP officials conducted an evaluation in June 2005 that addressed AMOC capabilities and continuing staffing needs related to AMOC personnel, but did not address the need for, or responsibilities of, U.S. Coast Guard staff at the center. AMOC officials did cite a requirement for additional staff from the U.S. Coast Guard, as well as a requirement for an Immigration and Customs Enforcement position in a subsequent strategic planning effort (although these requirements had not been filled). 
However, no specific assessment supported the need for these staff positions. NTC officials had not conducted a staffing needs assessment but said they plan to conduct an assessment based upon current targeting programs, the scheduled expansion of existing programs, and the onset of additional CBP targeting programs. They said they plan to include data on the volume of telephone calls handled by the center and the number of information requests completed by the NTC in support of CBP targeting and operations, and they expect to complete the assessment in October 2006. TSOC and NOC-Watch officials said they had not documented a needs analysis for staff from agencies other than the sponsoring agency. Instead, they viewed the cross-agency staffing requirement as a historical edict based on a general assumption that other agency staff expertise was needed to fulfill the mission of their operations center. Our work has shown that collaborating agencies should identify the resources, including human resources, needed to initiate or sustain their collaborative effort and take steps to leverage those resources. Because each agency, or component, has different strengths and limitations, assessing these relative strengths and limitations allows the agencies collectively to obtain resources that would otherwise be unavailable to them individually. Formal assessment of the need for all participating agencies’ staff to perform specific functions allows for the leveraging of resources to more effectively meet the operational needs of each agency or component. While three of the four multi-agency operations centers had developed descriptions for the watchstander position staffed by their own agency, only one center—the AMOC—had developed a position description for staff assigned to the center from another DHS agency. At the AMOC, center officials require that Coast Guard staff meet a standardized set of requirements for radar watchstanders. 
The other centers relied on the components providing staff to define their watchstanders’ roles and responsibilities. Lack of a consistent definition for the watchstander position may lead to people at the same center in the same role performing the same responsibilities differently or not at all. Our work has shown that defining roles and responsibilities both enhances and sustains collaboration among federal agencies. Because of the potentially critical, time-sensitive need for decisive action at 24/7/365 operations centers, it is important that the roles and responsibilities of watchstanders are described and understood by both the staff and the officials responsible for managing the operations centers. Further, a definition of the watchstander role and responsibilities is important for supporting agency officials who must make staffing decisions about assigning qualified and knowledgeable personnel to the centers. Finally, internal control standards require that management and employees establish a positive control environment as a foundation for strong organizational internal controls. According to the standard, one activity that agency officials may consider implementing as part of the control environment is to identify, define, and provide formal, up-to-date job descriptions or other means of identifying and defining job-specific tasks. To collaborate by sharing information through DHS’s primary information sharing system, the Homeland Security Information Network (HSIN), agencies participating in multi-agency operations centers need to be connected to the network and have the guidance that enables its use. In the course of our work, we learned that CBP’s National Targeting Center could not collaborate with other users of HSIN because the system was not connected for NTC watchstanders. Other concerns about the use of HSIN to enhance coordination and collaboration have also been identified by the DHS Inspector General. 
According to the Inspector General, DHS did not provide adequate user guidance, including clear information sharing processes, training, and reference materials needed to effectively implement HSIN. The report noted that in the absence of clear DHS direction, users were unsure of how to use the system. Though DHS officials said other networks such as the Secret Internet Protocol Router Network and the Joint Worldwide Intelligence Communications System are primarily used for coordination of intelligence analysis, the connectivity problem with the primary DHS-wide information sharing system, HSIN, remained unresolved as of September 2006. Our work has shown that to facilitate collaboration, agencies need to address the compatibility of standards, policies, procedures, and data systems used in the collaborative effort. Furthermore, as agencies bring diverse cultures to the collaborative effort, it is important to address these differences to enable a cohesive working relationship and to create the mutual trust required to enhance and sustain the collaborative effort. Frequent communication among collaborating agencies is another means to facilitate working across agency boundaries and prevent misunderstanding. The lack of standards, policies, and procedures for use of HSIN at DHS operations centers could limit the frequency and effectiveness of communications among the centers. With the exception of AMOC, the multi-agency centers have not developed methods to monitor, evaluate, and report the results of joint efforts. For example, the Office of Management and Budget’s assessment of the NOC-Watch for 2005 determined that center officials had not established effective annual or long-term performance goals. Nor were performance measures or other mechanisms in place to monitor and evaluate the joint efforts of multiple DHS agencies at the TSOC and NTC. 
In March 2004, the DHS Office of Inspector General reported that the AMOC did not have organizational performance measures and individual performance standards to assess its effectiveness and productivity. In response, AMOC officials reported to the Inspector General that in January 2004 they began collecting data on a daily basis to measure productivity for the overall operations center as well as for individual watchstanders, including U.S. Coast Guard representatives. Our work has shown that developing means to monitor, evaluate, and report areas for improvement allows agencies to enhance collaboration. Developing performance measures and mechanisms to monitor and evaluate contributions can help management, key decision makers, stakeholders, and customers obtain feedback through internal reports in order to improve operational effectiveness and policy. Developing goals and providing performance results can also help reinforce accountability through joint planning and reporting of collaborative efforts. Neither DHS nor the component agencies responsible for managing the four multi-agency operations centers consistently discuss or include a description of the contribution of collaborative efforts of the multi-agency operations centers in their strategic or annual performance plans and reports. The most recent DHS strategic plan, issued in 2004, neither included a discussion of performance goals nor addressed the joint operations of the multi-agency centers. 
The plan reported only that DHS “will provide integrated logistical support to ensure a rapid and effective response and coordinate among Department of Homeland Security and other federal, state, and local operations centers consistent with national incident command protocols.” CBP’s 2005 annual report on the operations of the NTC does, however, include a section dedicated to the contributions of external liaisons; it describes the roles and responsibilities of other DHS agency personnel, including the Federal Air Marshal Service, Immigration and Customs Enforcement, and the U.S. Coast Guard, and the accomplishments they have made in the center’s operations. In addition, the AMOC strategic plan for 2005 generally discussed the importance of collaboration with other component agencies and included a goal to strengthen component agency partnerships to maximize homeland security strategies. Reports of the components responsible for managing the other centers do not address the roles and contributions of other supporting agencies in accomplishing the centers’ missions. DHS agencies responsible for providing staff to support watchstander positions at multi-agency operations centers managed by other agencies also do not address their participation in the centers’ operations in strategic plans or performance reports. In general, managing and supporting agencies that do mention the operations centers do not include any discussion of the relationship between the participating agencies’ missions or strategies and those of the centers. Our work has shown that federal agencies can use their strategic and annual performance plans as tools to drive collaboration with other agencies and partners and establish complementary goals and strategies for achieving results. These performance plans can also be used to ensure that goals are consistent and, if possible, mutually reinforcing. 
Accountability is also reinforced when strategic and annual performance plans help to align agency policy with collaborative goals. A public accounting through published strategic and annual performance plans and reports makes agencies answerable for collaboration. DHS established a new Office of Operations Coordination in November 2005 (referred to as the Operations Directorate) to increase its ability to prepare for, prevent, and respond to terrorist attacks and other emergencies and to improve coordination and efficiency of operations. In responding to a draft of this report, DHS cited a number of steps that the new directorate plans to take to fulfill this leadership role. Among other things, DHS said it plans to conduct an independent study, initiated in September 2006, to leverage technical and analytical expertise to support expanding the capabilities of the Operations Directorate. In addition, DHS said it plans to move elements of the National Operations Center to the Transportation Security Operations Center in 2007 and, ultimately, to colocate the DHS headquarters and all the DHS component headquarters, along with their respective staffs and operations centers, at one location. DHS also cited a new working group that is developing a national command and coordination capability. While we agree that these proposed leadership efforts could further enhance collaboration among DHS’s component agencies, DHS officials did not provide any information or documentation of these efforts in response to our requests during the course of the review, so we were unable to determine the extent to which these efforts are likely to enhance and sustain departmental collaboration. Nonetheless, further departmental focus on the key practices we have identified could enhance collaboration among the component agencies. 
For example, at the time of our review, the directorate had not taken steps to gather information on the resources available at each center. The director’s office did not have ready access to information such as centers’ budgets or other financial information needed for reporting across the components, the number of staff employed at the multi-agency centers, or the number and type of operations centers managed by the various components. After being directed to the components for budget and staffing information, we found that the managing components of the multi-agency operations centers also did not have ready access to up-to-date information on the number of staff the centers employed. Such information could be useful to the directorate’s efforts to develop a national command and coordination capability and further enhance collaboration among the components with multi-agency operations centers. Directorate officials said that the Operations Directorate had not assumed its full range of responsibilities because it was not fully staffed until March 2006 and because of the revisions to the National Response Plan formalized in May 2006. In responding to a draft of this report, DHS said that the Operations Directorate does not have the authority to direct or exercise control over other components’ operations centers with respect to administration and support, including organization, staffing, control of resources and equipment, personnel management, logistics, and training. 
The Operations Directorate lacks authority to direct the actions of the other components’ operations centers, and obtaining compatible data may be difficult because the reporting systems of several centers were in place before the creation of DHS. Nonetheless, without compatible staffing and financial data, Operations Directorate leadership officials are hampered in their ability to understand and compare the relative personnel and operating costs of the 24/7/365 operations centers and to use such information to promote the expected unity of effort within the department. Enhanced leadership from the Operations Directorate to support consistent reporting of operations centers’ budgets and staffing could also support collaborative actions in two of the previously mentioned key areas: assessing staffing needs to leverage resources and applying standards, policies, and procedures to operate across agency boundaries. In the absence of leadership to support these and other collaborative efforts, DHS officials have not yet taken full advantage of an opportunity to meet the directorate’s responsibilities. The establishment of the Operations Directorate with the express intent of enhancing collaboration and coordination among the department’s operational components provides an opportunity to implement practices that could enhance collaboration among DHS agencies working together at each multi-agency 24/7/365 operations center. Having staff from multiple agencies work together is a way of leveraging resources, one key practice for enhancing collaboration. However, those resources may not be used to their full potential if other steps to enhance collaboration are not taken, and the Operations Directorate could provide guidance to help ensure that the sponsors of the operations centers take the appropriate steps. 
There are multi-agency operations centers that lack common goals and joint strategies; clearly defined roles and responsibilities; compatible standards, policies, and procedures for information networking; consistent staffing assessments; prepared mechanisms to monitor, evaluate, and report on the results of collaborative efforts; and reinforced agency accountability through agency plans and reports. Our previous work has shown that these are all critical components in enhancing collaboration among federal agencies. Given that the collaboration in multi-agency operations centers focuses on gathering and disseminating information on real-time situational awareness related to disasters and possible terrorist activity, it is important that the staff at the centers achieve the most effective collaboration possible. To provide a setting for more effective collaboration among the staff at each multi-agency 24/7/365 operations center, we recommend that the Secretary of Homeland Security charge the Director of the Operations Directorate with developing and providing guidance and with helping to ensure that the agencies sponsoring the centers take the following six actions: define common goals and joint strategies; clarify the roles and responsibilities for watchstanders; implement compatible standards, policies, and procedures for using DHS’s information network to provide a means of operating across agency boundaries; conduct staffing needs assessments; implement mechanisms to monitor, evaluate, and report on the results of collaborative efforts; and address collaborative efforts at the four multi-agency operations centers in plans and reports at the level of each operations center’s managing agency. On October 16, 2006, DHS provided written comments on a draft of this report (see app. III). DHS agreed with the six recommended actions to enhance collaboration at the DHS multi-agency operations centers and said it planned to take action to implement the practices. 
In the draft report, we said that the Operations Directorate had not yet taken actions to fulfill its leadership role and that a lack of leadership by the Operations Directorate to support consistent reporting of operations centers' budgets and staffing limits collaborative actions. DHS did not agree that leadership provided by the Operations Directorate to support collaboration is lacking and provided a number of examples of leadership efforts. Among other things, DHS noted plans to conduct an independent study, initiated in September 2006, to leverage technical and analytical expertise to support expanding the capabilities of the Operations Directorate. In addition, DHS said it plans to move elements of the National Operations Center to the Transportation Security Operations Center in 2007 and, ultimately, to colocate the DHS headquarters and all the DHS component headquarters, along with their respective staffs and operations centers, at one location. We identified the planned actions in the report and agree that these leadership efforts by the Operations Directorate, along with the key practices identified in our governmentwide reviews of interagency collaboration, have the potential to further enhance collaboration among DHS's component agencies. However, because Operations Directorate officials did not provide any information or documentation of these efforts in response to our requests during the course of the review, we were unable to determine the extent to which these efforts are likely to enhance and sustain departmental collaboration. In addition, DHS officials cited what they considered to be misconceptions expressed in the draft report. They said that the Operations Directorate does not have the administrative, budgetary, programmatic, or command and control authority to direct or exercise control over other components' operations centers.
They also said that our draft incorrectly reported that the National Operations Center replaced the Homeland Security Operations Center. Although it was not our intent to imply that the Operations Directorate has administrative, budgetary, programmatic, or command and control authority to direct or exercise control over other components' operations centers, we added a clarifying reference to address DHS's concern. Finally, although we reported that the new National Operations Center includes (rather than replaced) the previous Homeland Security Operations Center, we also added a footnote to further clarify that the scope of responsibilities of the new National Operations Center is greater than that of the Homeland Security Operations Center. We are sending copies of this report to the Senate Committee on Homeland Security and Governmental Affairs, the Permanent Subcommittee on Investigations, the Secretary of Homeland Security, the Assistant Secretary of the Transportation Security Administration, the Commissioner of Customs and Border Protection, and interested congressional committees. We will also make copies available to others on request. In addition, the report will be available on GAO's Web site at http://www.gao.gov. If you or your staff have any questions about this report, or wish to discuss the matter further, please contact me at (202) 512-8777 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IV.
To monitor cyber security, respond to incidents, and direct communications.
To assist in the initiation, coordination, restoration, and reconstitution of national security and emergency preparedness telecommunications services or facilities under all conditions, crises, or emergencies.
To provide warning and intelligence analysis to inform field operators, industry, and TSA leadership.
To provide support for scheduling, law enforcement situations, crisis management, and safety and security-related matters.
To provide information on significant incidents from field and sector offices, providing situational awareness to the Commissioner and senior CBP management.
6. Caribbean Air Marine Operations Center (Regional Operations): To utilize integrated air and marine forces, technology, and tactical intelligence to detect, sort, track, and facilitate the interdiction of criminal entities throughout the Caribbean area.
7. National Airspace Security Operations Center (Regional Operations): To utilize integrated air forces, technology, and tactical intelligence to maintain air domain awareness, and detect, sort, track, and facilitate the interception of intruder aircraft throughout the National Capital Region.
To monitor radio communications among CBP personnel for officer safety purposes, and to coordinate tactical communications and analytical investigative support to various DHS and other law enforcement agencies to support homeland security.
To provide senior management with daily reports and coordination on all significant incidents, events, and matters that have an impact on the mission of ICE and DHS.
To provide timely, effective classified intelligence support to ICE headquarters and field personnel by serving as a clearinghouse for the screening, evaluation, processing, exploitation, dissemination, and coordination of intelligence information.
To provide timely immigration status and identification information to federal, state, and local law enforcement agencies on aliens suspected, arrested, or convicted of criminal activity.
12. Federal Protective Service Mega-Center System (4 regional centers): To provide alarm monitoring and dispatch services to all federally owned and leased buildings.
To maintain national situational awareness and to monitor emerging incidents or potential incidents with possible operational consequences (becomes multi-agency under incident surge conditions).
To facilitate, in coordination with the NOC, the distribution of warnings, alerts, and bulletins to the entire emergency management community using a variety of communications systems.
15. Mobile Emergency Response Support Operations Centers (5 regional centers): To serve as the emergency operations center for FEMA regions and associated state operations centers, providing time-sensitive information flow affecting federal involvement, and to provide a deployed operations center platform using assigned mobile assets during all catastrophic events.
To provide command, control, communication, and monitoring for ensuring the security of the White House complex and surrounding grounds.
To coordinate communications for the receipt, coordination, and dissemination of protective intelligence information and activities that require immediate action in support of protection assignments. Also provides "as needed" information and coordination support for the service.
18. U.S. Coast Guard Command Center: To gather, coordinate, and disseminate information as the direct representative of the Coast Guard Commandant and the National Response Center. Serves as the primary communications link on priority operational and administrative matters between USCG field units, District and Area Commanders, senior Coast Guard officials, DHS officials, the White House, other federal agencies, state and local officials, and foreign governments.
19. Intelligence Coordination Center (includes three 24/7/365 watch locations, with one, the Intel Plot, colocated at the U.S. Coast Guard Command Center): To function as the national-level coordinator for collection, analysis, production, and dissemination of Coast Guard intelligence.
Provides all-source, tailored, and integrated intelligence and intelligence services to DHS, the Coast Guard, the Commandant and staff, the intelligence community, combatant commanders, and other services and agencies. The Intel Plot provides predictive and comprehensive intelligence support to priority requirements of the Commandant and senior staff at Coast Guard headquarters.
To serve as the single federal point of contact for all pollution incident reporting. Also serves as a communications center in receiving, evaluating, and relaying information to predesignated federal responders, and advises FEMA of potential major disaster situations.
21. Regional Command Centers (46), comprising Area Command Centers (2), District Command Centers (9), and Sector Command Centers (35): To serve as points of coordination at various organizational levels for operational command, control, communications, intelligence, and analysis.
Our overall objective was to assess the collaboration among the four multi-agency DHS operations centers. The key questions addressed were as follows: 1. What are the missions, functions, and products of the multi-agency 24/7/365 DHS operations centers, and who are their customers? 2. To what extent has DHS implemented key practices for enhancing and sustaining collaboration at these multi-agency centers? To answer our first objective, we obtained and reviewed information on the missions and functions of all 24/7/365 operations centers in DHS. We visited centers managed by the Operations Directorate, U.S. Customs and Border Protection, the Federal Emergency Management Agency, the Transportation Security Administration, the U.S. Coast Guard, and the Secret Service to observe their operations, interview officials responsible for managing the centers, and identify centers that employed staff from multiple DHS agencies.
We identified four centers that employed staff from multiple DHS component agencies: the Air and Marine Operations Center, the National Targeting Center, the Transportation Security Operations Center, and the National Operations Center-Interagency Watch. We gathered and analyzed information regarding the products the multi-agency centers developed on a regular basis and the primary customers served by the centers. To answer our second objective, we met with responsible officials of the NOC-Watch and the acting Director of the Operations Directorate to discuss the roles and responsibilities of the new organization established as a result of the department’s Second Stage Review. We discussed the transition, current operations, and policy and procedures put in place by the Operations Directorate since the reorganization. We also met with officials from TSA, USCG, CBP, ICE, and the Operations Directorate to discuss how staff are assigned by these agencies to the four multi-agency operations centers. We spoke with watchstanders assigned to several of the centers from other DHS component agencies to discuss their roles and responsibilities at the centers, and the overall mission of the centers to which they had been assigned. We reviewed planning and policy documents including DHS’s strategic plans and performance and accountability reports as well as our prior reports and reports from DHS’s Inspector General that addressed DHS management issues. For the four national operations centers we identified as multi-agency DHS centers, we also reviewed strategic plans, standard operating procedures, and annual reports and performance and accountability reports. 
We assessed DHS's efforts and actions taken by the Operations Directorate to encourage coordination among the multi-agency centers and to promote collaboration among the staff representing DHS agencies at the centers, to determine the extent to which they reflected key practices that our previous work has shown can enhance and sustain collaborative relationships among federal agencies. The eight practices we identified for enhancing and sustaining collaboration are as follows: defining and articulating a common outcome; establishing mutually reinforcing or joint strategies; identifying and addressing needs by leveraging resources; agreeing on roles and responsibilities; establishing compatible policies, procedures, and other means to operate across agency boundaries; developing mechanisms to monitor, evaluate, and report on results; reinforcing agency accountability for collaboration efforts through agency plans and reports; and reinforcing individual accountability for collaborative efforts through performance management systems. For the purposes of this review, we selected the first seven of the eight practices. We combined our discussion of the implementation of the first two practices—defining and articulating a common outcome and establishing mutually reinforcing or joint strategies. We did not address the eighth practice—reinforcing individual accountability for collaborative efforts through performance management systems—because an in-depth examination of component agencies' performance management systems was beyond the scope of this review. We selected examples that, in our best judgment, clearly illustrated and strongly supported the need for improvement in specific areas where the key practices could be implemented. We conducted our work from October 2005 through September 2006 in accordance with generally accepted government auditing standards.
In addition to the contact named above, Christopher Keisling, Kathleen Ebert, Dorian Dunbar, Scott Behen, Keith Wandtke, Amanda Miller, Christine Davis, and Willie Commons III made key contributions to this report. Additional assistance was provided by Katherine Davis.
|
Because terrorists do not operate on a 9-5 schedule, the Department of Homeland Security (DHS) and its operational components have established information gathering and analysis centers that conduct activities 24 hours a day, 7 days a week, 365 days a year. Staff at these operations centers work to help detect, deter, and prevent terrorist acts. DHS has determined that 4 of its 25 operations centers require higher levels of collaboration that can be provided only by personnel from multiple DHS agencies, other federal agencies, and sometimes state and local agencies. For these four multi-agency operations centers, this report (1) describes their missions, products, functions, and customers and (2) assesses the extent to which DHS efforts to promote collaboration among the multiple agencies responsible for the centers reflect key practices for enhancing and sustaining collaborative efforts. To do so, GAO visited operations centers, reviewed data and reports from the centers, and interviewed center and other DHS officials. Each of the four multi-agency 24/7/365 operations centers has a different mission and therefore produces different products, yet all contribute to the larger mission of DHS and have similar functions and customers. Customs and Border Protection runs two of the four multi-agency operations centers--the National Targeting Center and the Air and Marine Operations Center. The former monitors the international movement of potential terrorists and produces reports on suspect individuals; the latter maintains situational awareness of the nation's airspace, general aviation, and sea-lanes and produces reports on suspicious private air and marine craft.
The Transportation Security Administration's operations center monitors passengers on commercial flights; works to mitigate the vulnerabilities of commercial airports, rail stations, and pipelines, the National Capital Region, and critical infrastructure across the nation; and produces reports on these topics. DHS's Operations Directorate runs the National Operations Center Interagency Watch and works to enhance efficiency and collaboration among DHS components. This operations center has a more strategic mission in that it uses information gathered by the other operations centers to provide overall national situational awareness, and it prepares security briefs for federal, state, and local law enforcement agencies. Opportunities exist to enhance collaboration among 24/7/365 multi-agency operations centers. While DHS has leveraged resources by having staff from multiple agencies work together, the centers lack joint strategies for collaboration and staffing needs assessments, and they have not established a definition of watchstander roles for all agencies at each center. The centers also lack standards and procedures for using DHS's primary information sharing network; mechanisms to monitor, evaluate, and report on results; and reinforced accountability through agency plans and reports. GAO's previous work has shown that such practices are effective in enhancing and sustaining collaboration among federal agencies. The establishment of DHS's Operations Directorate in 2005 provides a means to promote implementation of more collaborative practices at the centers.
|
Through its participation in a series of aid effectiveness forums beginning in 2005, the U.S. government, along with other donor and partner countries, has committed to improving the effectiveness of assistance programs, in part through increased use of partner-country systems and strengthening of local capacity to achieve development results. For example, the 2011 Busan Partnership for Effective Development Cooperation states that donor and partner countries will use country systems as the default approach for implementing development assistance, working with both donors' and partner countries' governance structures. In keeping with these commitments, the 2010 Presidential Policy Directive on Global Development, USAID's 2011-2015 Policy Framework, and USAID's Local Systems Framework all stress the need to build partner-country capacity to achieve shared development goals. USAID's Local Solutions initiative aims to increase funding for partner-country systems, including partner governments, private sector, and nongovernmental organizations, that have sufficient capacity—and to help strengthen their capacity when needed—in order to achieve sustainable development outcomes. In 2013, USAID created the senior position of Local Solutions Coordinator in the agency's Counselor's Office. The Local Solutions Coordinator is responsible for coordinating the functions and activities of the various headquarters offices and missions involved in carrying out the Local Solutions initiative. According to data USAID made available in May 2015, although overall obligations to partner-country systems increased in fiscal years 2010 to 2014, obligations to partner governments declined from about $929 million to $327 million during this period, as shown in figure 1. Following the launch of USAID Forward in 2010, USAID began to revise various policies related to planning, project design and implementation, and monitoring and evaluation—often referred to as USAID's program cycle.
While many of these policies apply broadly to all USAID assistance, some apply specifically to G2G assistance. For the purposes of this report, we identified the following key components of USAID’s program cycle as they relate to G2G assistance: Policy: USAID policy related to G2G assistance is documented primarily in the agency’s Automated Directives System (ADS), which contains the policies and procedures that guide the agency’s operations. USAID first issued a policy chapter specifically related to G2G assistance in 2011 and updated it in March 2012 and July 2014. Planning: The initial phase of USAID’s program cycle entails designing projects that are consistent with the mission’s country development strategy, assessing and addressing risks associated with implementing the projects, and preparing planning documents for mission director approval. Implementation: This phase entails selecting appropriate funding mechanisms and implementing G2G assistance activities according to the terms and conditions established in bilateral assistance agreements and other legal documents. Monitoring and evaluation (M&E): This phase entails conducting audits of partner-government entities and assessing the progress and results of G2G assistance activities. Appendix II provides a detailed summary of these components. Agencies should have in place appropriate mechanisms to help ensure achievement of program results. GAO’s Standards for Internal Control in the Federal Government, which we refer to as accountability standards, emphasizes the importance of identifying goals and objectives, identifying and mitigating risks, and establishing and tracking performance indicators, among other things. As continuous, built-in components of agencies’ operations, such measures help provide reasonable assurance that funds are used as intended and help agencies meet their objectives. 
USAID policy addresses accountability standards calling for identification, analysis, and mitigation of risks, and we found that USAID missions completed detailed risk assessments. However, missions did not always integrate risk mitigation measures into project and M&E planning when required by USAID policy. We found that risk assessments had often been completed after planning documents had been finalized. In addition, M&E plans we reviewed often did not incorporate steps USAID and partner governments agreed upon to address risks and build capacity. We also found that USAID missions missed opportunities to coordinate risk assessment activities with other donors. By requiring missions to conduct detailed fiduciary risk assessments and incorporate them into project planning, USAID policy addresses accountability standards calling for identification, analysis, and mitigation of risk. According to USAID's policy on G2G assistance, before providing funds directly to a partner-government entity, missions must complete a fiduciary risk assessment of that entity. The goal of this assessment is to establish risk mitigation measures that will be integrated into the design of the project to help ensure that funds are managed appropriately. Possible risk mitigation measures include technical assistance for capacity building, disbursement of funds in tranches contingent on the achievement of certain milestones, establishment of benchmarks for the partner country to demonstrate progress in correcting financial management weaknesses, and limits on cash advances under cost-reimbursable funding mechanisms. Missions are required to include the findings of the risk assessments, as appropriate, in the planning documents approved by the mission director. In addition, missions should include provisions for ensuring partner-government compliance with risk mitigation measures in the M&E plans for projects with G2G assistance activities.
Since fiscal year 2012, legislation governing the use of funds for direct G2G assistance has placed conditions on such assistance, including requiring the assessment of the partner-government entity that will receive the assistance funds and determination regarding whether it has the systems required to manage those funds. According to USAID, the agency meets these assessment requirements by means of its policies and procedures relating to G2G assistance. On the basis of our review of 29 planning documents for G2G assistance activities with fiscal year 2012 obligations, we found that missions conducted risk assessments and formulated risk mitigation plans, as required. Table 1 provides illustrative examples of risks and associated recommendations identified in fiduciary risk assessment reports from our three case-study countries—Nepal, Peru, and Tanzania. The planning documents we reviewed showed that in some cases, missions took concrete steps to avoid risk. In Tanzania, for example, the mission identified four government organizations as potential recipients of G2G funds for a governance project but proceeded with only three of them, because the assessment identified significant risks that would have required extensive mitigation measures. However, for most planning documents we reviewed, missions did not integrate risk mitigation measures into project design and M&E planning when required. Of the 29 planning documents we reviewed, 20 included no discussion of identified risks, and 17 of the planning documents did not address measures for mitigating risks. Furthermore, 25 of the 29 M&E plans did not integrate follow-up for ensuring partner government compliance with agreed-upon risk mitigation measures. 
In most cases, missions had not completed the fiduciary risk assessments prior to finalizing project or activity planning: for 14 of the 20 planning documents that did not include risk mitigation information, we found that the fiduciary risk assessment was either under way or not yet initiated at the time of project planning. In some cases, our document review enabled us to identify possible reasons missions completed planning before the corresponding risk assessments were completed. In one instance, the planning document included non-G2G activities for which a fiduciary risk assessment was not required, according to the document; to avoid delays in the approval of these non-G2G activities, the mission approved the larger project and proceeded while the G2G-related risk assessment was under way. In another instance, the USAID mission had a previous funding relationship with the partner-government agency and thus may have decided to proceed because it was already aware of potential risks. USAID policy on G2G assistance clearly underscores the importance of integrating risk mitigation measures into project and M&E planning. When missions finalize project planning without having the information from completed fiduciary risk assessments, they may not incorporate into the design of the project appropriate safeguards or measures that would strengthen partner-country systems. Furthermore, not integrating partner-government follow-up on risk mitigation measures into project M&E plans weakens oversight and accountability and creates potential reporting inefficiencies. In some cases, missions documented risk mitigation measures and compliance-monitoring plans in other project-related documents. For example, in Tanzania, the mission sent implementation letters to the government entities receiving G2G assistance outlining agreed-upon action plans for mitigating risks and stating that the entities would report progress to the mission on a regular basis.
On the basis of our fieldwork in Nepal, Peru, and Tanzania and our review of 29 planning documents for projects with G2G assistance funding obligated in fiscal year 2012, we found that missions missed key opportunities to work with other donors. According to USAID policy on G2G assistance, missions may consider various means of coordinating with other donors, such as by conducting joint risk assessments, involving other donors in USAID’s assessment, sharing the results of USAID’s fiduciary risk assessments, or other measures. In the three countries we visited, USAID mission and partner-government officials, as well as other donor representatives, told us that USAID’s risk assessments provided valuable opportunities for learning and relationship building, but they also cited, in all three countries, opportunities for improved coordination. In Nepal, representatives of bilateral and multilateral donor organizations participating in a working group dedicated to improving public financial management stated that each donor conducts its own risk assessment and that they were not aware of the results of USAID’s assessment. They also stated that there were opportunities for donors to better coordinate their risk assessment efforts and share information, thereby decreasing duplicative efforts and eliminating unnecessary burdens on the government of Nepal. In Peru, the Swiss Agency for Development Cooperation assessed the management capacity of a subnational government partner that also underwent a USAID fiduciary risk assessment. Swiss officials stated they were not aware of the results of USAID’s assessments, and USAID’s assessments made no reference to the Swiss assessment. 
In Tanzania, officials from a key recipient of G2G funds stated that the government organization had previously undergone capacity assessments and received technical assistance from the Swedish International Development Agency, but the findings of these assessments were not reflected in USAID’s risk assessment. In addition, in our review of planning documents for 29 G2G assistance projects, we found that 18 (about two-thirds) included general information about the project’s relationship to other donors’ activities, but none of these 18 described how USAID planned to work with other donors to assess risks or follow up on mitigation plans and steps. Although we did not find any examples of risk assessments conducted jointly with other donors in the 29 planning documents we reviewed, in 2014, the USAID mission in Senegal conducted a risk assessment of the Senegalese Ministry of Health and Social Welfare jointly with the World Bank. USAID headquarters officials told us they consider this type of joint assessment to be a best practice. In addition, USAID headquarters officials noted that mission officials in Rwanda, Egypt, and Indonesia worked with other donors, including the World Bank, on public financial management capacity assessments. Nevertheless, mission officials in the three countries we visited told us that donor coordination on risk assessment can be difficult and cited several reasons, among them that USAID has more rigorous risk assessment requirements, donor budget and project planning cycles may not coincide with USAID’s time frames, and donor working groups may be organized around pooled funding arrangements in which USAID does not participate. In addition, two of the USAID missions had yet to determine who should take the lead on donor coordination focused on improving partner-government public financial management. 
Despite such difficulties, some USAID mission officials and donor representatives we spoke with described potential benefits of coordination on risk assessments. For example, they told us that since donors’ risk assessments tend to produce similar results, a lack of coordination among donors leads to duplication and increased costs associated with conducting the assessments, costs borne by donors (including USAID) and partner governments alike. Moreover, by not coordinating on risk assessments, USAID misses opportunities to build relationships among donors that can help strengthen implementation of partner countries’ risk mitigation activities, including efforts to strengthen partner-government capacity. USAID policy on G2G assistance addresses accountability standards related to mitigating risk and safeguarding funds by encouraging missions to select one of three funding mechanisms for G2G assistance. We found that missions frequently established funding mechanisms whereby USAID reimburses partner governments for costs related to achievement of results. In addition, consistent with USAID policy addressing accountability standards related to the establishment of control activities, missions employed G2G assistance agreements and corresponding implementation letters with partner governments to commit funds and set objectives and conditions for funding, among other things. USAID policy on G2G assistance addresses accountability standards related to mitigating risk and safeguarding funds by encouraging missions to select a funding mechanism that best achieves the purpose of the project or activity, fosters and deepens the partner government’s public financial management capacity, efficiently implements the project or activity, guarantees accountability, and promotes sustainability. Missions generally choose from among three possible funding mechanisms for G2G assistance: cost reimbursement, fixed-amount reimbursement, and resource transfer. 
Cost reimbursement: USAID reimburses the partner-government entity for actual costs and expenditures incurred in carrying out the project activities, up to an estimated total cost specified in advance.
Fixed-amount reimbursement: USAID reimburses an amount agreed to in advance based on unit of output, such as kilometers of roads built, or on associated project milestones, after the mission has verified that quality standards have been met.
Resource transfer: USAID provides a transfer of funds or commodities to the partner government. Disbursement is generally dependent on the completion of specific actions by the partner government.
Since 2012, legislation governing the use of funds for direct G2G assistance states that such assistance should be made on a cost-reimbursable basis. Our review of 29 planning documents for G2G assistance activities with fiscal year 2012 obligations showed that, in nearly all of these cases (26 of 29), USAID missions employed reimbursement-based mechanisms. In 11 of the 26 cases, missions also allowed funds to be advanced to the partner country. USAID policy allows for cash advances for projects that have been approved outside of the partner government's budget cycle or when funding from the partner government is not available. In such cases, the partner government is required to provide documentation of the proper use of the funds. Finally, in 3 cases, USAID missions provided resource transfers. In our three case study countries, we noted the following examples of missions using these three types of funding mechanisms: In Nepal, the USAID mission used a resource transfer for a democracy and governance project, contributing to a multidonor trust fund managed by the government of Nepal; the mission also planned to use a fixed-amount reimbursement agreement to fund an accompanying capacity-building project with the Ministry of Peace and Reconstruction.
In Peru, the mission specified cost reimbursement as the funding mechanism in its planning document for a health, education, and alternative development project implemented through the regional government of San Martín. The planning document stated that this mechanism was appropriate because it would provide the mission flexibility to make adjustments during project implementation based on the regional government’s performance or in the event of any unforeseen circumstances. In Tanzania, the USAID mission signed a fixed-amount reimbursement agreement with the Tanzania National Roads Agency for a rural roads rehabilitation project. However, to mitigate the agency’s lack of resources to finance the project start-up, USAID provided a 20 percent cash advance, conditional on the transportation agency’s agreement to certain terms. USAID policy related to use of assistance agreements and implementation letters addresses accountability standards regarding documentation of significant events and establishment of control activities. According to USAID policy on G2G assistance, assistance agreements between USAID and partner governments commit U.S. funds; these agreements also generally set forth agreed-upon terms regarding time frames; expected results; means of measuring results; and resources, responsibilities, and contributions of participating entities for achieving a clearly defined objective. In addition, USAID policy on G2G assistance states that missions can use implementation letters, which are formal correspondence from USAID to another party, to commit funds, detail project implementation procedures, specify the terms of an agreement, record the completion of conditions precedent to disbursements, and approve funding commitments and mutually agreed-upon modifications to project descriptions. 
Since 2012, legislation governing the use of funds for direct G2G assistance requires USAID to enter into formal agreements with partner governments on the objectives of this assistance. On the basis of our review of 29 planning documents for projects with G2G assistance funding obligated in fiscal year 2012, we found that USAID used one or more of four types of assistance agreements (see table 2) and associated implementation letters. For example, the USAID mission in Nepal has implemented G2G assistance through a broad assistance agreement with the national government with specific provisions spelled out in various implementation letters exchanged with the Ministries of Health and Population and Education, among others. Similarly, USAID Peru has implemented its G2G assistance through two broad assistance agreements, the first signed in 2008 and the second in 2012. The mission used implementation letters to approve work plans and establish funding amounts, among other things, with three government entities. In addition, the USAID mission in Tanzania implemented some of its G2G assistance through a strategic objective grant agreement with the national government to improve accountability and oversight of public resources through increased citizen engagement; the mission then used implementation letters to establish funding amounts, work plans, and reporting requirements with the National Audit Office, Public Procurement Regulatory Authority, and Ethics Secretariat. Audit requirements that apply to USAID’s G2G assistance are a key control for monitoring G2G assistance. Some of the USAID missions included in our review provided audits of G2G assistance they had collected when required. We found that these audits revealed weaknesses in partner countries’ management of assistance funding. However, the audits were often submitted late, limiting their usefulness as a monitoring tool. 
In addition, we found that project-level plans for M&E rarely included indicators or evaluation questions for assessing the degree to which G2G assistance activities would build local systems capacity, increase country ownership, or enhance sustainability—the three interrelated goals of the Local Solutions initiative. Audit requirements that apply to USAID's G2G assistance are a key control for monitoring G2G assistance and thus support proper stewardship of U.S. government resources. According to USAID policy on audits, when a financial audit is required, the completed audit is to be submitted no later than 9 months after the end of the audit period. The main determinant for conducting an audit is whether G2G assistance recipients will expend more than $300,000 in the given fiscal year. USAID's Office of Inspector General (OIG) reviews submitted audits and establishes recommendations for action. USAID policy on audits states that missions receiving such recommendations should take whatever steps are necessary to respond to the recommendations and provide documentation of the actions they take. On the basis of our review of 18 audits provided by five USAID missions, we found that these missions collected audits and used them to identify weaknesses in the management of G2G assistance, but the frequently late submission of these audits to USAID limited their usefulness as a monitoring tool. In response to our request for audits of G2G assistance, five USAID missions provided 18 financial audits. Six of the 18 audit opinions were unqualified, meaning the auditors found no significant problems. However, 12 of the audits received qualified audit opinions because of questions about costs identified by the audits. Examples of costs questioned by the audits included payment of value-added tax, grants or advances to other organizations, and training- and travel-related expenses.
In addition, during its reviews of the audits, OIG identified, in 6 of them, additional questioned costs that it believed did not comply with the terms of the award agreement or lacked supporting documentation. Finally, auditors reported material weaknesses in internal controls in 14 of the 18 financial audits and a lack of compliance with agreements, regulations, or laws in 17 of the 18 audits. The auditors' negative findings in these areas included payments to contractors for unverified work, procurement from suppliers not on approved vendor lists, and improper cash advances, among other things. On the basis of its reviews of submitted audits, OIG made recommendations to USAID missions in all 15 of the OIG audit reviews we received. According to audit tracking data and supporting documentation provided by USAID, missions have taken final action on most of the recommendations in the OIG audit reviews we received. For example, one OIG audit review included a recommendation for USAID Nepal to correct deficiencies related to procurement and internal controls; in response, the mission agreed to ensure that goods and services are procured from authorized vendors only. In another example, OIG instructed USAID Ethiopia to determine whether questioned costs of about $28,000 were allowable or unallowable and, if appropriate, to recover unallowable costs; the mission found the costs to be unallowable and recovered the funds. Nevertheless, on the basis of our review of these audits, we found that two-thirds (12 of 18) were submitted late (see table 3); in one case, OIG indicated it had received the audit report about a year late. The late submission of audits delays subsequent audit follow-up activities required by USAID policy, including OIG's review as well as USAID mission follow-up on OIG recommendations.
For example, on the basis of its review of an annual audit of a government entity in Nepal, OIG recommended that the USAID mission ensure that the government entity correct one internal control weakness and address certain questioned costs, among other things. However, because the audit was submitted 1 year late—and near completion of the G2G assistance activity—the mission notified OIG that it would not take further action on the recommendations. The mission determined that although it did not plan to provide additional assistance to the government entity at that time, it would ensure corrective actions were taken prior to providing any future assistance. Late audit submission reduces the audit’s usefulness for selecting timely and appropriate responses to the audit findings—such as recovering funds, putting in place additional safeguards, or identifying ways to enhance financial management capacity. Moreover, by allowing weaknesses to continue unaddressed, late audits of G2G assistance activities increase the risk that those activities will not achieve their goals as efficiently and effectively as possible. USAID policy on M&E for G2G assistance incorporates accountability standards through the identification of objectives and related performance indicators. USAID policy on M&E requires missions to describe in their project planning documents indicators and, when appropriate, evaluation methods that will be used to assess achievement. Furthermore, project planning documents, in describing the project’s M&E plan, must link to missions’ country development strategies and mission-wide performance management plans. In addition, USAID policy on G2G assistance states that carefully defining M&E roles and responsibilities during project design is critical for this type of assistance. 
In our review of the M&E plans included in 29 planning documents for G2G activities with funding obligated in fiscal year 2012, we found that missions included general project-level M&E information, but often did not specify how they would monitor or evaluate achievement of the Local Solutions goals that the missions included in their mission-level strategies. Although some missions have begun to develop ways to measure and track progress in achieving these goals, at the time of our review, USAID did not have agency-wide guidance on how to do so. The country development strategies of 13 missions we reviewed that obligated G2G funding in fiscal year 2012 included strengthening partner-government capacity, enhancing and promoting country ownership, and increasing sustainability—the three goals of the Local Solutions initiative—among their development objectives. For example, one of USAID Nepal's three development objectives is "more inclusive and effective governance," while one of USAID Peru's three development objectives is "management and quality of public services improved in the Amazon Basin," and one of USAID Tanzania's three development objectives is "effective democratic governance improved." However, we found relatively little information in the project-level planning documents we reviewed about how missions would track progress toward these goals. In our review of the M&E plans included in 29 planning documents for G2G assistance activities with fiscal year 2012 obligations, we found that nearly all of them included general M&E information—such as periodic progress reporting, illustrative indicators, and general plans for evaluating program results—as well as considerations related to program sustainability. However, 18 of 29 planning documents we reviewed made no mention of indicators for measuring capacity, ownership, or sustainability, and 24 lacked evaluation plans or questions addressing these goals.
Our previous report on Local Solutions noted specific weaknesses in USAID's proxy indicator for tracking Local Solutions progress—the percentage of mission program funds obligated to partner-country systems. In addition, we noted that while a USAID-commissioned study concluded that increasing funding to partner governments was associated with improved capacity of partner governments in some countries, the study also highlighted the need for more evidence demonstrating the impact of this approach to funding development assistance relative to other funding approaches. At the time of our prior review, USAID officials told us that other approaches existed within the agency for measuring progress toward strengthening partner-country systems and promoting sustainable development, particularly project-level indicators and evaluation data. Some of the planning documents we reviewed did include indicators or evaluation plans, suggesting that missions have begun to develop ways to measure and track progress in achieving the three Local Solutions goals of strengthening capacity to implement programs, enhancing and promoting country ownership, and increasing sustainability. For example, the planning document for a nutrition project in Ghana envisioned conducting an impact evaluation to assess the relative effectiveness in achieving results of direct G2G funding versus an indirect funding model. The same planning document also included several expected results related to local government capacity, such as strengthening district assemblies' capacity to manage direct donor funding. In addition, the M&E plan for a USAID early-education project in Nepal identified, as an illustrative evaluation question, the prospects for scale-up and sustainability of the project as a regular activity of the local Ministry of Education.
Finally, the planning document for a democratic governance and accountability project in Tanzania indicated that an evaluation would identify the keys to sustainability of enhanced public resource oversight, as well as constraints to wider adoption of accountability practices. This planning document also stated that indicators of citizen perceptions of governance and accountability would be tracked through a survey in targeted districts. The President's Emergency Plan for AIDS Relief (PEPFAR), in which USAID is heavily involved, and other USAID-specific initiatives and programs have published guidance addressing how to measure and track progress toward enhancing capacity, country ownership, and sustainability. For example, PEPFAR's guidance on capacity building provides illustrative examples of indicators, such as percentage of PEPFAR-supported government staff transferred to government salaries, number of workers trained and percentage of trainees retained, and percentage of partners with on-time reports and unqualified audits. With regard to sustainability planning, PEPFAR's guidance calls on PEPFAR country teams to develop sustainability M&E plans. Similarly, the M&E guidance for the U.S. government's global hunger and food security initiative (Feed the Future) states that it will measure public sector capacity and program sustainability primarily by tracking partner-government budgets allocated to agriculture and nutrition. With regard to measuring country ownership, the Global Health Initiative's interagency paper on country ownership cites increases in health spending in the partner country and in direct funding to its government as possible indicators. Finally, USAID's strategic framework for democracy, human rights, and governance cites improved governance and institutional capacity as key expected results of USAID activities.
USAID’s July 2014 policy on G2G assistance allows missions to collaborate with partner governments to identify indicators and select evaluation questions that address capacity building and sustainability. According to USAID headquarters officials, an internal discussion paper on M&E for G2G assistance activities elaborates on these concepts, and the agency is currently reviewing tools and methods used by missions to measure performance of partner governments. In addition, according to USAID, as of March 2015, the agency is in the process of developing supplemental guidance on indicators that can be used to track results of strengthening public financial management activities. Nevertheless, at the time of our review, USAID did not have agency-wide guidance on how to collect data or evaluate the development hypothesis that channeling funds through partner-government systems helps to achieve Local Solutions goals. Without integrating indicators or evaluations for assessing progress toward Local Solutions goals into the plans for ongoing and future projects that include G2G assistance activities, USAID missions risk committing resources to unproven funding strategies as the agency executes its plans to expand G2G assistance in scale and scope. Moreover, USAID missions forgo an opportunity to contribute to empirical knowledge about the effects of channeling funds through partner-government systems. USAID's policies guiding the processes that missions follow to plan, implement, and monitor and evaluate G2G assistance generally reflect an international consensus on how best to achieve development outcomes as well as accepted accountability standards. As designed, these policies permit USAID to work toward its goals of strengthening local system capacity, country ownership, and sustainability while providing reasonable assurance that U.S. resources are being used as intended.
We found that USAID policies require that missions incorporate safeguards throughout the program cycle of planning, implementing, and monitoring and evaluating G2G assistance; however, we also found that missions have not yet fully applied these safeguards in all cases. In the cases we reviewed, USAID had carried out required fiduciary risk assessments, documented project planning, utilized assistance agreements and funding mechanisms, conducted audits, and devised key elements of monitoring and evaluation. However, missions in some cases had not completed the risk assessments in a timely manner, hampering their efforts to integrate assessment findings and mitigation measures into project planning and M&E plans, when required. Missions also encountered difficulties coordinating risk assessment and related activities with other donors, potentially leading to inefficiencies and less effective oversight of partner countries’ efforts to address financial management weaknesses. Because required audits we reviewed often were submitted late, the subsequent chain of OIG review and mission response also was delayed, decreasing the likelihood of resolving important audit findings such as questioned costs and other financial management weaknesses. Finally, though some missions demonstrated that they had begun to envision how to monitor and evaluate whether G2G assistance is achieving project goals while also enhancing capacity, country ownership, and sustainability, the agency as a whole has yet to identify indicators or evaluation approaches that would support expansion of these efforts. We recommend that the USAID Administrator take the following five actions to improve accountability for G2G assistance: 1. develop an action plan to improve the timeliness of risk assessments so that these assessments can better inform project planning; 2. develop an action plan to ensure that M&E plans for G2G assistance activities incorporate risk mitigation measures; 3. 
disseminate information to missions regarding best practices for coordinating risk assessments with other donors; 4. identify the factors contributing to late submission of required audits and develop a strategy to improve on-time audit submission and follow-up; and 5. develop and disseminate guidance on assessing the effects of G2G assistance on partner-country capacity, ownership, and sustainability, including through the identification of indicators and evaluation approaches. We provided a draft of this report to USAID for review and comment. USAID provided technical comments on the draft, which we incorporated as appropriate. USAID also provided written comments, which are reprinted in appendix IV. In its written comments, USAID agreed with all five of our recommendations and described steps taken, planned, or under way that it believes respond to the recommendations. With regard to the first two recommendations, given the actions it already has completed or has scheduled for completion by the end of 2015, USAID requested that we either remove the recommendations from our final report or indicate that the recommended actions have been completed and that we consider the recommendations implemented and closed. We appreciate USAID’s detailed description of its reported actions—including revised policy, training, and other guidance—aimed at improving the timeliness of risk assessments and ensuring that monitoring plans for G2G assistance incorporate risk mitigation measures. We will work expeditiously with USAID to collect and review evidence documenting its actions to address the first two recommendations. We also look forward to following up with USAID to monitor and collect information on the steps the agency noted it has already taken or planned to take in response to our other three recommendations. We are sending copies of this report to appropriate congressional committees, the Administrator of USAID, and other interested parties. 
The report is also available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3149 or [email protected]. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V. Our objectives in this report were to assess the extent to which U.S. Agency for International Development (USAID) policies and practices related to (1) planning, (2) implementing, and (3) monitoring and evaluating government-to-government (G2G) assistance provide reasonable assurance that this assistance is used as intended. To address these objectives, we reviewed USAID policy outlined in the agency's Automated Directives System (ADS) related to planning, implementing, and monitoring and evaluating G2G assistance activities. We also interviewed USAID officials in Washington, D.C., about the policies we reviewed. Some of the policy documents we reviewed apply broadly to all USAID assistance, while others were specific to G2G assistance. Specifically, the chapters we reviewed were the following: ADS Chapter 220: Use and Strengthening of Reliable Partner Government Systems for Implementation of Direct Assistance (first issued August 2011, revised in March 2012 and July 2014); ADS Chapter 201: Planning (most recently revised in December); and ADS Chapter 203: Assessing and Learning (most recent revisions in January and November 2012 and January 2013). In our summary of these policies, we drew from the most recent versions available at the time of our review, but in conducting our analysis, we used the versions that were in place at the time that we developed our tools for analysis.
We also reviewed other chapters referenced in these policies, such as ADS 591: Financial Audits of USAID Contractors, Recipients, and Host Government Entities and ADS 350: Grants to Foreign Governments, as well as supplemental guidance (e.g., Public Financial Management Risk Assessment Framework Manual and Key Bilateral Funding Mechanisms). To assess the degree to which these policies reflect generally accepted accountability standards, we compared the most recent versions of these policies with relevant sections of GAO’s Standards for Internal Control in the Federal Government, which we refer to as accountability standards. These standards outline ways agencies can improve accountability, such as by assessing and mitigating risk and carrying out defined policies and procedures. We mapped the USAID policies listed above to relevant sections of the accountability standards, noting ways in which USAID policy addresses specific factors or elements that contribute to a supportive environment for accountability. To review mission planning for G2G assistance activities, we began by identifying project appraisal documents (PAD) and activity approval documents (AAD) as key sources of information for our review. Starting in 2012, USAID policy on planning required missions with approved country development cooperation strategies (CDCS) to document planning for projects (which consist of one or more activities) in PADs. Prior to 2012, missions documented project and activity planning using AADs. We next identified 22 USAID bilateral missions that had obligated more than $500,000 in G2G assistance in fiscal year 2012 and had completed a stage 1 rapid appraisal at the time of our review, according to USAID Local Solutions data and other information provided by the agency. 
We then requested planning, implementation, and monitoring and evaluation documents—including risk assessments, activity approval documents, project appraisal documents, assistance agreements, implementation letters, and audits—from these 22 USAID missions. During the course of this preliminary work and as we reviewed the submitted documents, we removed 8 missions from the scope of our review for the following reasons. First, according to USAID headquarters and mission officials, all of the fiscal year 2012 G2G funds obligated by missions in Egypt and Mali were deobligated after fiscal year 2012. Second, mission officials in Indonesia and Georgia determined that all of each mission’s respective fiscal year 2012 obligations had not, in fact, been implemented through partner government entities and, as such, were incorrectly characterized as G2G assistance. Third, project and activity planning documents for G2G assistance activities with fiscal year 2012 obligations were not available for USAID missions in Ethiopia and Rwanda. In response to our request for documents, these missions provided assistance agreements and implementation letters; in the case of Ethiopia, the mission stated that the agreements and letters documented authorization of G2G assistance activities. Finally, because GAO, the USAID Office of Inspector General (OIG), and the Special Inspector General for Afghanistan Reconstruction had each reviewed various aspects of USAID’s G2G assistance in Afghanistan and Pakistan, we did not include those two countries in our document review. (App. III provides a summary of other reviews of USAID’s G2G assistance in Afghanistan and Pakistan.) As a result of this process, we reviewed all 29 project appraisal or activity approval documents provided by 14 USAID missions: Armenia, Barbados, Ghana, Haiti, Honduras, India, Liberia, Mozambique, Nepal, Peru, Senegal, South Africa, Tanzania, and Zambia. 
Table 4 provides a list of the projects with G2G assistance activities for which we reviewed planning documents, by USAID mission. To conduct our review of these planning documents, we developed a data collection instrument to gather information on the required elements of PADs, as described in USAID policies on G2G assistance, planning, and monitoring and evaluation (M&E). Table 5 provides information on the data fields in our data collection instrument. Because USAID requirements differed for PADs and AADs (the two types of planning documents we reviewed), we tracked which of these documents each mission used and took this into consideration as we conducted our analysis of the information gathered. To determine the extent to which these documents contained the required elements, we reviewed each planning document and recorded any relevant information we found for each of the elements in our data collection instrument. We then analyzed this information and determined whether the information provided met requirements outlined in the USAID policies described above. With regard to project-level M&E plans, while we recognized that USAID missions may refine project M&E plans after completing project design, our interest was in the degree to which missions had integrated M&E into planning for G2G assistance activities. Accordingly, we reviewed the M&E information provided in the planning documents we collected and identified cases where these documents (1) included general M&E information and (2) specifically addressed sustainability, country ownership, or capacity. Finally, we also reviewed assistance agreements and implementation letters associated with the G2G activities in our review for information we did not find in the planning documents, including the funding mechanism used to implement the G2G activity and partner-government compliance with risk mitigation measures. 
To identify examples of types of risks identified in USAID risk assessments, we selected illustrative examples from our case study countries for inclusion in this report. We selected these examples to demonstrate the type of information contained in these risk assessments, including risks identified, the risk level (i.e., low, medium, high, or critical), and the assessor’s recommendation for mitigating each risk. To obtain insights into the use of financial audits as a key monitoring tool for G2G assistance, we requested the most recent completed financial audits from 22 missions with G2G assistance funds obligated in fiscal year 2012. In response to this request, 5 USAID missions provided 18 financial audits: Ethiopia (1), India (2), Nepal (7), Peru (7), and Rwanda (1). We reviewed audits submitted by these missions in response to our request, but did not seek to validate that all required audits were conducted or submitted. Accordingly, our findings are limited to the audits provided. We also reviewed agency or OIG reviews, memos, and related documentation provided by USAID headquarters and USAID missions. We recorded the following information from these documents: the type of auditor (third-party contractor, host country supreme audit institution, mission, or OIG), audited entity, time frame for audit and submission, audit findings, OIG recommendations, and status of implementation of the recommendations for each audit. The analysis we conducted is a reflection of the documentation provided by USAID, including the documents’ limitations. For example, OIG recommendations incorporate the findings and recommendations of the third-party contractor, supreme audit institution, and USAID auditors. We considered it reasonable to assume that if OIG had closed all its audit review recommendations, then the underlying auditors’ findings and recommendations for that audit could also be considered closed. 
We also selected 3 USAID missions—Nepal, Peru, and Tanzania—for in-depth case studies. We chose these missions on the basis of fiscal year 2012 G2G funding levels, sector diversity (G2G assistance in at least two sectors, such as education or health), progress in implementing projects and activities, and geographical diversity. While the results of our case studies cannot be projected across all USAID missions, these 3 missions provide what we believe to be an illustrative mix of USAID's G2G assistance activities. While in these countries, we conducted site visits and interviewed USAID and partner-government officials as well as representatives of other donor countries and civil society. We conducted this performance audit from April 2014 to June 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Following the launch of USAID Forward in 2010, USAID began to revise various policies related to the agency's planning, project design and implementation, and monitoring and evaluation—often referred to as USAID's program cycle. These policies guide the agency's assistance program activities and operations. While many of these policies apply broadly to all USAID assistance, some apply specifically to G2G assistance. For the purposes of our report, we describe the program cycle in three stages: planning, implementation, and monitoring and evaluation. We summarize key components of USAID's program cycle as they relate to G2G assistance below and conceptualize these components in figure 2.
According to USAID’s Automated Directives System Chapter 201, planning begins at the mission level, with the development of a country development cooperation strategy (CDCS). The CDCS reflects the agency's development approach in each country and articulates how USAID's strategy reflects partner-country priorities. Regarding G2G assistance, the policy states that missions should consider building local capacity, including that of partner governments, to achieve sustainable development results. According to USAID's ADS Chapter 220: Use and Strengthening of Reliable Partner Government Systems for Implementation of Direct Assistance, planning for G2G assistance activities also entails risk assessment and formulation of risk mitigation plans; these assessments and plans are key elements of broader project planning, which is summarized in a project appraisal document (PAD). USAID policy on planning states that a CDCS must include goals, results, and indicators, among other things, which help focus USAID investments in key areas affecting partner countries' overall stability and prosperity. The policy states that missions should consider using partner-government systems in order to develop their capacity and improve sustainability—two key Local Solutions goals—during development of their CDCS. We found that these considerations were reflected in various parts of the CDCS, notably in general discussions of how the mission is addressing USAID initiatives as well as in the results framework, which identifies objectives and expected results. Table 6 summarizes key elements of CDCSs for USAID missions in our three country case studies: Nepal, Peru, and Tanzania. According to USAID policy on planning, while developing the CDCS, missions considering the use of partner-government systems generally must conduct a "stage 1 rapid appraisal," which is a country-level examination of the partner government's public financial management environment and associated risks.
The stage 1 rapid appraisal is used to determine whether G2G assistance is feasible—in other words, whether to proceed to the next risk assessment stage—and informs development of the CDCS. In addition, as of July 2014, USAID policy states that, when appropriate, certain missions may be asked to undertake an expanded democracy, human rights, and governance review for G2G assistance in order to aid consideration of the reputational risk to the U.S. government as well as the risk that U.S. government resources could be misused in a way that damages political freedoms or human rights or benefits a central government at the expense of its citizens. Following completion of the stage 1 rapid appraisal, missions may decide to conduct one or more risk assessments of partner-government organizations (e.g., ministries or subnational agencies), known as “stage 2” risk assessments. Intended to inform the larger project design process, stage 2 risk assessments identify fiduciary risks and propose measures to address them. According to USAID’s policy on G2G assistance, USAID missions generally must complete a fiduciary risk assessment as part of the overall project design and authorization process before obligating funds to a partner government for implementation of G2G assistance activities. According to USAID policy on planning, the PAD is used by missions to document the complete design of the project and serve as a reference document for project authorization and subsequent implementation. PADs must define the following: the development problem to be addressed by the project and how it links to the mission CDCS; a monitoring and evaluation (M&E) plan, including expected results and indicators; the financial plan and budget; and the overall project implementation plan. Table 7 describes projects with G2G activities in the three case-study countries.
According to USAID policy on G2G assistance, PADs for projects with G2G activities must incorporate the findings of stage 2 risk assessments and a plan for mitigating risks identified in the fiduciary risk assessment. Possible risk mitigation measures may include the following: disbursement of funds in tranches, technical assistance for capacity building, inclusion of milestones and benchmarks for demonstrating progress in correcting financial management weaknesses, USAID “no objection” reviews of actions taken by partner-government ministries or agencies receiving assistance before proceeding to the next step, and limits on advance of funds. The policy also states that risk mitigation plans should be incorporated into the project M&E plan, which is a required part of the PAD, and include provisions for ensuring partner-government follow-up on any risk mitigation measures through periodic progress reports or meetings with partner-government officials as part of the project’s M&E plan. In addition, the policy states that missions may consider various means of coordinating with other donors, such as by conducting joint risk assessments, involving other donors in USAID’s assessment, sharing the results of USAID’s risk assessments, or other measures. USAID’s policy on G2G assistance outlines use of assistance agreements and implementation letters as well as selection of funding mechanisms for G2G assistance. The assistance agreements and implementation letters specify the type of funding mechanism USAID will use for the G2G assistance project or activity. USAID policy on G2G assistance also describes factors missions should consider when selecting funding mechanisms. 
According to USAID policy, assistance agreements between USAID and partner governments set forth mutually agreed-upon terms regarding time frames, results expected to be achieved, means of measuring those results, resources, responsibilities, and contributions of participating entities for achieving a clearly defined objective. In addition, because missions obligate funds through assistance agreements, missions must go through a set of preobligation requirements designed to ensure adequate planning prior to committing funds. USAID implements G2G assistance using one or more of four types of assistance agreements:
Development objective agreement: under USAID policy, missions obligating funds through development objective agreements must develop separate agreements for each development objective in an approved CDCS.
Bilateral project agreement: used to implement specific projects.
Limited scope grant agreement: used to award a grant to a partner-government entity for project obligations of less than $500,000.
Program assistance agreement: used to provide resource transfers in the form of foreign exchange or commodities.
In addition, USAID missions use implementation letters, which are formal correspondence from USAID to another party, and can serve several functions, including detailing project implementation procedures, specifying the terms of an agreement, recording the completion of conditions precedent to disbursements, and approving funding commitments and mutually agreed-upon modifications to project descriptions. In some cases, missions use assistance agreements to obligate funds to several projects or activities implemented by different partners. For G2G assistance activities developed under some of these types of agreements involving multiple partners, missions may subobligate funds to partner-government entities—such as central government ministries and regional and local governing authorities—through the use of implementation letters.
USAID policy effective as of July 2014 states that the assistance agreement or implementation letter should incorporate risk mitigation measures. USAID policy describes factors missions should consider when selecting funding mechanisms for G2G assistance. The goal in each case is to select the funding mechanism that will best achieve the purpose of the project or activity, foster and deepen the partner government’s public financial management capacity, efficiently implement the project or activity, guarantee accountability, and promote sustainability. According to USAID policy, selection of the appropriate funding mechanism is also an important means of mitigating risk and safeguarding funds. USAID policy outlines three funding mechanisms: Cost reimbursement: USAID reimburses the partner-government entity for actual costs and expenditures incurred in carrying out the project activities, up to an estimated total cost specified in advance. Cost reimbursements require missions to prepare a budget that reasonably estimates the cost of implementing the project, with the understanding that the final amounts may be further refined. Cost reimbursements may be used when unit costs cannot be estimated with sufficient accuracy at the beginning of the project because of price fluctuations over the life of a project that are outside the control of the partner government. Because USAID reimburses actual costs incurred based on these estimates, the mission is responsible for closely monitoring project implementation to help ensure that it is on schedule and to resolve problems as they arise. When the partner government is ready to request a reimbursement for costs incurred during project implementation, it submits the request according to procedures specified in the assistance agreement or the implementation letter, along with certified financial reports detailing the amount of expenditures incurred and supporting documents, such as contracts, invoices, and payments.
The reports and supporting documents are subject to USAID review and audit procedures outlined in the agreement. Under this funding mechanism, USAID may provide cash advances for projects that have been approved outside of the government’s budget cycle or when funding from the partner government is not available. Fixed amount reimbursement: USAID reimburses an amount agreed to in advance, per output or associated milestone, after the mission has verified that quality standards have been met. This mechanism requires that the mission and the partner government invest a significant amount of time and resources to develop cost estimates for outputs and associated milestones during the design phase of the project. The partner-government entity implementing the project submits design specifications and cost estimates for each output or associated milestone for approval by the mission. The mission independently verifies that the estimate is reasonable and negotiates payment amounts with the partner government for each output or milestone. The amount of the mission’s contribution to the project is thereby fixed and the partner government bears the responsibility for any unforeseen cost increases. Similarly, if actual costs are less than estimated costs, the mission’s payment to the partner government is not reduced. However, the mission may make periodic adjustments for subsequent payment amounts in certain cases, such as unforeseeable inflation or price increases. Once the cost estimate has been established under this funding mechanism, the mission’s monitoring and oversight of the project is significantly less than that required for the cost reimbursement mechanism because the mission’s primary role is to verify that the outputs or associated milestones have been completed and meet the agreed-upon quality standards. 
In addition, during project planning, the mission must also determine that the partner-government entity has the qualified management staff with sufficient technical skills and experience to implement the project in a timely manner. As with the cost reimbursement mechanism, USAID may also provide cash advances under this funding mechanism, as long as these funds are then liquidated based on successful completion of outputs or associated milestones rather than actual costs incurred. Resource transfer: USAID provides a generalized resource transfer in the form of foreign exchange or commodities to the partner government. According to USAID policy, resource transfer is used for either (1) sector program assistance, which provides cash or in-kind assistance used to carry out wide-ranging development plans in a defined sector without restriction on the specific use of funds, or (2) balance-of-payments or general budget support, commonly known as cash transfers. The transfer of resources is generally dependent on the completion of specific actions by the partner government. For example, the provision of funds under sector program assistance must be directly linked to the implementation of specific policies, institutional reforms, or other partner-government actions necessary to achieve agreed-upon development objectives. These actions must be specified directly or by reference in the assistance agreement as conditions that must be established before these funds are disbursed, and the mission is required to document how it reached the decision to disburse funds. USAID’s general audit requirements outlined in ADS Chapter 591: Financial Audits of USAID Contractors, Recipients, and Host Government Entities apply to G2G assistance. The main determinant, in most cases, for conducting an annual audit is whether G2G assistance recipients expend more than $300,000 in G2G assistance funds in the given fiscal year. 
In addition, both USAID’s ADS Chapter 203: Assessing and Learning and its policy on G2G assistance establish M&E requirements. According to these documents, missions should begin preparing for M&E activities during the planning stage and must document M&E planning in the planning document for each project or activity. Notably, USAID policy on G2G assistance states that carefully defining M&E roles and responsibilities during project design is critical for this type of assistance. According to USAID policy on audits, non-U.S.-based organizations—including partner governments—expending $300,000 or more of USAID-funded awards must be audited annually. In addition, a closeout audit must be performed for all awards in excess of $500,000. According to the guidelines, audits may be performed by independent audit firms, or by a government’s supreme audit institution, and must be in accordance with auditing standards approved by the U.S. Comptroller General. Completed financial audits are to be submitted to the USAID Office of Inspector General (OIG) for review no later than 9 months after the end of the audited period. Upon completing its review, OIG establishes recommendations for action, if appropriate, and provides copies of the audit reports to the responsible USAID management. According to USAID policy, designated mission officials maintain each mission’s annual audit inventory, decide when to conduct audits, and coordinate with OIG to develop the annual audit plan. According to USAID, in practice, the controller at each mission fulfills these duties and liaises with the audit manager at USAID headquarters. Controllers track audit requirements, timing, and completion, as well as audit recommendations and implementation status. According to USAID, as of March 2015, missions utilized two databases for tracking audits: the first to track audit timing and the second to monitor audit recommendation follow-up.
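The audit thresholds summarized above can be sketched as a simple decision rule. This is a hypothetical illustration of the policy as described in this report; the function name and inputs are assumptions, not part of any USAID system.

```python
# Hypothetical sketch of the audit-triggering rules summarized above:
# an annual audit when fiscal-year expenditures reach $300,000 or more,
# and a closeout audit when a closing award exceeds $500,000.
# Names and structure are illustrative assumptions, not USAID tooling.

ANNUAL_AUDIT_THRESHOLD = 300_000    # annual audit at $300,000 or more expended
CLOSEOUT_AUDIT_THRESHOLD = 500_000  # closeout audit for awards over $500,000

def audits_required(fy_expenditures: int, total_award: int, award_closing: bool) -> list:
    """Return which audits the summarized policy would call for."""
    required = []
    if fy_expenditures >= ANNUAL_AUDIT_THRESHOLD:
        required.append("annual financial audit")
    if award_closing and total_award > CLOSEOUT_AUDIT_THRESHOLD:
        required.append("closeout audit")
    return required

print(audits_required(fy_expenditures=450_000, total_award=2_000_000,
                      award_closing=False))
# → ['annual financial audit']
```

Under this sketch, a recipient spending $450,000 in a fiscal year on an ongoing $2 million award would trigger only the annual audit; the closeout audit applies once the award closes.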
According to USAID, the agency was in the process of introducing a new agency-wide database for tracking audits of non-U.S.-based organizations, which, when fully operational, would maintain a record of all non-U.S. vendors receiving USAID funds, as well as the timeliness of audits of these organizations. USAID policy on M&E requires missions to describe how they will collect data and assess achievement during project planning in what is known as the project M&E plan. Furthermore, project planning documents, in describing the project’s M&E plan, must link to missions’ CDCS and mission-wide performance management plans; they are to be used to measure progress toward planned results and identify the cause of any delays or impediments during project implementation. Moreover, the policy states that defining the project M&E plan during project planning aids in adapting implementation to achieve sustainable results and future project planning. Notably, USAID policy on M&E for G2G assistance activities states that carefully defining M&E roles and responsibilities during project design is critical for G2G assistance. According to U.S. Agency for International Development (USAID) data, in fiscal years 2010 through 2013, the agency obligated between about $44 million and $468 million per fiscal year in government-to-government (G2G) assistance in Afghanistan and between about $149 million and $461 million in Pakistan. (See fig. 3.) Summarized below are key findings from reviews of USAID’s G2G activities conducted by the USAID Office of Inspector General (OIG) in Afghanistan, the OIG in Pakistan, the Special Inspector General for Afghanistan Reconstruction (SIGAR), and GAO. In 2010, along with other donors, the United States pledged to provide at least 50 percent of development assistance funds directly through the Afghan budget by 2012.
According to SIGAR, USAID and the government of Afghanistan signed a memorandum of understanding in December 2010 in support of the goals, objectives, and mechanisms for effective assistance in Afghanistan. SIGAR also reported that the memorandum of understanding focused on maximizing opportunities presented by USAID-funded assistance to increase capacity, institutional growth, and public ownership of the development process in Afghanistan. The memorandum also laid out financial requirements to ensure that direct assistance funds are used as intended, according to SIGAR. USAID’s assistance provided directly through the Afghan budget includes host-country contracts, G2G awards, and contributions to certain multidonor trust funds. According to USAID data, in 2014, the agency obligated funds for G2G assistance in the following sectors: agriculture, education, health, governance, rule of law and human rights, and private sector competitiveness. While USAID Afghanistan does not have a country development cooperation strategy, the mission has taken steps to conduct fiduciary risk assessments of several Afghan government entities. In 2011, we reported that USAID had not completed preaward risk assessments before providing funds to two Afghan government entities. In 2014, SIGAR reported that the mission had contracted with private firms to conduct fiduciary risk assessments of 16 ministries and found that all 16 ministries were unable to manage direct funds without taking risk mitigation measures recommended in these assessments. The mission’s internal review of 7 of these ministries also found that these ministries were unable to manage funds without the implementation of significant risk mitigation measures. 
In 2012, the USAID Administrator approved the mission’s request to waive compliance with agency requirements for assessing risks associated with using partner-government systems and documenting any risk mitigation plans for funds appropriated through fiscal year 2013. In spite of this waiver, SIGAR recommended that the USAID Administrator require compliance with all USAID requirements for the use of partner-government systems, with the exception of the country-wide stage 1 assessment. USAID responded that in spite of the approved waiver, USAID Afghanistan continues to comply with all USAID requirements for the use of partner-government systems. In 2011, we reported that USAID had not consistently followed its own policies for assessing risks associated with funds provided to a World Bank–administered trust fund for Afghan reconstruction. The Afghanistan Reconstruction Trust Fund was established in 2002 as a vehicle for donors to pool resources and coordinate support for Afghanistan’s reconstruction. We reported that for its initial $5 million contribution to the trust fund in 2002, USAID could not provide documentation supporting risk assessment procedures prior to disbursement, but determined afterward that (1) the trust fund had a comprehensive system in place for managing the funds and (2) the World Bank had a long history in managing multidonor pooled funding mechanisms. Similarly, the mission did not make preaward determinations for 16 of the 21 subsequent modifications to its contribution amounts. USAID agreed with our recommendation that the agency ensure adherence to its policies for assessing risks associated with multilateral trust funds and revised its guidance on awards to public international organizations in 2011. In their reviews of USAID Afghanistan’s G2G activities, SIGAR and the OIG both identified issues related to implementation.
According to SIGAR, while USAID had instituted several controls to help protect its direct assistance funds, the mission had not ensured full implementation of a key control activity—the inclusion of corrective actions to be taken by the Afghan government entity as conditions precedent to the disbursement of funds in USAID’s agreements with the Afghan government. SIGAR noted that the mission had incorporated only a very small percentage of the risk mitigation measures identified in the fiduciary risk assessments into the assistance agreements, which are signed by the mission and the Afghan government and outline the terms of the assistance. SIGAR recommended that the mission develop, for each ministry with a completed risk assessment, a plan that defines how each identified risk is being or will be mitigated, and suspend disbursements until these plans are completed. USAID agreed with this recommendation, and stated that the mission had prepared such plans for six ministries receiving assistance. The mission further noted that the agency’s use of conditions precedent is only one control activity for mitigating risk in a suite of interventions used in its work with the Afghan government. Regarding funding mechanisms, an OIG review of USAID’s financial management controls in G2G assistance found that most of USAID/Afghanistan’s G2G activities may not count as G2G assistance as described in USAID policy. According to USAID policy, to the extent possible, missions must avoid funding the establishment of separate donor-funded project management or implementation units that operate outside the existing partner-government structures. USAID aims to strengthen those government institutions already established by the partner government rather than create or maintain separately operated project management or implementation units that may be unsustainable in the long run.
Similarly, USAID policy states that while missions may use host country contracts to engage with partner governments, this funding mechanism is different from using partner-government systems and therefore is not counted toward the agency’s 30 percent Local Solutions target. OIG found that most of the mission’s G2G activities in Afghanistan had been implemented through project implementation and management units. For example, USAID provides funds for an education program to a nongovernmental organization, which hires a team of consultants to work in the Ministry of Education to manage and implement the activities under this program. OIG considered the use of project implementation and management units a key risk mitigation measure that helped safeguard funds and thus did not take issue with this practice or make any recommendations. Similarly, a SIGAR review of USAID’s health programs in Afghanistan also noted that the mission funds this activity through a host country contract, which is managed by a separate grants and contracts management unit. In addition, SIGAR found that USAID Afghanistan’s use of cash advances in one G2G activity made funds more vulnerable to waste, fraud, and abuse because the activity is funded with monies paid in advance of costs incurred. USAID disagreed, stating that the activity is funded on a reimbursable basis through advances and liquidations. The October 2014 OIG review of USAID’s G2G activities in Afghanistan identified additional issues related to the mission’s implementation of these activities.
OIG found that USAID staff were not properly involved with the Afghan ministries’ procurement procedures required to mitigate risks; mission staff were not fully aware of their responsibilities for overseeing G2G activities; the mission did not properly document expectations concerning project objectives, results, resources, and timelines so as to avoid misunderstandings with the Afghan government; and transactions were often recorded late in the USAID accounting system. The mission agreed with all OIG recommendations and reported on steps it planned to take to address these issues. In 2011, we reported on U.S. efforts to build public financial management capacity in the Afghan government and provided information on USAID-funded projects that provide training, mentoring, coaching, and technical assistance. We found that USAID had not consistently established baselines and targets, or reported actual performance data, and recommended that the agency establish targets and ensure that implementing partners report performance data. USAID agreed with these recommendations and noted steps it was taking to address them. Regarding audits, 2014 OIG and SIGAR reports stated that the mission was conducting audits for all G2G activities with expenditures over $300,000 in a given fiscal year as called for in USAID policy. According to SIGAR, USAID contracted with an accounting firm to perform audits of all G2G activities in Afghanistan. Examples of audit objectives included assessments of project internal controls, determination of validity and reliability of information, and determination of whether the ministry was complying with agreement terms and applicable laws and regulations related to the USAID-funded program. However, according to the SIGAR report, these audits had not been completed within the 9-month period required by USAID policy.
SIGAR stated that USAID’s lack of timely and regular audit results makes it difficult for the agency to take action to identify and reconcile ineligible expenditures and address other issues with direct assistance implementation. USAID acknowledged the need for timely third-party audits, stating that it has modified its audit requirements and is now contracting and actively managing the required audits of the ministries. In addition, OIG found that the mission did not fully adhere to the audit requirements as described in project documents nor did the mission ensure Afghan government adherence. As a result, contracts for audits were not awarded annually and audits were not completed on time. OIG recommended that the mission implement procedures to validate that audits had been conducted prior to disbursing funds and modify the audit requirements in its G2G activity documents to describe the requirements for the audit process. The mission agreed with these recommendations and explained steps it had taken and planned to take to address these issues. The Enhanced Partnership with Pakistan Act of 2009 authorized up to $1.5 billion a year for development, economic, and democratic assistance to Pakistan for fiscal years 2010 through 2014. The act authorized civilian assistance for a wide range of activities, including projects to build the capacity of government institutions, promote sustainable economic development, and support investment in people through education and health programs. The act also encouraged, as appropriate, the use of Pakistani organizations, including Pakistani firms and community and local nongovernmental organizations, to provide this assistance. In order to increase the capacity of Pakistani organizations to manage U.S. funds and to implement this strategy in accordance with international commitments, USAID Pakistan launched the Assessment and Strengthening Program (ASP) in October 2010. 
The goals of this program are to assist potential Pakistani implementing partners, including government of Pakistan organizations, to (1) increase their capacity to manage and account for U.S. government development assistance funds, (2) reduce the vulnerability of the funds to waste and misuse, and (3) increase speed and efficiency in getting USAID development resources to the intended beneficiaries. According to USAID data, in 2014, the agency obligated funds for G2G assistance in the following sectors: agriculture, education, and infrastructure. In 2011, we reported that USAID planned to shift its program implementers from U.S.-based partners to Pakistani organizations, including local, provincial, and federal government and nongovernmental organizations. To mitigate risks associated with providing funds to organizations with limited institutional capacity, USAID guidance directed missions to conduct a preaward assessment of the organizations’ internal controls and financial management systems. We found that USAID guidance at the time did not contain information on whether weaknesses identified in the preaward assessment must be addressed or whether the assessment’s recommendations to enhance the accountability of U.S. funds must be implemented. For Pakistani organizations that were required to undergo a preaward assessment, we found that not all contracts, grants, or agreements required these organizations to address weaknesses identified in the preaward assessment. We recommended that USAID require Pakistani organizations identified as high or medium risk to address weaknesses identified in the risk assessment. USAID agreed with our recommendation and provided examples of steps the agency had taken to address identified weaknesses.
Furthermore, according to USAID policy, if a mission is planning to increase the amount of total estimated funding for existing G2G activities implemented by a previously approved government entity by more than 50 percent of the initially authorized amount, or authorizes an additional amount of more than $20 million, an updated assessment must be conducted and documented to ensure that the entity’s public financial management systems are sufficient to bear the increased risk associated with the increased funding levels. This updated assessment must also include a revalidation of the risk mitigation plan for every approved partner-government entity receiving the funding increase. In a 2013 OIG review of USAID’s G2G assistance programs in Pakistan, OIG auditors found that the mission had not reassessed the government of Pakistan implementing entities as required by USAID policy. For example, the mission increased its commitment to provide funds to the partner-government entity administering the Federally Administered Tribal Areas to a ceiling of $611 million as of October 2012 from an initial commitment of $55 million in 2010 without updating the fiduciary risk assessment. USAID cited various reasons for not updating the assessments, including conflicting agency and mission policies. While USAID agreed with the OIG recommendations to reassess partner government implementing entities and develop a plan for full compliance with USAID policy, at the time of the release of the OIG report, the agency had not reached a decision on how to address these issues. In response to this report, the mission stated that it had submitted a waiver on compliance with agency requirements for assessing risks associated with using partner-government systems, and, at the time of the review, was awaiting approval from USAID headquarters.
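The funding-increase thresholds that trigger an updated assessment, as described above, amount to a simple check. The sketch below is a hypothetical illustration of that rule as summarized in this report; the function name and inputs are assumptions, not USAID tooling.

```python
# Hypothetical illustration of the reassessment trigger summarized above:
# an updated assessment is required when total estimated funding for an
# approved entity's existing G2G activities grows by more than 50 percent
# of the initially authorized amount, or by an additional amount of more
# than $20 million. Names are illustrative assumptions.

def reassessment_required(initial_amount: float, additional_amount: float) -> bool:
    return (additional_amount > 0.5 * initial_amount
            or additional_amount > 20_000_000)

# The Pakistan example reported by OIG: a commitment raised from an initial
# $55 million to a $611 million ceiling exceeds both thresholds.
print(reassessment_required(55_000_000, 611_000_000 - 55_000_000))  # → True
```

Under this rule, the increase OIG cited would have required an updated fiduciary risk assessment on either criterion alone.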
According to OIG, USAID Pakistan had been providing G2G assistance under the Enhanced Partnership with Pakistan Act since 2009, prior to the launch of the USAID Forward initiative in 2010 and the issuance of agency-wide policy on the use of partner-country systems in 2011. According to the OIG review, the mission developed and refined its own procedures for implementing G2G assistance and documented these procedures in mission orders. These mission orders incorporate lessons learned by the mission while planning and implementing its G2G activities and reflect the evolution of procedures during this period. As a result, OIG found that several mission orders related to G2G activities conflicted with agency-wide policy. For example, OIG found that while the mission order required a risk assessment, it did not include a requirement for a democracy, human rights, and governance review as part of that assessment, as specified in the agency-wide policy. OIG did not specify whether the mission had neglected to conduct this governance review as a result of the conflicting policies. OIG also found examples of instances in which the mission was not following its own mission orders, primarily concerning the lack of clarity over the designation of roles and responsibilities. According to OIG, USAID Pakistan launched the ASP in 2010, in part to increase the capacity of Pakistani organizations, including the Pakistani government, to manage U.S. funds, but OIG found shortcomings in the mission’s oversight of this program. Furthermore, according to OIG, the agreement between USAID Pakistan and ASP implementers calls for annual validations to ensure ongoing compliance with the standards and procedures developed under the institutional capacity-building program and to establish benchmarks to allow government implementers to reach a point where annual validations are no longer necessary.
According to OIG, the mission did not validate whether the training had improved the internal systems of these entities or increased ministry staff members’ ability to implement projects. According to mission officials, the mission did not conduct any validations because of changing policy from USAID headquarters. Two versions of USAID policy on the use of partner-country systems appeared over the course of 8 months, with a third revision pending at the time of the review. Mission officials said they had put off validations and reassessments so that they could form a Partner Government System team that met requirements outlined in the USAID agency-wide policy. The team would then help determine which government implementing entities should be part of the validation process, and which should be scheduled for reassessments. As a result of this delay, the mission did not establish the Partner Government System team until 3 years after ASP began. OIG recommended that the mission take the appropriate steps to ensure that it can validate the implementing partners’ capacity-building activities with the partner-government entities. The mission agreed with the OIG recommendations and has taken some steps to address these issues. In a separate review of the ASP, OIG also found that (1) the program had not met first-year targets and had not completed all preaward assessments and capacity-building programs planned and (2) program planning was insufficient because the mission had not developed the results framework—an outline of the mission’s goals, objectives, expected results, and performance indicators—or the preliminary performance management plan—a tool for planning and managing the process of assessing and reporting progress toward achieving assistance objectives—until a year after the start of the program. The mission agreed with OIG’s recommendations and responded with steps it plans to take to address these issues.
In addition to the contact named above, Jim Michels (Assistant Director), Todd M. Anderson, Martin De Alteriis, Jesse Elrod, W. Stephen Lowrey, Grace Lui, Kim McGatlin, Shirley Min, and Nikole Solomon made key contributions to this report. Additional technical assistance was provided by Amanda Bartine, Tina Cheng, David Dayton, José Peña, and Cristina Ruggiero.
USAID's Local Solutions initiative, launched in 2010 as part of USAID Forward, seeks to reform how the agency administers development assistance and to increase funding implemented through partner-country systems, including partner governments. In fiscal years 2012 through 2014, average annual obligations to G2G activities were about $620 million. The Local Solutions initiative aims to strengthen local capacity and enhance country ownership and sustainability of development efforts. GAO was asked to review accountability under this initiative. GAO assessed the extent to which USAID policies and practices related to (1) planning, (2) implementing, and (3) monitoring and evaluating G2G assistance provide reasonable assurance that this assistance is used as intended. GAO analyzed key USAID policy documents; interviewed USAID officials; reviewed planning documents from 14 USAID missions; and conducted fieldwork in Nepal, Peru, and Tanzania. For each key phase of government-to-government (G2G) assistance activities under its Local Solutions initiative, the U.S. Agency for International Development (USAID) has policies that generally reflect federal accountability standards to help ensure funds are used as intended. However, GAO identified several steps in implementing these policies that could further strengthen accountability. Planning: This phase entails designing projects that link to USAID missions' country development strategies, assessing and mitigating risks, and preparing planning documents. GAO found that USAID missions completed detailed fiduciary risk assessments for G2G assistance activities when required but did not always include mitigation steps in planning documents, in part because risk assessments were often done after planning had been completed. Also, project monitoring and evaluation (M&E) plans often did not incorporate steps USAID and partner governments agreed to take to mitigate risks and build capacity. 
Implementation: In this phase, USAID implements G2G activities according to the terms and conditions established in assistance agreements with partner governments. USAID missions usually selected funding mechanisms in which USAID reimburses partner governments for costs related to completion of agreed-upon activities. In addition, consistent with USAID policy, missions employed assistance agreements and corresponding implementation letters to commit funds and set objectives, among other things. Monitoring and evaluation (M&E): This phase includes conducting audits of partner-government entities and assessing the results of G2G assistance activities. Annual audits GAO reviewed were often submitted late, which delays audit follow-up actions required by USAID policy and limits the audits' usefulness as a monitoring tool. In addition, project M&E plans GAO reviewed rarely included indicators or evaluation questions for assessing the degree to which G2G assistance activities are building capacity, increasing ownership, or ensuring sustainability—the three interrelated goals of the Local Solutions initiative. GAO recommends that USAID take steps to strengthen accountability for G2G assistance by, among other things, improving the timeliness of risk assessments; incorporating risk mitigation measures into M&E planning; improving on-time audit submission; and assessing the effects of G2G assistance on capacity, ownership, and sustainability. USAID agreed with all of GAO's recommendations and noted various actions it is taking to address them.
In fiscal year 2014, the Air Force recorded payroll obligations of over $23 billion for direct compensation of active duty military personnel, a force of approximately 324,000. This compensation, including, among other things, basic pay, allowances, and incentives, was distributed among the various pay categories, as shown in figure 1. These pay categories are described later in this report. DOD has been unable to prepare auditable department-wide financial statements as required by the Government Management Reform Act of 1994. The National Defense Authorization Act for Fiscal Year 2010 mandated that DOD develop and maintain a Financial Improvement and Audit Readiness (FIAR) plan. The plan is to include, among other things, the specific actions to be taken and costs associated with (1) correcting the financial management deficiencies that impair DOD’s ability to prepare timely, reliable, and complete financial management information and (2) ensuring that DOD’s financial statements are validated as ready for audit by September 30, 2017. Per the FIAR plan, “audit ready” means the department has strengthened internal controls and improved financial practices, processes, and systems so there is reasonable confidence that the information can undergo an audit by an independent auditor. In response to difficulties encountered in preparing for an audit of the Statement of Budgetary Resources, DOD reduced the scope of initial Statement of Budgetary Resources audits beginning in fiscal year 2015 to focus on current-year budget activity reported on a Schedule of Budgetary Activity. This is an interim step toward achieving an audit of multiple-year budget activity required for an audit of the Statement of Budgetary Resources. In July 2014, the Air Force indicated that its fiscal year 2015 Schedule of Budgetary Activity would be audit ready and would be prepared in all material respects in conformity with U.S. generally accepted accounting principles. 
DOD requires the services to follow the FIAR Guidance methodology in asserting audit readiness. The guidance requires complete documentation that is sufficient, relevant, and accurate. The FIAR methodology also requires that the assertion documentation provide evidence demonstrating that the reporting entity has designed and implemented an appropriate combination of control activities and supporting documentation to mitigate the risk of material misstatement and achieve the financial reporting objectives. The Air Force Personnel Center is responsible for establishing and maintaining military personnel accounts in the personnel system. Once a servicemember’s personnel file is created, it is electronically transferred into the payroll system where a pay account is created for that servicemember. The personnel and payroll systems electronically interface daily to record any changes. Air Force Financial Services Office staff are responsible for submitting changes not driven by personnel actions, such as those concerning housing allowances. Documentation supporting payroll changes for individual servicemembers generally is to be scanned into one of two document management systems: one for personnel actions and another for all other changes. The Defense Finance and Accounting Service (DFAS), a separate component of DOD, processes the Air Force’s active duty military payroll using its Defense Joint Military Pay System-Active Component. DFAS systems also process the associated disbursements and record the entries to the general ledger maintained for the Air Force. Based on our testing, we determined that the Air Force and DFAS had in place systems and processes designed to provide a complete universe of active duty military pay transactions for the period we tested. 
The Air Force’s documented processes included reconciling payroll information with accounting, financial reporting, and disbursement information, designed to provide reasonable assurance that payroll transactions are properly recorded in the Statement of Budgetary Resources or the Schedule of Budgetary Activity and are consistent with personnel records. The Air Force provided us with transaction data processed by its payroll system for each of the first 10 months of fiscal year 2014 (October 1, 2013, through July 31, 2014) and the related reconciliations. We performed walk-throughs of each type of reconciliation discussed below. Based on those procedures, we did not identify any concerns regarding Air Force processes and controls. However, our scope did not include all procedures that might be performed in a financial audit of the Statement of Budgetary Resources or the Schedule of Budgetary Activity. Reconciliation of payroll to personnel. We found that the Air Force performed a monthly reconciliation between the payroll system and personnel system and generally resolved mismatches within 45 days. To determine this, we reviewed documentation covering 1 month of the Air Force’s monthly reconciliation between the individual pay records in the payroll system and personnel system to gain an understanding of how the Air Force matches data elements. This reconciliation helps reasonably assure that payees are actual servicemembers and that the characteristics that drive their pay, such as years of service and rank, match what is recorded in the personnel system. For data mismatches between the individual payroll records and personnel records that cannot be immediately resolved, the Air Force creates a tracking record to help reasonably assure that they are researched and resolved within 45 days, in accordance with Air Force guidelines. 
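The matching-and-tracking process described above can be sketched roughly as follows. This is a hypothetical illustration: the record layout, the matched fields, and the function names are our assumptions for illustration, not the actual 16 data elements or systems the Air Force matches.

```python
from dataclasses import dataclass
from datetime import date


@dataclass
class TrackingRecord:
    """An open mismatch awaiting research and resolution."""
    member_id: str
    element: str
    opened: date


def reconcile(payroll, personnel, open_tracking, today):
    """Compare shared data elements record by record, open a tracking
    record for each mismatch found, and return any tracking record
    still open past the 45-day resolution window."""
    for member_id, pay_rec in payroll.items():
        pers_rec = personnel.get(member_id)
        if pers_rec is None:
            # Payee has no personnel record at all: flag immediately.
            open_tracking.append(TrackingRecord(member_id, "<no record>", today))
            continue
        for element, value in pay_rec.items():
            if pers_rec.get(element) != value:
                open_tracking.append(TrackingRecord(member_id, element, today))
    # Records older than 45 days stand in here for the aging statistics
    # the Air Force reports internally.
    return [t for t in open_tracking if (today - t.opened).days > 45]
```

In the actual process, mismatches that cannot be immediately resolved are researched under the 45-day guideline; the overdue list returned here simply models that aging check.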
We tested the Air Force’s process by reviewing documentation detailing the monthly mismatch and aging statistics for open tracking records internally reported by the Air Force and found that the Air Force researches the mismatches. Reconciliation of payroll system to accounting system. We reviewed each of DFAS’s fiscal year 2014 quarterly reconciliations between the payroll and accounting systems and found that they accurately imported data from the payroll system and compared them to disbursement data in the accounting system, helping to reasonably assure that the population of transactions was completely reported in the accounting system. Because the payroll system does not interface directly with the accounting system, DFAS developed the Military Pay Reconciliation Tool (Reconciliation Tool), a Microsoft Access application, to facilitate this reconciliation process. The Reconciliation Tool tracks payroll data from the payroll system to the disbursements recorded in the accounting system. The Reconciliation Tool also allows DFAS to compare an individual servicemember’s Leave and Earnings Statement with detailed data in the payroll system. We tested the reconciliation process by reviewing documentation provided by the Air Force that demonstrated how data are processed through the payroll system in the form of vouchers and subsequently reconciled to the accounting system. Reconciliation of military pay to Fund Balance with Treasury (FBWT). We reviewed the monthly reconciliation DFAS performs for the Air Force between the accounting system and Treasury’s government-wide accounting system. According to Air Force officials, this multi-tiered reconciliation helps to assure that the recorded military payroll transactions were actually disbursed through Treasury accounts. We reviewed key documentation with Air Force personnel to understand the nature, timing, and frequency of the reconciliations. 
The documentation showed that the Air Force was able to tie military payroll to its FBWT reconciliation. In addition, we traced and agreed payroll system totals for each of the 10 months to Treasury records. However, our scope did not include tests for all elements of the reconciliation, and we did not assess the accuracy of the recorded balance of FBWT. Reconciliation of accounting records to the Statement of Budgetary Resources. We found that DFAS was able to trace the recorded payroll to the Statement of Budgetary Resources. To determine this, we reviewed DFAS’s quarterly reconciliation between the accounting system and the financial reporting system. DFAS’s reconciliation between these systems helps to reasonably assure the accuracy of amounts reported in the Statement of Budgetary Resources. We reviewed each of DFAS’s fiscal year 2014 quarterly reconciliations of the accounting system and compared them to certain key line items of the Statement of Budgetary Resources—specifically, obligations incurred and gross outlays—a portion of each of which included active duty direct compensation. We were able to trace the amounts in the payroll accounting system to the Statement of Budgetary Resources and found that they had been properly recorded. We selected statistical samples of six categories of military pay and, based on the testing we performed, found that while the Air Force could provide adequate supporting documentation for three of the categories, it was not able to provide or readily provide documentation for certain transactions involving Special Pay, Overseas Housing, and Domestic Housing. For fiscal year 2014, these three categories of military pay represented nearly 30 percent of active duty direct compensation obligations. 
We also tested a statistically selected sample of change order transactions, which represent transactions from all pay categories that initiate the start, stop, or update of an entitlement, and found that the Air Force could not provide documentation for certain of these transactions as well. The DOD Financial Management Regulation requires the military components to maintain documentation supporting all transactions recorded in finance and accounting systems. Additionally, Standards for Internal Control in the Federal Government and DOD’s FIAR Guidance require audited entities to document transactions and events, and to reasonably assure that internal controls are effective to help ensure that supporting documentation is maintained and readily available for management and audit purposes. Without adequate documentation to support its military payroll transactions, both the Air Force and DOD are at risk that military personnel may not be paid appropriately and financial statement auditability goals may not be achieved. To test the Air Force’s ability to provide supporting documentation, we selected several random statistical samples—a total of 360 active duty military payroll transactions—from transactions recorded during the first 10 months of fiscal year 2014 (October 1, 2013, through July 31, 2014), except for Domestic Housing, for which a single month of transactions was selected. This included a sample of 90 change actions, which represent changes to a servicemember’s status or entitlements that could affect any of the pay categories. We drew separate random statistical samples of 45 transactions from each of the categories of military pay listed below. These six categories, which are defined below, accounted for approximately 94 percent of the Air Force’s fiscal year 2014 total active duty direct compensation obligations. Basic Pay is the primary form of pay for most servicemembers and is based on rank and years of service. 
In fiscal year 2014, it accounted for over 60 percent of active duty direct compensation obligations. Examples of supporting documentation for Basic Pay include enlistment agreements, promotion orders, and Statements of Service. Domestic Housing is a U.S.-based allowance that enables military personnel to obtain adequate civilian housing for themselves and their dependents when government quarters are not available. It accounted for about 20 percent of active duty direct compensation obligations in fiscal year 2014. As previously noted, our sampling universe for Domestic Housing was limited to the month of July 2014. The primary supporting document for Domestic Housing is Air Force Form 594 (Application and Authorization to Start, Stop, or Change Basic Allowance for Quarters (BAQ) or Dependency Redetermination). Overseas Housing, including overseas cost-of-living allowances, accounted for approximately 6 percent of active duty direct compensation obligations in fiscal year 2014. Servicemembers stationed outside the continental United States who are not furnished government housing receive an overseas housing allowance. In general, the amount depends on a servicemember’s location, pay grade, and dependency status. Special Pay includes enlistment, reenlistment, and retention bonuses; special pay and bonuses for critical skills, such as for certain medical officers; and special pay for duty subject to hostile fire or imminent danger. In fiscal year 2014, Special Pay accounted for approximately 4 percent of active duty direct compensation obligations. Examples of Special Pay supporting documents include job classification/on-the-job training forms, service contracts for medical officers, and certain travel orders and vouchers. Incentive Pay includes additional pay or bonuses for certain critical skills, such as career aviators and aircrew members; foreign language proficiency; and certain types of hazardous duty. 
In fiscal year 2014, it accounted for approximately 1 percent of active duty direct compensation obligations. Examples of supporting documentation for Incentive Pay include aeronautical orders and certificates of proficiency in strategic foreign languages. Other Pay includes domestic cost-of-living allowances, family separation allowances, and uniform and clothing allowances, each of which requires specific documentation. In fiscal year 2014, Other Pay accounted for about 1 percent of active duty direct compensation. We tested random samples of transactions to determine whether the Air Force could provide documentation to support the population of individual payroll transactions. We counted as errors all instances in which a necessary document could not be found, could not be readily provided, or the documents provided were incomplete. For all sampled items, we looked for sufficient and appropriate evidence of the servicemember’s eligibility for the pay and the amount of the pay. Our scope of work did not include observation of Air Force personnel’s document-handling procedures, identification or testing of internal controls, or examination of the Air Force’s document management systems. Rather, we tested whether a specific control objective—maintaining and readily providing supporting documentation—was being met. Maintaining and readily retrieving supporting documentation requires a complex series of manual and system processes, the combination of which can contribute to internal control deficiencies and process issues that may result in missing, misplaced, or incomplete documentation. Specifically, supporting documentation often originates in paper form and may be submitted at any of over 100 Air Force locations worldwide. 
Further, documentation may require approval from officials employed at another location; generally requires manual processing (such as faxing or scanning); and is ultimately stored in electronic form in one of two Air Force document management systems, one of which some Air Force personnel describe as not user friendly. The Air Force often cannot determine at what point in the document-handling and management process a breakdown occurred, resulting in missing or incomplete documentation. While the Air Force was able to provide supporting documentation for three categories of active duty military pay transactions that we tested—Basic Pay, Incentive Pay, and Other Pay—there were three categories—Special Pay, Overseas Housing, and Domestic Housing—for which the Air Force was not always able or readily able to provide supporting documentation. (See fig. 2.) We found that the Air Force could not readily provide supporting documentation for 5 transactions from our statistical sample of 45 special pay transactions. This exceeded our tolerable error rate of 1 in 45. Air Force officials did not identify to us a cause for their delay in retrieving and providing supporting documentation. We found that the Air Force was not able or readily able to provide adequate support for 4 transactions from our statistical sample of 45 transactions for overseas housing benefits. This exceeded our tolerable error rate of 1 in 45. Documentation for those transactions was either not provided, not readily provided, or not signed by a certifying official. In one case, the Air Force determined that the supporting document was more than 6 years old, meaning that it would have been destroyed in accordance with Air Force policy. Air Force officials stated that they could not determine the cause for the missing documentation, nor did they identify to us a cause for their delay in retrieving and providing some of the supporting documentation. 
In 2014, AFAA reported that the Air Force could not adequately support its domestic housing transactions. To address the finding, the Air Force ordered all active duty military personnel claiming dependents to recertify their eligibility in writing. To determine the effectiveness of this recertification and to determine whether it helped improve documentation supporting domestic housing transactions, we selected a statistical sample of 45 domestic housing transactions from the July 2014 active duty military payroll. Although we found no errors involving servicemembers claiming dependents, the Air Force was not able or readily able to provide supporting documentation for 4 transactions involving servicemembers not claiming dependents. This exceeded our tolerable error rate of 1 in 45. Air Force officials stated that they could not determine the cause for the missing documentation, nor did they identify to us a cause for their delay in retrieving and providing the supporting documentation. To determine whether the Air Force could provide supporting documentation filed during fiscal year 2014, we selected a statistical sample of 90 changes to active duty military payroll (hereinafter referred to as change actions) from the first 10 months of fiscal year 2014. While supporting documentation for the regular pay transactions discussed above could be years old, support for change actions recorded during the period we tested would have been submitted during or shortly before fiscal year 2014—more closely reflecting the current control environment. We found that the Air Force was not able to provide supporting documentation for 7 of the 90 change actions we sampled. This exceeded our tolerable error rate of 4 in 90. Specifically, we found that the Air Force was not able to provide supporting documentation for 5 recent change actions related to overseas housing and 2 related to basic pay. 
Air Force officials stated that they could not determine the cause for the missing documentation. While the Air Force, for the period we reviewed, had systems and processes designed to provide a complete universe of active duty military pay transactions, it could not always provide or readily provide supporting documentation for certain transactions in three of the categories of pay transactions nor could it generally determine the cause of missing, delayed, or incomplete documents. The ability to maintain and readily provide documentation supporting all pay categories is one of many key financial controls that help to provide reasonable assurance that servicemembers are appropriately paid and that military payroll is properly reported in the Air Force’s financial statements. Additionally, maintaining and readily providing supporting documentation to substantiate financial transactions, often referred to as an audit trail, is a critical requirement to achieving audit success. Without continuing focus on ensuring that documentation is readily available to support military payroll transactions, both the Air Force and DOD are at risk that military personnel may not be paid appropriately and financial statement auditability goals may not be achieved. To help reasonably assure that supporting documentation for military payroll, particularly for Special Pay, Overseas Housing, and Domestic Housing, is properly maintained and readily available for management and audit purposes, we recommend that the Secretary of the Air Force direct the Deputy Assistant Secretary of the Air Force for Financial Management and Comptroller to take the following two actions: Review the processes and systems used to obtain, maintain, and readily provide supporting pay documentation to identify the root cause of deficiencies responsible for delayed, missing, and incomplete documentation. 
Develop and implement internal controls to provide reasonable assurance that adequate supporting documentation is maintained and readily available for examination for management and audit purposes and to support that correct amounts are being paid for military payroll transactions. We provided a draft of this report to the Air Force for review and comment. In its written comments, reprinted in appendix II, the Air Force concurred with our recommendations and stated that it had identified and subsequently implemented nine corrective actions to help reasonably assure that supporting documentation for military payroll is maintained and available. In November 2015—10 months following our initial request—the Air Force provided adequate supporting documentation for 10 of the 21 transaction exceptions identified in this report. While some documentation was ultimately found and provided at the close of the audit, Air Force officials agreed it was not provided timely and concurred with our judgment in counting them as errors since the documentation was not readily provided, which would be necessary under a full-scope financial statement audit in order to be considered by the auditors. As auditors must complete their audits and federal agencies must submit their audited financial statements under statutorily established deadlines, agencies must be able to provide requested documentation to the auditors in a timely manner in order for it to be considered in the test results. Specifically, the FIAR guidance states that each service should ensure that it is prepared to respond to requests for audit documentation within 5 business days. For this audit, an Air Force official committed to provide audit documentation generally within 14 business days of the request in order to have sufficient time to redact personally identifiable information and to retrieve documents that were not centrally located. 
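Response windows like these are counted in business days. As a simple illustration of the arithmetic (ignoring federal holidays, and using hypothetical dates), a due date n business days out can be computed by stepping forward and skipping weekends:

```python
from datetime import date, timedelta


def add_business_days(start, n):
    """Return the date n business days (Mon-Fri) after start.
    Federal holidays are ignored in this simplified sketch."""
    d = start
    while n > 0:
        d += timedelta(days=1)
        if d.weekday() < 5:  # Monday=0 ... Friday=4
            n -= 1
    return d


# A request received on Friday, October 3, 2014 (a hypothetical date):
five_day_due = add_business_days(date(2014, 10, 3), 5)      # the following Friday
fourteen_day_due = add_business_days(date(2014, 10, 3), 14)
```

A real deadline calculator would also consult a federal holiday calendar; the weekend-only rule here is just enough to show why a 14-business-day window spans nearly three calendar weeks.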
We have made technical changes and clarified the recommendations as appropriate to reflect that the documentation was either not provided or not readily provided. However, without continuing focus on ensuring that documentation is readily available to support military payroll transactions, both the Air Force and DOD are at risk that military personnel may not be paid appropriately and financial statement auditability goals may not be achieved. We are sending copies of this report to the Secretary of Defense; the Under Secretary of Defense (Comptroller)/Chief Financial Officer; the Deputy Chief Financial Officer; the Director, Financial Improvement and Audit Readiness; the Secretary of the Air Force; the Assistant Secretary of the Air Force Finance Command; the Directors of the Defense Finance and Accounting Service and the Defense Finance and Accounting Service-Indianapolis Center; the Director of the Office of Management and Budget; and interested congressional committees. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (404) 679-1873 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff members who made key contributions to this report are listed in appendix III. Our objectives were to determine whether the Air Force, for the period we tested, (1) had in place systems and processes designed to provide a complete universe of active duty military pay transactions prepared through its central payroll processing system and (2) could provide adequate documentation to support individual military payroll transactions. 
To address our first objective, we requested and received all transaction data processed by the Defense Joint Military Pay System-Active Component (DJMS-AC) for each of the first 10 months of fiscal year 2014 (October 1, 2013, through July 31, 2014), and we examined four reconciliations: For the first reconciliation (payroll to personnel), we reviewed documentation describing the Air Force’s monthly reconciliation of 16 data elements common to the servicemembers’ records in the payroll and personnel systems, and examined 1 month of reconciliation reports. We also verified that the Air Force tracks and resolves mismatches by reviewing internally reported monthly mismatch and aging statistics for open tracking records and by interviewing Air Force officials. For the second reconciliation (payroll to accounting), we reviewed each of the Defense Finance and Accounting Service’s (DFAS) fiscal year 2014 quarterly reconciliations between the payroll and accounting systems, which it accomplishes with the help of the Military Pay Reconciliation Tool, a Microsoft Access application that tracks payroll data from DJMS-AC to disbursements recorded in the accounting system. We also interviewed DFAS and Air Force officials and reviewed documentation describing how data are processed through the payroll system in the form of vouchers and subsequently reconciled to the accounting system. For the third reconciliation (military pay to Fund Balance with Treasury (FBWT)), we reviewed the monthly reconciliation DFAS performs for the Air Force between the accounting system and the Department of the Treasury’s (Treasury) government-wide accounting system. According to Air Force officials, this multi-tiered reconciliation helps to assure that the recorded military payroll transactions were actually disbursed through Treasury records. We reviewed key documentation with Air Force personnel to understand the nature, timing, and frequency of the reconciliations. 
The documentation showed that the Air Force was able to tie military payroll to its FBWT reconciliation. In addition, we traced and verified payroll system totals for each of the 10 months to Treasury records. For the fourth reconciliation (accounting records to the Statement of Budgetary Resources), we reviewed each of DFAS’s fiscal year 2014 quarterly reconciliations between the accounting system and the financial reporting system for two key line items of the Statement of Budgetary Resources—obligations incurred and gross outlays—a portion of each of which included active duty direct compensation. We also interviewed DFAS and Air Force officials. To address our second objective, we selected random samples of pay transactions from our universe to determine whether the Air Force could provide supporting documentation for individual active duty pay transactions and to determine the effectiveness of its corrective actions to address previously reported deficiencies with domestic housing documentation. We performed sampling tests of seven populations, including only active duty pay (appropriation symbol 3500) in our sampling universes. We excluded from our universes pay types that do not represent amounts paid to servicemembers, such as retired pay accruals or the employer’s share of Social Security. We first segregated regular pay transactions from change actions (those pay transactions initiating a stop, start, or update of pay). For regular pay, we drew random statistical samples of 45 transactions from each of the following pay category populations: Basic Pay; Special Pay; Incentive Pay; Overseas Housing; Domestic Housing; and Other (allowances for cost of living, family separation, and uniforms and clothing). For change actions, we drew one random statistical sample of 90 transactions from a population representing all the pay categories listed above. 
To determine whether documentation supporting domestic housing transactions was complete and had been improved as a result of the Air Force’s corrective action, we requested and reviewed the supporting documentation for a randomly selected statistical sample of 45 domestic housing transactions for the July 2014 active duty military payroll. The Air Force’s corrective action—taken in response to deficiencies that the Air Force Audit Agency identified—was to require recertification of domestic housing benefits for servicemembers with dependents. Our sampling plan was based on a 90 percent confidence interval and a tolerable error rate of 10 percent. This sampling plan allowed us to determine that for a randomly selected statistical sample size of 45, our tolerable error rate would be 1 or fewer transactions with errors and for a sample size of 90, the tolerable error would be 4 or fewer transactions with errors. We examined the documentation provided by the Air Force and determined its sufficiency and appropriateness with regard to the data necessary to establish eligibility and calculate pay. Our primary sources of criteria were the Department of Defense’s (DOD) Financial Management Regulation, Air Force instructions, federal travel regulations, and official pay rate tables. Our sampling tests were not designed to project a dollar impact of any errors found. We counted as errors all instances in which a necessary document could not be found, was not readily provided or was incomplete. For all sampled items, we looked for sufficient and appropriate evidence of the servicemember’s eligibility for the pay and the amount of the pay. For basic pay sample items and for change actions based on a change in grade, we tested support for rank and years of service. For other pay types that depend in part on these factors, we tested only the support for the other factors involved, such as duty location, number of dependents, or specialty code. 
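The sampling plan above follows standard attribute-sampling arithmetic: for a given sample size, the acceptance number is the largest error count that still supports concluding, at 90 percent confidence, that the population error rate is below the 10 percent tolerable rate. A minimal sketch of that arithmetic using the binomial distribution (our own illustration, not GAO's sampling methodology or software):

```python
# Attribute-sampling sketch: find the acceptance number (maximum tolerable
# error count) for a given sample size, 10 percent tolerable error rate, and
# 90 percent one-sided confidence.
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def acceptance_number(n, tolerable_rate=0.10, confidence=0.90):
    """Largest error count k such that observing k or fewer errors in a sample
    of n still supports concluding, at the stated confidence level, that the
    population error rate is below the tolerable rate."""
    k = 0
    while binom_cdf(k + 1, n, tolerable_rate) <= 1 - confidence:
        k += 1
    return k

print(acceptance_number(45))  # 1: up to 1 error tolerated in a sample of 45
print(acceptance_number(90))  # 4: up to 4 errors tolerated in a sample of 90
```

With these parameters, a sample of 45 tolerates 1 error and a sample of 90 tolerates 4, matching the thresholds described above.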
Our scope of work did not include observation of Air Force personnel’s document-handling procedures, identification or testing of internal controls, or examination of the Air Force’s document management systems. Rather, we tested whether a specific control objective— maintaining and readily providing supporting documentation—was being met. We relied on the Air Force to retrieve the supporting documents and to determine, as far as possible, the causes of missing or incomplete documents. We conducted this performance audit from July 2014 to December 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact above, key contributors to this report were Paul Kinney (Assistant Director), Russell Brown, Jason Kelly, Sharon Kittrell, James Ungvarsky, and Matthew Ward.
|
As part of DOD's efforts to achieve auditability of its financial statements, the Air Force in July 2014 asserted audit readiness for its Schedule of Budgetary Activity, of which military payroll is a significant part. In fiscal year 2014, the Air Force obligated over $23 billion for direct compensation of active duty servicemembers. GAO was asked to examine Air Force military payroll systems and processes. This report evaluates whether for the period October 1, 2013, through July 31, 2014, the Air Force (1) had in place systems and processes designed to provide a complete universe of active duty military pay transactions and (2) could provide adequate documentation to support individual military payroll transactions. GAO examined the Air Force's reconciliations of its payroll system to personnel, accounting, reporting, and disbursement information. GAO selected random statistical samples from various categories of military pay to determine if the Air Force could provide key documents supporting those transactions. Based on GAO's testing of the first 10 months of fiscal year 2014, the Air Force and its service provider, the Defense Finance and Accounting Service, had in place systems and processes designed to provide a complete universe of Air Force active duty military payroll transactions. In addition to reconciling payroll information with accounting, financial reporting, and disbursement information, the Air Force reconciles its payroll system with its personnel system each month to help reasonably assure that actual servicemembers are being paid and are paid correctly. GAO found that the reconciliations were documented and supported the Air Force's ability to provide a population of active duty military payroll transactions for the period tested. GAO's testing of randomly selected active duty payroll transactions, for the period tested, found that the Air Force could not always provide or readily provide supporting documentation for some categories of pay. 
While the Air Force provided adequate support for three categories of military payroll—including basic pay, the largest category by dollar amount—it could not always provide or readily provide support for domestic and overseas housing transactions or special pay benefits. The Air Force stated that it could not determine the cause for the missing documentation, nor did it identify to us the cause of delays in retrieving and providing the supporting documentation. Documentation supporting payroll transactions is one of many key financial controls that help to provide reasonable assurance that servicemembers are appropriately paid. Department of Defense (DOD) regulations and guidance, as well as Standards for Internal Control in the Federal Government, require that audited entities document transactions and events, and that this documentation be readily available for examination for management and audit purposes. Without continued focus on ensuring that documentation is readily available to support military payroll, both the Air Force and DOD are at risk that military personnel may not be paid appropriately and that financial statement auditability goals may not be achieved. GAO recommends that the Air Force improve its ability to maintain and readily provide, for management and audit purposes, documentation to support certain types of active duty pay by (1) identifying the causes of the deficiencies responsible for delayed, missing, and incomplete documentation and (2) implementing controls to help assure that documentation is readily available. The Air Force agreed with the report and its recommendations.
|
On June 20, 1997, the nation’s largest tobacco companies and attorneys general representing 40 states proposed a national settlement that, if implemented, would significantly change the way tobacco products are manufactured, marketed, and distributed in the United States. The tobacco industry agreed to pay about $368.5 billion (in 1997 dollars) over a period of 25 years—subject to consumption or volume adjustment. Annual payments would range from $8.5 billion to $15 billion. Among other things, these payments would be used to fund an extensive federal enforcement program, including a state-administered retail licensing system to stop minors from obtaining tobacco products; an annual national counter-advertising and tobacco-control campaign; a nationwide smoking cessation program; and expenditures for states’ health benefits programs. The proposed settlement would also impose substantial surcharges on the tobacco industry if underage tobacco use does not decline by at least 30 percent in 5 years, 50 percent in 7 years, and 60 percent in 10 years.
In addition, the settlement clarifies FDA’s authority to regulate tobacco products—including regulating the level of nicotine in cigarettes—under the Food, Drug, and Cosmetic Act and requires the tobacco industry to pay for FDA’s oversight of the industry; bans all outdoor tobacco advertising and the use of cartoon characters and human figures, such as the Marlboro Man, in tobacco advertising; requires manufacturers to disclose internal research relating to the health effects of their products; establishes a minimum federal standard to restrict smoking in public places, with enforcement funding coming from the industry’s payments; settles all punitive damages claims against the tobacco industry and places limits on future class-action suits against the industry; provides for the annual payments to be reflected in the prices that manufacturers charge for tobacco products; and treats all payments as ordinary and necessary business expenses, which makes them tax-deductible. Many of the elements of this proposed national tobacco settlement were included in a bill introduced by the Chairman, Senate Committee on Commerce, Science, and Transportation, on November 7, 1997. Over the past several months, other Members of Congress have presented their own alternative settlement proposals. Comparing the provisions of all these various proposals is beyond the scope of this report, but we note that a common goal of many of the proposals we reviewed is to reduce smoking by youth, in part through a large increase in the price of a pack of cigarettes—ranging from 62 cents to about $2.00 per pack. 
On September 17, 1997, the President announced five key goals that he believes should be included in any national tobacco settlement legislation: (1) reducing smoking by teens, through, among other things, a combination of industry payments and penalties; (2) granting FDA full authority to regulate the manufacturing, marketing, and sale of tobacco products; (3) changing the way the tobacco industry does business, such as through restricting the marketing and promoting of tobacco to children, and requiring the industry to disclose scientific and health-related research; (4) addressing other related public health goals, such as promoting smoking cessation, researching the health consequences of smoking, and further restricting smoking in the workplace and in public areas; and (5) minimizing the impact of a national settlement on tobacco farmers and their communities. While the Congress has been deliberating the issue of a tobacco settlement, individual state lawsuits continue to move through the legal process, and to date, three have been settled, resulting in settlements totaling at least $28.8 billion. If there is a national tobacco settlement, some terms of these individual state settlements may be superseded. From 353,000 to 555,000 U.S. jobs are directly related to the growing, warehousing, manufacturing, wholesaling, and retailing of tobacco products, according to the studies we reviewed that examined the national and regional economic impacts of the tobacco industry. These studies do not specifically address the potential economic impacts of either a national tobacco settlement or the absence of a national settlement. However, two studies—Warner’s and USDA’s—examined the potential net impact on U.S. employment if tobacco consumption were to decline, which is likely in the event of a national settlement. These two studies concluded that overall, the negative impact on U.S.
employment would be offset by ex-smokers’ spending the money they previously spent on tobacco products on other, potentially more labor-intensive goods and services. However, the new jobs related to these other goods and services might be lower-paying, on average, than the tobacco-related jobs they replaced. The Southeast Tobacco Region, where tobacco production is most heavily concentrated, would likely experience job losses. The studies that we reviewed divided estimates of total tobacco-related employment into three categories—the core sector, supplier sector, and expenditure-induced sector. (See table 1.) The estimates for total core-sector employment (or direct employment) range from 353,000 to 555,000. These jobs include, for example, ones associated with the growing, warehousing, manufacturing, wholesaling, and retailing of tobacco products. The estimates of total supplier-sector employment—indirect tobacco-related jobs associated with the producers of farm chemicals, paper, cellophane, and others that supply materials and services to the tobacco core sector—range from about 149,000 to about 213,000 employees. In addition, the income earned by tobacco growers, manufacturers, suppliers, and others is spent on a variety of consumer goods and services that generate additional revenue for a wide range of industries throughout the U.S. economy. The estimates of employment associated with this expenditure-induced sector range from about 504,000 to about 2.3 million jobs. The combined estimates of tobacco-related employment associated with the supplier and expenditure-induced sectors range from about 653,000 to over 2.3 million jobs. Overall, according to the studies we reviewed, tobacco-related employment totals from about 1.2 million to 3.1 million jobs nationwide. (For more detailed information on tobacco-related employment by industry, see app. I, tables I.1, I.2, and I.3.) 
The estimates shown in table 1 indicate the total number of jobs (or gross impact) associated with the production and sales of U.S. tobacco products. However, if tobacco consumption were to fall, according to Warner’s and USDA’s studies, the money previously spent on tobacco products would be spent on other consumer goods and services, and, therefore, employment in other sectors would rise, offsetting part or all of the employment decline in the tobacco industry. Warner estimated the impact of decreasing tobacco consumption under two scenarios. (See table 2.) The first scenario assumes that all domestic spending on tobacco stops immediately (which is unlikely in the absence of an outright tobacco ban), while the second scenario assumes that annual domestic spending on tobacco decreases at twice the current rate, or at about 4 percent. This study also assumes that money previously spent on tobacco products would be reallocated to all goods and services in the U.S. economy in the same proportions as these goods and services currently contribute to the gross domestic product. As table 2 shows, under the first scenario, the U.S. economy would gain 133,000 jobs nationwide over a 7-year period and about 20,000 jobs under the second scenario. Job losses in the retail and wholesale trade, farm, manufacturing, and government sectors would be more than made up by job gains in the services and other private industry sectors. Warner explained the reason for this net gain by suggesting that most of the industries that produce the products that would replace tobacco are more labor intensive than the tobacco industry. However, because 91 percent of tobacco farming and manufacturing jobs are located in the Southeast Tobacco Region, this region could suffer net job losses in all sectors of the economy. These total losses are likely to be less than 1 percent of the region’s total employment. (For more detailed information see app. I, tables I.4 and I.5.) 
In 1995, USDA also estimated the net impact of immediately ending domestic tobacco consumption. Under a scenario that USDA examined, the money currently spent on tobacco products would be reallocated to snack food and beverage products, and the U.S. economy would gain 156,000 jobs nationwide—a result similar to that of Warner’s scenario described above. USDA’s study concluded that jobs in tobacco farming and tobacco manufacturing would be reduced considerably. However, the industries that produce the products that would replace tobacco are more labor-intensive, although on average lower paying, and the jobs they would add would more than offset the losses resulting from reductions in tobacco consumption. The study also concluded that the general result does not change—that is, an overall gain in jobs nationwide—regardless of how tobacco expenditures are assumed to be reallocated. Although numerical results were not presented specifically for the Southeast Tobacco Region, USDA noted that Kentucky, North Carolina, Tennessee, and Virginia would lose tobacco farming and manufacturing jobs. According to a 1997 University of Michigan survey, in 1977, about 29 percent of 12th-graders smoked daily. This level decreased to about 17 percent by 1992 and then rose to about 25 percent in 1997. The recent increase in smoking probably occurred because the upward trend in real cigarette prices ceased. The most recent estimates indicate that an increase in the price of cigarettes leads to a drop in the smoking rate for youths. The data available on smoking behavior for Canadian youths indicate a trend that is consistent with that shown for the United States. Studies indicate that increases in the real price of a pack of cigarettes contribute to decreases in the percentage of U.S. youths who smoke daily. According to the University of Michigan survey, from 1977 through 1992, the percentage of U.S.
12th-graders smoking daily generally declined from about 29 percent in 1977 to about 17 percent in 1992. (See fig. 1.) However, since 1992, the smoking rate has risen significantly, to about 25 percent in 1997. After initially decreasing from about $1.44 per pack of cigarettes in 1977 to about $1.23 per pack in 1981, the real price of a pack of cigarettes (in 1997 U.S. dollars) rose steadily, to a high of about $2.10 in 1992, before falling to about $1.95 in 1997. These two trends show an inverse relationship between the real price of a pack of cigarettes and the smoking rate for youths. The most recent estimates indicate that this inverse relationship remains strong, even when antismoking regulations and restrictions on youths’ access are included in the analysis. According to these estimates, a real price increase of 10 percent will cause a 4- to 9-percent decrease in the percentage of youths who smoke. Data describing the smoking behavior of youths in Canada are incomplete. (See fig. II.1 in app. II for available data.) The data that are available, which we obtained from Canada’s National Clearinghouse on Tobacco and Health, suggest that the percentage of Canadians aged 15 to 19 who smoke daily fell sharply, from about 42 percent in 1977 to about 16 percent in 1991. Since then, however, the data suggest that smoking by youths is on the rise—reaching the rate of about 20 percent in 1994. From the data available, it is impossible to conclude with any certainty the reason for the higher 1994 rate. However, we believe two likely factors are (1) a possible reaction to the federal and provincial cigarette tax decreases enacted earlier that year, which suggests a relationship between smoking by youths and the price of cigarettes consistent with that observed for U.S. youths, and (2) Canadian youths’ increasing access to contraband cigarettes. National tobacco settlement legislation would likely result in a decline in state revenues from cigarette excise taxes. 
A settlement would probably contain a provision to increase the real price of cigarettes, with the goal of reducing the smoking rate. Already, several legislative proposals would increase real cigarette prices. According to our analysis, if such price increases were to take effect, they alone could cause cigarette consumption to fall substantially. As a result, states, collectively, could lose billions of dollars annually in associated revenues from cigarette excise taxes; however, individual states, on average, would lose less than 1 percent of their total tax revenues. Several current tobacco settlement proposals contain provisions to increase the price of cigarettes. It has been estimated that the original $368.5 billion settlement proposal between the tobacco industry and 40 state attorneys general is likely to be passed on to consumers, resulting in a price increase of about 62 cents per pack. The President’s plan calls for an increase of up to $1.50 per pack over a 10-year period. Table 3 presents a range of estimated changes in consumption that could result from these price increases. A price increase of $2.00 per pack is included in table 3 to illustrate a reasonable upper bound on the likely impact on consumption, because one tobacco bill we reviewed, although not a comprehensive settlement proposal, proposed a price increase higher than the one proposed by the President. As table 3 shows, an increase of 62 cents per pack in the real price of cigarettes could result in a 9- to 16-percent decline in cigarette consumption, depending on how consumers react to these increases. The President’s $1.50 per pack increase could result in an even greater decline in cigarette consumption—from about 19 to 33 percent. Overall, table 3 shows that the price increases included in current proposals could reduce consumption from about 9 to 40 percent.
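Consumption estimates of this kind can be approximated with a constant-elasticity demand curve applied to the report's $1.95 baseline price. A sketch follows; the elasticity range of -0.3 to -0.7 is our assumption, chosen to bracket estimates in the economics literature, and is not the report's exact methodology:

```python
# Constant-elasticity approximation of the decline in cigarette consumption
# from a price increase. The $1.95 baseline price (1997 dollars) is from the
# report; the elasticity range is an illustrative assumption.

BASELINE_PRICE = 1.95  # dollars per pack, 1997

def consumption_decline(price_increase, elasticity):
    """Percent decline in consumption under constant-elasticity demand."""
    ratio = (BASELINE_PRICE + price_increase) / BASELINE_PRICE
    return (1 - ratio ** elasticity) * 100

for increase in (0.62, 1.50, 2.00):  # price increases from the proposals reviewed
    low = consumption_decline(increase, -0.3)
    high = consumption_decline(increase, -0.7)
    print(f"+${increase:.2f} per pack: roughly {low:.0f} to {high:.0f} percent decline")
```

Under these assumed elasticities, the 62-cent increase yields a decline of roughly 8 to 18 percent and the $2.00 increase roughly 19 to 39 percent, broadly consistent with the 9- to 40-percent range discussed above.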
Nationwide, the increase in the real price of cigarettes resulting from various tobacco settlement proposals could end up costing the states from about $673 million to $3 billion annually in lost revenues from cigarette excise taxes. (See table III.1 in app. III. Table III.1 provides a range of the estimated lost revenues from cigarette excise taxes for each state.) The states that stand to lose the most tax revenues are those with large populations of smokers and/or the highest state rates for cigarette excise taxes. For example, Michigan, which has one of the highest rates, could lose from about $50 million to $220 million in annual tax revenues. On the other hand, Ohio, which has approximately the same population of smokers as Michigan, or more, would stand to lose from $26 million to $117 million because of its much lower tax rate. Overall, all but one state would lose less than 2 percent of their total tax revenues from all sources (see app. III, table III.1); on average, states would lose less than 1 percent of their total tax revenues. Smuggling cigarettes from low- to high-tax states, or interstate smuggling, which was prominent in the 1970s, may now be a reemerging problem. The opportunity for individuals to profit from interstate smuggling exists because of the wide disparity in excise taxes across the states. Currently, the states’ cigarette excise taxes range from 2.5 cents per pack in Virginia to $1.00 per pack in Alaska. (See fig. 2.) According to the Department of the Treasury’s Bureau of Alcohol, Tobacco, and Firearms (ATF), cigarettes are currently being smuggled across state borders to avoid the payment of state excise taxes. This activity can violate federal and/or state laws. A January 1997 study by the Washington State Department of Health estimated the extent of interstate smuggling activity in terms of packs per capita by state—which we converted to the associated loss (or gain) of state tax revenue. (See app. IV, table IV.1.)
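The state-by-state revenue estimates described above follow from simple scaling: each state's annual loss is its current cigarette excise collections multiplied by the projected consumption decline, and that loss can then be expressed as a share of total state tax revenue. A sketch with invented revenue figures; only the 9- and 40-percent decline bounds come from the report:

```python
# Sketch of the excise tax arithmetic: annual revenue loss equals a state's
# current cigarette excise collections times the projected consumption
# decline. The revenue figures below are hypothetical illustrations.

def projected_loss(cig_revenue_millions, decline):
    """Annual excise revenue lost (in millions) if consumption falls by `decline`."""
    return cig_revenue_millions * decline

states = {
    # state: (annual cigarette excise revenue, total tax revenue), $ millions
    "State A": (550, 20_000),   # high excise rate, large smoker population
    "State B": (290, 22_000),   # similar smoker population, lower rate
}

for decline in (0.09, 0.40):  # consumption decline bounds from the report
    for state, (cig_rev, total_rev) in states.items():
        loss = projected_loss(cig_rev, decline)
        print(f"{state}: ${loss:.0f}M lost at a {decline:.0%} decline "
              f"({loss / total_rev:.2%} of total tax revenue)")
```

The same arithmetic explains why a high-rate state loses far more than a state with a similar smoking population but a lower rate.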
According to our analysis of these data, some states are losing up to about $100 million annually in potential tax revenues. As expected, the Washington State study indicated that substantial smuggling occurs from states with low tax rates to states with high tax rates. For example, Washington and Michigan, states with among the highest tax rates, had estimated annual losses in tax revenues of $51 million and $105 million, respectively. On the other hand, exporting states—such as Kentucky, North Carolina, and Virginia—did not show revenue losses; however, at most, they showed only modest revenue gains because their tax rates are so low that extra sales to buyers in the high-tax states do not generate significant tax revenues. Recent experience demonstrates that international smuggling can occur when differences in cigarette tax rates are substantial. For example, international smuggling has occurred recently between Canada and the United States. From 1984 through 1993, the average real price of a pack of cigarettes in Canada—in 1994 Canadian dollars—increased from $2.64 in 1984 to $5.65 in 1993, as a result of sharp increases in Canadian federal and provincial cigarette taxes. According to a 1994 study for the National Coalition Against Crime and Tobacco Contraband, because of these price increases, Canadians found lower-priced alternatives on the black market. Organized criminal groups purchased Canadian cigarettes that had been exported tax-free to the United States and smuggled them back into Canada. The Canadian government estimated that, in 1993, contraband cigarettes made up over 60 percent of the Québec market and from 15 to 40 percent of the market in other parts of the country. Violence increased, merchants suffered, and in 1 year alone, Canada and its provinces lost over $2 billion (in Canadian dollars) in tax revenues. 
The Canadian Prime Minister believed that Canadian tobacco manufacturers were aware that tobacco exports to the United States had been reentering Canada illegally and that these manufacturers benefited directly from this illegal trade. Canada responded in 1994 by sharply reducing federal and provincial cigarette taxes and increasing its enforcement efforts, among other steps. Since then, international cigarette smuggling has declined considerably. Available evidence also shows that international smuggling is currently occurring between the United States and Mexico; however, the extent of this activity is not known. (For more information on international cigarette smuggling, see app. V.) We provided a draft of this report to USDA for review and comment. We met with officials from the Economic Research Service, including two agricultural economists; Foreign Agricultural Service, including a senior tobacco economist; and Farm Service Agency, including the Deputy Administrator for Farm Programs. USDA generally agreed with the accuracy of the report and provided clarifications on the economic impact of the tobacco industry. USDA noted that employment in the tobacco industry is most accurately characterized by counting only employment in the growing, processing, manufacturing, and wholesaling of tobacco products. Because the studies we reviewed generally included a broader definition of jobs directly related to tobacco by including the retail industry, we used this broader definition throughout our report. In addition, USDA commented that it is important to note that while cigarette consumption has been declining, production and exports have been increasing. We included language in our final report to make this point clear. We also incorporated other suggested clarifications where appropriate. 
We searched the literature to identify studies that assessed the national and regional economic impacts of the tobacco industry and talked to officials at USDA, the tobacco industry, and academia. We obtained data on smoking trends for U.S. youths from the University of Michigan’s “Monitoring the Future” survey; obtained information on cigarette prices from the Tobacco Institute, which we converted to 1997 dollars; and searched the economic literature for estimates of the price/quantity elasticities for cigarette purchases by youths. We obtained all available data on the smoking rate for Canadian youths from Canada’s National Clearinghouse on Tobacco and Health, which Statistics Canada reviewed for accuracy. The National Clearinghouse also provided us with the latest available information on the price of Canadian cigarettes. We obtained estimates of price elasticities for U.S. domestic cigarette consumption by reviewing the economics literature and used a methodology similar to the Federal Trade Commission’s to estimate the impact of price increases on cigarette consumption. The Tobacco Institute provided us with the states’ rates for cigarette excise taxes and data on the states’ revenues from these excise taxes, which we used to calculate estimates of the impact of declining cigarette consumption on states’ revenues from cigarette excise taxes. We obtained data on total state revenues (from all sources) from the Statistical Abstract of the United States, 1997. For information on interstate smuggling in the United States and U.S.-Canadian international smuggling, we talked to officials from ATF; USDA; Canada’s Office of the Auditor General; FIA International Research, Ltd.; and Empire Pacific Group; and we obtained a study that estimated the extent of interstate cigarette smuggling from the Washington State Department of Health. 
To obtain information on U.S.-Mexican smuggling, we interviewed officials from the California Board of Equalization; California Alcoholic Beverage Control; Glendale, California Police Department; ATF; the U.S. Customs Service; U.S. Border Patrol; FIA; Empire Pacific Group; and the Mexican Embassy, and we visited the border ports of San Ysidro, California; and Otay Mesa, California; and the border checkpoint at San Clemente, California. We conducted our review from July 1997 through February 1998 in accordance with generally accepted government auditing standards. Unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the date of this letter. At that time we will send copies to the Senate Committee on Agriculture, Nutrition, and Forestry; Senate Committee on Commerce, Science, and Transportation; House Committee on Agriculture; House Committee on Commerce; the Secretaries of Agriculture, the Treasury, and Health and Human Services; the Attorney General; and other interested parties. We will also make copies available to others upon request. Please contact me at (202) 512-5138 if you or your staff have any questions. Major contributors to this report are listed in appendix VI.
[Appendix tables not reproduced here: tables I.1 through I.3 present tobacco-related employment by industry; tables I.4 and I.5 present estimated jobs gained or lost by region, including a loss of 36,584 jobs in the Southeast Tobacco Region (Ga., Ky., N.C., S.C., Tenn., Va.); table III.1 presents lower- and upper-bound state cigarette tax losses associated with consumption declines of 9 and 40 percent, respectively; and table IV.1 presents changes in state cigarette tax revenues attributable to interstate smuggling (1996 dollars in millions), a nationwide net loss of about $674 million. Table IV.1's changes in tax revenue were derived from estimates of nontaxed sales (packs per capita, 1995) presented in A Tax Study: Cigarette Consumption in Washington State, Washington State Department of Health, Youth Tobacco Prevention Program, 1997; state cigarette tax rates from 1995-96 and 1996 state population data were used in our analysis to be consistent with the time period of the study.]
This appendix presents information on cigarette smuggling between the United States and Canada and between the United States and Mexico. According to the Canadian government, Canada increased the price of cigarettes through federal and provincial excise taxes for several years, which resulted in a steady decline in the number of Canadians who smoke.
From 1984 through 1993, federal taxes on a pack of 20 cigarettes increased from 42 cents to $1.93 Canadian. Provincial taxes, levied in addition to the federal taxes, increased significantly as well. For example, from 1984 through 1993, Québec’s cigarette taxes rose from 46 cents to $1.78 per pack, and Ontario’s rose from 63 cents to $1.66 per pack (in Canadian dollars). However, during most of this period, cigarettes made in Canada were exported tax-free to the United States. According to the 1994 study for the National Coalition Against Crime and Tobacco Contraband, an Indian reserve that straddles the U.S.-Canadian border between Cornwall, Ontario, and Massena, New York, had become the primary conduit for smuggling these cigarettes back into Canada. Once in Canada, the cigarettes were passed through elaborate networks for distribution to vendors throughout the country. By evading the Canadian federal and provincial taxes, smugglers were able to earn huge profits from contraband cigarettes. According to the Canadian government, profits from smuggled cigarettes were an estimated $500 per case, or $500,000 per truckload, in Canadian dollars. The extent of this smuggling activity is indicated by the more than 11-fold increase in U.S. cigarette imports from Canada from 1990 to 1993. (See fig. V.1.) In addition, according to the Canadian government, in 1993, approximately 2.1 million Canadians consumed an estimated 90 million to 100 million cartons of contraband cigarettes with a legal retail value of about $4.5 billion in Canadian dollars. While citing the effectiveness of past efforts to reduce smoking by increasing cigarette taxes, Canadian Prime Minister Chrétien stated in February 1994 that the widespread availability of relatively inexpensive contraband cigarettes was negating government controls on the distribution, sale, and consumption of cigarettes.
According to the Prime Minister, as the portion of the Canadian market supplied by smuggled tobacco increased, the average price paid for cigarettes dropped. Access to cheap contraband tobacco undermined the government’s health policy objectives of reducing tobacco consumption, particularly among youths. In February 1994, Prime Minister Chrétien addressed the smuggling problem by proposing, among other actions,
- strengthening enforcement at targeted smuggling areas, particularly along the U.S.-Canadian border;
- reducing the federal cigarette tax by $5 per carton in all provinces, effective February 9, 1994, and matching any provincial tax reduction over $5, to a maximum federal reduction of $10 (in Canadian dollars);
- imposing an export tax of $8 per carton (in Canadian dollars) to be paid by tobacco manufacturers, with an exemption provided for shipments in accordance with each manufacturer’s historic level of exports;
- imposing a 3-year federal surtax on tobacco manufacturers’ profits to fund a major public education program and other health measures;
- requiring manufacturers to clearly mark individual cigarettes to differentiate cigarettes manufactured for domestic and export use; and
- further restricting access to cigarettes by minors.
From February 9 through April 15, 1994, federal and provincial taxes were significantly lowered in the five provinces—including Québec and Ontario—where international smuggling was particularly troublesome. For example, combined taxes in Québec fell by $2.10 per pack, and taxes in Ontario fell by $1.92 per pack (in Canadian dollars). Once the initial tax cuts took effect, the contraband cigarette market dried up, according to the 1994 study for the National Coalition Against Crime and Tobacco Contraband, although taxes in these provinces have since increased slightly. Consistent with the study’s findings, U.S. cigarette imports from Canada dropped about 96 percent from 1993 through 1996. (See fig. V.1.)
There is currently no consensus among the authorities we interviewed on the extent of international cigarette smuggling between the United States and Mexico. An official from California’s Board of Equalization, which, among other things, is responsible for ensuring that state excise taxes are paid, told us that curtailing U.S.-Mexican smuggling of cigarettes is a priority for the agency. The California Board of Equalization estimates that California loses from $20 million to $50 million annually in revenues from state cigarette excise taxes because of tax evasion, most of which it believes is a result of smuggling between the United States and Mexico. In addition, officials from the Bureau of Alcohol, Tobacco, and Firearms (ATF) told us that such international cigarette smuggling activity is widespread, and they suspect the main source of the cigarettes is duty-free shops located along the border. They stated that instead of permanently leaving the United States through the export market, the cigarettes are mostly diverted back to the Los Angeles area, where they are sold on the black market. Both California Board of Equalization and ATF officials told us that, for the most part, the tobacco companies and the duty-free shops were not helpful in the government’s attempts to stop the cigarette smuggling occurring between California and Mexico. These officials also said that the tobacco companies profit from the sales of their products whether or not federal and state taxes have been paid. An official from the Mexican Embassy in Washington, D.C., also told us that the Mexican government has recently become aware of cigarette smuggling occurring between the United States and Mexico. Although his government did not have any data on the extent of this activity, he believes it is increasing.
He also told us that cigarettes are being brought into Mexico and then being smuggled back into the United States; however, he was not sure where the majority of these cigarettes came from. On the other hand, U.S. Customs Service officials at the ports of San Ysidro, California, and Otay Mesa, California, and U.S. Border Patrol officials at San Clemente, California, told us that they have not seen much evidence of cigarette smuggling between the United States and Mexico. Although Customs officials told us their number one priority is preventing the smuggling of narcotics into the country, this focus does not preclude them from finding other contraband products during their routine searches of vehicles. Customs officials at Otay Mesa—a large border port in California for commercial vehicles entering the United States—told us that their inspections of commercial vehicles over the last 4 years have yielded virtually no instances of cigarette smuggling. At San Ysidro, a border port through which some 40,000 personal vehicles enter the United States each day, Customs officials also told us that they have found very little evidence of cigarette smuggling as a result of their inspections. Our discussions with U.S. Border Patrol officials in San Clemente yielded similar results. The Border Patrol conducts vehicle inspections to search for illegal aliens, and these inspections could uncover a wide range of contraband goods. Although the officials in San Clemente have discovered contraband cigarettes as a result of these inspections, to date they have not found quantities sufficient to conclude that such smuggling activity is widespread. Although the extent of U.S.-Mexican cigarette smuggling is unknown, a 1995 case in the Los Angeles area illustrates that this activity is occurring. A 1998 study by FIA International Research Ltd.
(FIA), a Toronto-based investigative research firm, concluded that international cigarette smuggling is occurring between California and Mexico involving “For Export Only” cigarettes. For example, FIA described a scheme in which a cigarette smuggling operation linked to Mexico was supplying contraband cigarettes to the Los Angeles and San Diego areas. Raids conducted in 1995 yielded 13 arrests and the seizure of seven vehicles and over 4,700 cartons of cigarettes. Authorities found that Mexican citizens had crossed into California, purchased cigarettes from duty-free stores, and brought them back into Mexico. Once these duty-free cigarettes were in Mexico, smugglers concealed them in personal vehicles and smuggled them back across the border into California. Once in California, the cigarettes were consolidated at storage facilities before being distributed to the San Diego and Los Angeles areas, where they were sold in small convenience stores, on street corners, and out of catering trucks and the trunks of cars. This case illustrates that cigarette smugglers are profiting by evading federal, state, and local taxes through a variety of export- and duty-free-cigarette diversion schemes. Currently, the price of a carton of cigarettes in California is about $10.50 at duty-free stores—as compared with a retail price of about $20. If a tobacco settlement increases the price of cigarettes, this differential could increase further, thus further increasing the profitability of obtaining these cigarettes for resale.
Scott Smith, Assistant Director
Daniel Coates, Senior Economist
Kirsten Landeryou, Economist
|
Pursuant to a congressional request, GAO reviewed issues surrounding a proposed national tobacco settlement, focusing on: (1) tobacco-related industries and existing studies that assess the national and regional economic impacts of the tobacco industry; (2) smoking trends for United States and Canadian youths; (3) the potential effect of a settlement on state revenues from cigarette excise taxes; and (4) the extent to which interstate and international cigarette smuggling affects the United States. GAO noted that: (1) according to recent studies, from 353,000 to 555,000 jobs are directly related to the tobacco industry nationwide, including jobs in the tobacco growing, warehousing, manufacturing, wholesaling, and retailing industries; (2) according to these studies, an additional 653,000 to over 2.3 million jobs nationwide are estimated to be indirectly related to the tobacco industry; (3) this additional employment includes jobs associated with the producers that supply materials and services to the tobacco industry and jobs associated with the industries that provide goods and services to the employees of the tobacco industry and its suppliers; (4) two of the studies estimated that declining tobacco consumption--that would occur, for example, as a result of an increase in the price of a pack of cigarettes--would likely result in job losses in the tobacco growing and manufacturing industries; (5) however, it must be recognized that the money previously spent on tobacco products would not simply disappear from the nation's economy; rather it would be reallocated to other goods and services; and (6) these studies indicate that this reallocation could have little effect on national employment, although the Southeast Tobacco Region could experience job losses.
|
The Army operates maintenance depots for overhauling, upgrading, and maintaining missiles, combat vehicles, tactical vehicles, and communication and electronic equipment for the Army, other military services, and foreign countries. These depots, which were established from 1941 through 1961, repair end items (such as ground combat systems, communication systems, and helicopters) and reparable secondary items (various assemblies and subassemblies of major end items, including helicopter rotor blades, circuit cards, pumps, transmissions, and thousands of other components). The number of these facilities has been reduced from 10 in 1976 to the existing 5 as of June 2003, and 2 of the remaining 5 were significantly downsized and realigned as a result of implementing the 1995 Base Realignment and Closure (BRAC) decisions. Figure 1 shows the locations of the remaining five Army maintenance depots. In fiscal year 2002, the depots reported that the total value of work performed was $1.5 billion. In a separate report on the distribution of depot maintenance funds between the public and private sectors, the Army stated that DOD employees performed about 51 percent of the work included in the Army’s fiscal year 2002 depot maintenance program. Table 1 provides the name and location of each of the five Army depots, the primary work performed at each, the hours of work performed in fiscal year 2002, the value of that work, and the number of civilian personnel employed at each depot in fiscal year 2002. Depot maintenance work performed in Army depots has declined significantly since fiscal year 1987. However, the total depot maintenance program, of which the work assigned to the depots is a part, has grown in dollar value by 72 percent, from $1.55 billion to $2.66 billion, over that period.
The decline in the amount of work performed in Army depots reflects the downsizing in the number of systems that followed the end of the Cold War, the trend toward greater reliance on the private sector, and the use of regional repair activities at Army active installations and Army National Guard activities for depot-level maintenance. The type of work performed in the depots also changed from fiscal year 1987 through fiscal year 2002. While workloads once predominately involved the overhaul of Army end items, the percentage of work for non-Army customers and for repair of Army secondary items has increased over the last 16 years. Projections of future work indicate further decline, except that fiscal year 2003 is likely to show a slight increase, at least partially because of support for Operation Iraqi Freedom (the recently completed conflict in Iraq). The extent to which Operation Iraqi Freedom will result in increases in future years is not clear. Future projections may not be a reliable indicator, since they change with conditions, and their reliability decreases the further they extend beyond the current year. Comparing the amount of maintenance work accomplished in the Army depots with the Army’s total maintenance program shows that the total program has increased while the amount of work assigned to the depots has declined. Figure 2 shows the dollar value of the total Army depot maintenance program from fiscal year 1987 through fiscal year 2002. The dollar value of the total Army depot maintenance program grew by 72 percent from fiscal year 1987 through fiscal year 2002. As reflected in figure 3, the labor hours for maintenance programs completed in each of the fiscal years from 1987 to 2002 at the five current Army depots show a significant overall decline in work during much of this period, with a slight upturn from fiscal year 2000 to fiscal year 2002.
The total number of hours for depot maintenance programs completed in Army depots in fiscal year 2002 was 11.0 million, 36 percent less than the 17.3 million hours for maintenance programs completed in fiscal year 1987. Figure 3 indicates that in fiscal year 2000, the number of hours for maintenance programs completed in Army depots was the lowest since 1987. In fiscal year 1999, the Army completed the transfer of operational command and control of Army depots to the depots’ major customers, the Army Materiel Command’s (AMC) subordinate commands, which are also the coordinating inventory control points for the depots’ products. In making these realignments, the Army has tasked AMC to pay more attention to the amount of work assigned to the depots, since these commands are now responsible for the depots’ budgets and operations. The type of work performed in Army depots has changed significantly from fiscal year 1987 through fiscal year 2002. While Army depot work in fiscal year 1987 predominately involved the overhaul of Army end items (such as tanks, helicopters, and wheeled vehicles), in fiscal year 2002, the percentage of work for repairing Army secondary items (reparable components such as engines, transmissions, and rotor blades) was greater than that for end item repair. Our analysis of the labor hours for maintenance programs completed in fiscal years 1987 through 2002 showed that the overhaul of Army end items steadily decreased from 68 to 26 percent of the total workload over that period, while the repair of Army secondary items increased from 4 to 31 percent of the workload total. In addition, the percentage of work performed for non-Army customers increased from 6 to 26 percent of the total hours for maintenance programs completed from fiscal year 1987 through fiscal year 2002.
At the Tobyhanna depot, which now has the largest amount of non-Army work, Air Force work accounted for only 4 percent of the hours of all maintenance programs completed at the depot from fiscal year 1987 through fiscal year 1997. However, from fiscal year 1998 through fiscal year 2002, repair work on Air Force systems was 23 percent of the total amount of work completed at this depot. At Corpus Christi, labor hours for Navy work accounted for 9 percent of the hours for all programs completed from fiscal year 1987 through fiscal year 1995. The hours spent for Navy work grew to 22 percent of the hours for all programs completed from fiscal year 1996 through fiscal year 2002. However, since the Navy withdrew some of its helicopter work from Corpus Christi in fiscal year 2003, that level of Navy work is not likely to continue unless new Navy workloads are designated for repair at Corpus Christi. At the Letterkenny depot, labor hours for foreign military sales accounted for 4 percent of the hours for all programs completed from fiscal year 1987 through fiscal year 1999. For fiscal years 2000 through 2002, foreign military sales work at that depot increased to 15 percent of the total hours of work completed. Workload projections suggest that in fiscal year 2003, the small upward trend begun in fiscal year 2001 will continue for another year, but another period of decline may occur from fiscal year 2004 through fiscal year 2008. Army component and recapitalization workload is projected to be the majority of the depots’ work. These projections are an April 2003 estimate from the Army Workload and Performance System (AWPS), an analytically based workload-forecasting system that projects future workloads and coordinates personnel requirements. This projection includes some recent increases in prior estimates for fiscal year 2003 to reflect revised estimates for reparable components to support Operation Iraqi Freedom. 
Officials at several depots said they are working overtime and have hired some temporary employees to support this increased requirement, but an official at one depot said it is not likely to be able to produce the amount of work currently estimated for fiscal year 2003 in AWPS because it does not have enough people. Depot officials said they do not know whether reconstitution requirements following Operation Iraqi Freedom will result in increases in depot workload in fiscal year 2004 and beyond, and AWPS does not reflect increases in the out-years resulting from Operation Iraqi Freedom. According to an AMC official, the Army does not yet have a plan for managing the reconstitution, but one is being developed. Army officials said, and we have confirmed, that out-year estimates are not always reliable predictors of the specific work that will be performed in a future year. These projections are only as good as the preparers’ knowledge of future requirements. As we have reported in the past, workload estimates for Army maintenance depots vary substantially over time owing to the reprogramming of operations and maintenance appropriation funding and unanticipated changes in customer orders. Workload estimates are subject to much uncertainty and to frequent fluctuations with changing circumstances. For example, as previously noted, fiscal year 2003 requirements in AWPS have increased during the year as Operation Iraqi Freedom’s demands have generated more work than previously expected. On the other hand, reductions to future requirements frequently occur. For example, to fund other priorities, the Army has been considering reducing recapitalization work, which is forecasted to be about 29 percent of the depots’ future workload. Furthermore, the impact that reconstitution requirements following Operation Iraqi Freedom could have on the Army depots is unclear.
Additionally, according to Army officials, AWPS does not receive the actual planned out-year programs of all depot customers. Customers whose actual programs are entered into AWPS vary by subordinate command. Out-year workload for customers whose programs are not entered must be estimated by the subordinate commands on the basis of past history and discussions with their customers about workload planned for the depots. Thus, these estimated workloads may not represent actual future workloads. Moreover, since work from other customers has become a much more substantial share of the total Army depot workload, the accuracy of these estimates has become more significant. Finally, future workloads that Army acquisition officials might have planned for the depots are difficult to identify, and AWPS will not accurately reflect these workloads unless acquisition officials provide the subordinate commands with such information. The Army acquisition community is primarily responsible for establishing future capability at the depots on the basis of the results of source-of-repair decisions and other factors, such as core requirements. However, as discussed later, the amount of such work likely to be assigned to the depots is unclear. Army officials explained that the acquisition community does not enter these workloads into AWPS and that no central database exists of systems undergoing source-of-repair decisions to help the subordinate commands identify planned workloads and adjust AWPS projections accordingly. For these reasons, depot managers do not consider workload projections from AWPS reliable beyond 2 years, and they recognize that changes will occur even in the first 2 years. As previously discussed, the reliability of out-year projections in AWPS is affected by a number of factors, such as changing requirements, funding limitations, and work that may be planned but has not been identified and included in AWPS.
While requirements and funding changes are expected occurrences, the Army faces the possibility of incomplete projections in AWPS regarding the size of the direct labor force required in the future. This is because the Army’s current capability to identify maintenance workloads being planned for its depots is limited. Specifically, officials stated that the Army has no standard business rules or procedures for identifying the work that the Army acquisition community and non-Army customers may be planning for Army depots. They said that, at best, the current process is a hit-or-miss situation, depending on how aggressive the Army commands are in requesting such customers to identify their forecasted workloads, if it is done at all. Moreover, an Army official told us that the Army does not have a mechanism in place to adjust these estimates when it becomes clear that such forecasts are inaccurate. Improvements in this area could increase the reliability of future depot workload projections, as well as depot planners’ ability to manage depot operations efficiently. In its comments on a draft of this report, DOD officials stated that the lack of workload projection data for inter-service depot workloads should be addressed across all the military services, not just at Army depots. Consequently, the department will initiate a study to examine how the identification and reporting of depot inter-service workload projections across all the military services can be improved. Army depots have had some efficiency problems, caused by several factors, including the loss of work to the private sector and field-level maintenance activities. Initiatives such as facility and equipment downsizing, depot partnerships, and “lean manufacturing” have been implemented to address depot inefficiencies.
Trends in two key metrics, capacity utilization and employee productivity, show that progress has been made in recent years, although further improvements are still desirable. Additional workloads could play a key role in further improving the cost-effectiveness of the Army depots, and acquiring work for new systems is essential for long-term depot viability. Whether new systems work will be assigned to the depots is unclear, but depot officials believe that partnerships may offer the best potential for new systems work. Whether depot-level work that gravitated to field-level activities will return to the depots is also unclear. Depot maintenance operations have not been as efficient as Army depot managers would like them to be. This is, in part, due to a host of factors, including the impact of workload reductions, the changing nature of the work assigned, and workload performance issues such as less-than-expected employee productivity and work slowdowns caused by a lack of required spare and repair parts or inefficient repair processes. We have identified several issues that adversely affected depot efficiency and productivity, including DOD’s policy for greater reliance on the private sector for depot support of new weapon systems and major upgrades, the increased reliance on the use of regional repair activities and private-sector contractors for work that otherwise might be done in the depots, cost and schedule overruns, excess capacity, and difficulties in effectively using depot personnel. In August 2002, an Army task force identified problems with depot efficiency and productivity at the Corpus Christi depot. The task force pointed to the following as key problem areas: the use of inaccurate data to price maintenance programs, schedule and cost overruns caused by work performed against wrong standards and beyond the statement of work, and the use of direct workers to perform indirect tasks.
Initiatives that have been implemented to improve depot efficiency and productivity include “rightsizing” at realigned depots, depot partnerships designed to improve the efficiency and performance of depot operations, and lean-manufacturing initiatives. The 1995 base realignment and closure process significantly realigned two of the remaining five Army depots. Significant efforts were made to rightsize the workforce, property, plant, and equipment on the basis of assigned and projected workloads at the Letterkenny and Red River depots, which had the benefit of BRAC funding to support their realignment activities. The other depots have attempted to improve their efficiency as well. Various partnering initiatives have been undertaken to improve depot performance. In fiscal year 2002, the Army had 42 depot maintenance partnerships, the largest number in any of the military services. One of the most successful has been the partnership initiative implemented at Corpus Christi for the T700 engine. To reduce repair time and improve the reliability of the Army’s T700 helicopter engine, the Corpus Christi depot entered into a partnership with General Electric. Under the partnership, Corpus Christi provides the facilities and equipment and repairs the engine; General Electric provides spare parts as well as technical, engineering, and logistics services. According to depot officials, this effort has introduced General Electric’s best practices at the depot, resulting in a 26-percent reduction in engine turnaround time on the T700 repair line and a 40-percent increase in test cell pass rates for the repaired engines.
Depot and contractor officials both attribute improved depot repair times for the T700 engine to better parts availability and improvements to the depot’s repair processes, although they also recognize that the related T700 recapitalization effort, begun shortly after the formation of the partnership, may also be a factor influencing these improvements. Figure 4 shows the repair line for the T700 engine. Other initiatives are also being implemented to improve efficiency and productivity in Army depot maintenance operations under the umbrella of lean manufacturing. Most of these initiatives are in the early phases of implementation, but some progress is being reported. Anniston officials report that they have identified a more efficient reciprocating-engine process. Corpus Christi officials reported improvements for other maintenance processes as a result of their lean-manufacturing initiatives. Depot managers at Letterkenny set a goal to reduce the repair time for the Patriot missile launcher and report that they have already reduced the number of technicians by three and the floor space by 70 percent. Red River officials reported that process improvements have allowed them to increase monthly maintenance production for a truck engine from 17 to 40. A Tobyhanna official stated that, because of its process improvements, unit costs for the Sidewinder missile’s guidance and control system have decreased substantially. Furthermore, with planned improvements on the Sidewinder and two other systems, Tobyhanna officials expect major reductions in overhaul and recapitalization timelines, reduced customer costs, gains in customer satisfaction, and greater employee satisfaction as depot workers take the lead in transforming their work. Trends in two key metrics, capacity utilization and employee productivity, show that progress has been made in recent years, although improvements are still desirable.
DOD measures capacity utilization by comparing the amount of work produced with the work that could potentially be produced in a single-shift operation using the number of personnel on board. Table 2 shows capacity utilization at each of the five Army depots from fiscal year 1999 through fiscal year 2002. Compared with the DOD goal of 75 percent utilization, capacity utilization at the five Army depots fluctuated from fiscal year 1999 through fiscal year 2002 but generally improved. In fiscal year 1999, three of the five depots were below the goal by an average of 12 percent, while two depots exceeded the goal by an average of 6 percent. In contrast, by fiscal year 2002, all five depots exceeded the goal by an average of 4 percent. At 83 percent utilization, the Corpus Christi depot showed the highest capacity utilization in fiscal year 2002, and Letterkenny and Red River had utilization rates of 82 and 79 percent, respectively. The higher capacity utilization was largely achieved by decreasing the physical layout, or “footprint,” of the maintenance depot. Downsized by decisions of the 1995 BRAC process, Letterkenny and Red River received BRAC funds to support their realignment activities. While the capacity utilization of these two depots for fiscal year 2002 was relatively high, they have the smallest workloads of the five depots. It is important to remember that DOD’s capacity-utilization computation somewhat understates the depots’ full potential for producing work. The computation assumes operations during an 8-hour workday and a 5-day workweek. However, all the depots have some overtime and some shift work and, if needed, could increase the amount of both. Another metric, employee productivity, also indicates that Army depot operations are improving.
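As a rough illustration of the single-shift measure described above, the sketch below computes utilization as hours produced divided by the hours the on-board workforce could produce on one shift. The function name and the assumed figure of about 1,615 available hours per worker per year are illustrative assumptions, not DOD's official computation.

```python
# Illustrative sketch only; DOD's actual capacity-utilization
# methodology may differ in its inputs and adjustments.

def capacity_utilization(hours_produced: float, workers_on_board: int,
                         single_shift_hours: float = 1_615.0) -> float:
    """Percent of single-shift potential actually produced.

    single_shift_hours is an assumed value for the productive hours
    one worker contributes per year on an 8-hour-day, 5-day-week shift.
    """
    potential = workers_on_board * single_shift_hours
    return 100.0 * hours_produced / potential

# Hypothetical depot: 2,000 direct workers producing 2.6 million hours.
util = capacity_utilization(2_600_000, 2_000)
print(f"{util:.0f} percent of single-shift capacity (DOD goal: 75 percent)")
```

Because the denominator assumes a single shift, overtime and second-shift work can push the measure past 100 percent, which is why the text notes that the computation somewhat understates the depots' full potential for producing work.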
Employee productivity measures the average number of productive hours worked in a year by depot workers after leave, holidays, training, and other time away from the job are excluded. Table 3 shows average employee productivity at each of the five Army depots from fiscal year 1999 through fiscal year 2002. The Army depot average of 1,600 hours for fiscal year 2002 was significantly higher than it was a few years ago and is progressing toward the DOD standard of 1,615 hours. In fiscal year 1999, none of the depots met the standard; they averaged 1,504 hours, ranging from a low of 1,421 hours to a high of 1,599 hours. In fiscal year 2002, the number of employee productive hours was 1,625 at the Tobyhanna depot and 1,614 at Red River. The employee productivity of all of the Army depots has improved since 1999. Depot managers said they were successful in improving worker productivity by emphasizing to direct workers the need to reduce the amount of time spent in nonproductive areas. Additional workloads could play a key role in further improving the cost-effectiveness of the Army depots and are essential for the depots' long-term viability. As the systems currently being repaired in the depots age, they will be withdrawn from the Army's inventory and replaced with new and/or upgraded systems. If repair and overhaul for the new and upgraded systems go to the private sector, workload in the depots will continue to diminish. In considering additional workloads for its depots, the Army has several options: (1) move work that the private sector is performing, either by reassignment at contract renewal time or by establishing a partnership arrangement with the private sector; (2) assign new work through the source-of-repair process the Army uses to identify where the work will be performed; and (3) move work from field-level activities that now perform depot tasks.
In considering additional workload, an essential issue for the Army is whether its depots have the capability to take on work that is being performed by other sources and, if not, whether establishing that capability is affordable. Acquiring new systems work will be the key to the survivability of the depots in the long term. In recent years, the depots have received very little new and upgraded systems work. As older systems are withdrawn from the inventory, the repair work on systems currently assigned to the depots will continue to decline. Unless new systems work is identified for the depots, they will become more and more inefficient as their workload declines. With regard to the potential for additional workloads for the depots from new systems, Army acquisition officials told us that establishing new capability at the depots has become more difficult with the Army's implementation of performance-based logistics because the Army is not buying the technical data, or the rights to use the data, needed to establish repair capability at its depots. This could adversely affect the Army's ability to realign existing work from the private sector to government-owned depots. An internal Army study found that weapon systems program officials make decisions to outsource the repair of new and upgraded systems without considering the impact of these decisions on the requirement to maintain core capability for essential systems in military depots. Depot managers believe that partnership arrangements are an effective means for improving the efficiency and productivity of depot operations and offer the best opportunity to bring additional workloads into the depots. Among the potential partnerships being explored for new workload are the following: Anniston, for the M1A2 tank service extension program; Corpus Christi, for the Comanche helicopter; Letterkenny, for the Javelin missile; and Red River, for the Heavy Expanded Mobility Tactical Truck.
With regard to moving work from field-level activities that now perform depot tasks, the Army has taken some initiative to get control over this problem, but the extent to which it has dealt with the proliferation of depot work in field-level activities is unclear. The Report of the House Committee on Armed Services on the Floyd D. Spence National Defense Authorization Act for Fiscal Year 2001 said that the Army has yet to account accurately for depot-level maintenance workloads performed by organizations outside the depot system. That report directed the Army to provide a report identifying the proliferation of depot-level maintenance in these activities by February 1, 2001, and directed us to review the Army's report and provide the Congress with an analysis, including an assessment of the Army's ability to comply with 10 U.S.C. 2466, the requirement that not more than 50 percent of the funds made available for depot-level maintenance be used to contract for performance by nonfederal personnel. The Army has not yet reported to the Congress, but Army officials stated that, as of July 3, 2003, the report was being reviewed internally. We will analyze the report when it is completed. Beginning in November 1993, the Army conducted biennial identifications of core capability requirements and the workloads necessary to sustain those depot maintenance core capabilities. The most recent core identification, however, was in December 1999 for fiscal year 2001; it showed that 10.8 million work hours were associated with maintaining core capability requirements for the five depots. An updated core identification is overdue, but in January 2003 the Deputy Under Secretary of Defense for Logistics and Materiel Readiness issued a new core identification methodology and, at the time of our review, had additional revisions to the methodology under way. Thus, the Army has not yet computed core capability requirements based on this new methodology.
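The 10 U.S.C. 2466 requirement mentioned above is a simple ratio test. A minimal sketch (Python) follows; the dollar figures are placeholders for illustration, not Army budget data.

```python
def complies_with_2466(contract_funds: float, total_depot_funds: float) -> bool:
    """10 U.S.C. 2466 test: not more than 50 percent of funds made
    available for depot-level maintenance may be used to contract for
    performance by nonfederal personnel.
    """
    return contract_funds <= 0.5 * total_depot_funds

# Hypothetical year: $490 million contracted out of a $1.0 billion program.
print(complies_with_2466(490e6, 1_000e6))  # True: 49% is within the 50% cap
```

In practice the difficulty the committee report identifies is not the arithmetic but the inputs: without an accurate accounting of depot-level work performed outside the depot system, `total_depot_funds` and `contract_funds` cannot be established reliably.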
Furthermore, the Army does not routinely assess whether the work performed by its five depots is adequate to sustain their core capabilities, and the workloads performed by the five depots have not been at the level identified by the 1999 core identification as necessary to maintain core capabilities. The identification of core logistics capability involves a complex process that has been evolving over the past 10 years. This process is based on a requirement contained in 10 U.S.C. 2464 to identify and maintain within government-owned and -operated facilities a core logistics capability, including the equipment, personnel, and technical competence required to maintain weapon systems identified as necessary for national defense emergencies and contingencies. Specifically, the Secretary of Defense is to identify the workloads required to maintain core logistics capabilities and assign to government facilities sufficient workload to ensure cost efficiency and technical competence in peacetime, while preserving the capabilities necessary to fully support national defense strategic and contingency plans. To accomplish this requirement, beginning in November 1993, the Office of the Deputy Secretary of Defense for Logistics outlined a standard multistep method for determining core capability requirements and directed the services to use this method in computing biennial core capability and associated workload requirements. In November 1996, the core methodology was revised to include (1) an assessment of the risk involved in reducing the core capability requirements as a result of having maintenance capability in the private sector and (2) the use of a best-value comparison approach for assigning workload not associated with maintaining a core capability to the public and private sectors. The core methodology provided a computational framework for quantifying core depot maintenance capabilities and the workload needed to sustain these capabilities.
It included three general processes:

1. The identification of the numbers and types of weapon systems required to support the Joint Chiefs of Staff's wartime-planning scenarios.

2. The computation of depot maintenance core capability workload requirements, measured in direct labor hours, to support the weapon systems' expected wartime operations as identified in the wartime-planning scenarios.

3. The determination of the industrial capabilities (including the associated personnel, technical skills, facilities, and equipment) that would be needed to accomplish the direct labor hours generated from the planning scenarios. That determination is adjusted to translate those capabilities into the peacetime workloads needed to support them.

These peacetime workloads represent the projected workload necessary to support core capability requirements for the next program year in terms of direct labor hours. To conclude the process, the services then identify specific repair workloads and allocate the core work hours needed to accomplish the maintenance work that will be used to support the core capabilities at the public depots. We previously reported that the DOD depot maintenance policy was not comprehensive and that the policy and implementing procedures and practices provided little assurance that core maintenance capabilities were being developed as needed to support future national defense emergencies and contingencies.
Some of the weaknesses were that (1) the existing policy did not provide a forward look at new weapon systems and associated future maintenance capability requirements, (2) the existing policy did not link the core identification process to source-of-repair policies and procedures for new and upgraded systems, and (3) the various procedures and practices being used by the services to implement the existing policy, such as using “like” workloads to sustain core capabilities, were affecting the ability of the depots to establish core capabilities. In October 2001, DOD revised the methodology by dividing the core methodology into two distinct parts to more clearly distinguish between core capability requirements and the depot maintenance workloads needed to satisfy those requirements. Detailed core capability and associated workload computations would be performed on a biennial basis in conjunction with the planning, programming, and budgeting system in order to address both the requirements for new systems and changes to existing systems. Also, core computations would be reviewed annually to assess the impact of unanticipated budgetary adjustments. Regarding the new methodology issued in January 2003, DOD officials told us that some revisions are being made and the methodology has not yet been finalized. Thus, we have not reviewed the methodology in detail and cannot be sure whether the new methodology will correct the weaknesses we identified in the core process. The Army’s identification of core capabilities and workloads required to sustain them in December 1999 showed that the five depots had a total workload requirement of 10.8 million work hours associated with its core capability requirements. As shown in table 4, work performed by the depots for the 4-year period, fiscal years 1999 to 2002, was generally below the amount identified for total core capability requirements. 
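The comparison summarized in table 4 reduces to checking performed hours against the 10.8-million-hour core requirement. A minimal sketch (Python) is shown below; the per-year hour totals are invented placeholders, not the report's table 4 figures.

```python
# Core workload requirement identified in December 1999 for fiscal year 2001,
# in direct labor hours (DLH), across the five Army depots (report figure).
CORE_REQUIREMENT_DLH = 10_800_000

# Placeholder performed-hour totals by fiscal year; the actual table 4
# values are not reproduced here.
performed_dlh = {1999: 9_700_000, 2000: 9_400_000, 2001: 10_100_000, 2002: 10_600_000}

# A positive shortfall means the depots performed less work than the core
# identification said was needed to sustain core capabilities.
shortfalls = {fy: max(0, CORE_REQUIREMENT_DLH - hours)
              for fy, hours in performed_dlh.items()}

for fy, gap in shortfalls.items():
    status = f"{gap:,} DLH below core requirement" if gap else "core requirement met"
    print(fy, status)
```

The report's point is that no one in the Army routinely performs even this simple comparison, so the gap between performed work and core requirements goes unmonitored.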
Depot officials stated that for the core identification process, the depots identify the skills required by job series to support the core capability. However, Army officials said that neither the depots nor the Army routinely assess the extent to which work performed by the depots compares with the identification of core capability requirements and associated workloads. Thus, they do not have the information needed to determine whether the level and nature of the work performed in the depots is sufficient to ensure cost efficiency and technical competence and to preserve core capability. When we discussed identification of core capability and associated workloads with depot managers, they said that ensuring appropriate workloads are going to the depots is essential to their being able to maintain required core skills to support combat readiness. They also expressed concern that the definition of core capability workload requirements seems to constantly fluctuate and that maintenance workloads that once were identified as required for core capabilities were being transferred to the private sector. For example, depot managers at the Red River Army Depot pointed out that workload associated with the Heavy Expanded Mobility Tactical Truck was a significant factor in the depot’s ability to maintain the necessary core capabilities. However, the truck workload was lost in October 2001 when the Army decided to stop recapitalization work at the depot and to use a contractor to perform an extended service life program for the truck. They said that other systems, such as the Bradley Fighting Vehicle, are headed in the same direction. Depot managers also pointed out that the depots are not always assigned work sufficient to ensure cost efficiency and technical competence and to preserve surge capability. Additionally, the depots are not capable of providing some core capabilities. 
For example, the depots do not have the capability to repair key components of the M1A2 tank, the Apache helicopter, and the Bradley Fighting Vehicle for which core capability requirements were identified. More specifically, Anniston does not have the capability to support unique electronic components for the M1A2 tank, Corpus Christi does not have the capability to support Apache Longbow unique components, and Red River does not have the capability to support electronic components for the Bradley A3 model. Our October 2001 report identified a number of the same concerns with the fluctuations in core capability identification and the loss of work required to sustain depot core capabilities. DOD's latest core policy, released in January 2003, requires the services to develop an assessment of the specific workload necessary to achieve core goals at the DOD, service, and facility levels. However, the services have not yet been tasked by DOD to recompute core capability requirements based on the new policy, and officials said some changes to the revised policy are expected to occur. Although we previously recommended that a strategic plan for DOD-owned depots be developed, neither the Office of the Secretary of Defense nor the Department of the Army has implemented a comprehensive strategic plan for defense maintenance to revitalize or protect the future viability of its depot facilities and equipment and its depot workers. The Army has taken steps to develop a strategic plan for its depots, but the plan is not comprehensive or current, and the Army has not yet implemented it. The Office of the Secretary of Defense has undertaken a depot planning study but still has no depot strategic plan. Our prior reports have demonstrated that a strategic plan is critical to the future viability of the defense depot system.
For example, in our October 2001 report, we pointed out that logistics activities represent a key management challenge and that depot maintenance is an important element of those activities. We noted that DOD was at a critical point with respect to the future of its maintenance programs and that the future role for the military depots in supporting those programs was unclear. Finally, we pointed out that before DOD can know the magnitude of the challenge of revitalizing its depot facilities, equipment, and workforce, it must first know what its future workloads will be; what facility, equipment, and technical capability improvements will be required to perform that work; and what personnel changes will be needed to respond to retirements and workload changes. We recommended, among other things, that the Secretary of Defense direct the Under Secretary of Defense for Acquisition, Technology, and Logistics, in conjunction with the military services, to establish expedited milestones for developing strategic and related implementation plans for the use of military depots that would identify desired short- and long-term core capabilities and associated capital investments and human capital needs. However, although the Department is conducting a study that could lead to the development of a depot strategic plan, as of July 2003 DOD still had no strategic plan providing the direction required to keep the depots viable. We again addressed the need for strategic planning in our recent report on strategic workforce planning for the DOD industrial workforce, noting that DOD has not implemented our prior recommendations regarding the need for a DOD depot strategic plan. Absent a DOD depot plan, the services have laid out frameworks for strategic depot planning in varying degrees, but these frameworks are not comprehensive.
While the Army has taken some actions toward developing a strategic depot plan, the plan is not comprehensive, and its implementation was suspended. In January 2000, the Army Deputy Chief of Staff for Logistics published the Army Depot Maintenance Enterprise Strategic Plan, which provided mission and vision statements for the Army depots and identified five strategic issues for which the depots began developing action plans:

1. Identification and management of all depot maintenance requirements for Army systems throughout all phases of their life cycles in the Planning, Programming, and Budget Execution System.

2. Restructuring the process for determining the source of depot repair to ensure that appropriate approval authorities are used for decisions to rebuild, overhaul, upgrade, and repair above a certain threshold (e.g., dollar value).

3. Ensuring that the Army depot workforce is capable of meeting future depot maintenance requirements.

4. Managing materiel/supplies (parts) used by the depots to provide for more efficient depot operations.

5. Making Army depots more competitive with private-sector depot maintenance providers.

Identifying these broad strategic issues, along with some objectives, measures, and action plans, was a step in the right direction. However, the Army did not finalize or implement its action plans. Army planners told us that implementing the strategic plan was put on hold pending an effort to reassess depot capabilities and requirements as part of the Army's effort to identify the depot capabilities that had proliferated in field-level activities. The plan did not address depot maintenance being performed in field-level activities. The Army's assessment of depot proliferation was supposed to result in a report to the Congress on this subject, but as previously discussed, the Army has not yet provided this report.
Furthermore, Army officials stated that there has been no update to modify the strategic plan to address how the Army will manage this category of depot work. Continuing issues about (1) the assignment of reduced workloads to Army maintenance depots, (2) deficiencies in the process of quantifying both core depot maintenance capabilities and the workload needed to ensure cost efficiency and technical competence and to preserve surge capability, and (3) strategic planning for depots raise significant questions about the long-term viability of Army depots. We have discussed these issues in the past, but they remain unresolved. It will be important for the Congress and the Department of Defense to clarify these issues to ensure the continued performance of required support resources in the future. In addition to the issues discussed in the past, we identified another area where action would improve data reliability for Army depots: the development and implementation of procedures for identifying and reporting depot workload projections from the Army acquisition community and from non-Army customers. By addressing both the identification and reporting of initial forecasts as well as subsequent changes to the forecasts, greater reliability should be achievable for the Army Workload and Performance System. Furthermore, as DOD has observed, improved projections of interserviced maintenance work would benefit all depots, not just those of the Army.
To improve the reliability of future maintenance workload projections in all DOD maintenance depots, we recommend that the Secretary of Defense, through the Under Secretary of Defense for Acquisition, Technology, and Logistics, (1) require the Army Materiel Command, in conjunction with the Army acquisition community, to develop and implement standard business rules and procedures for identifying and reporting Army depot workload projections from the Army acquisition community and (2) require the DOD depot maintenance community to develop and implement ways to improve the identification and reporting of depot inter-service workload projections across all the military services using standard business rules and procedures. In commenting on a draft of this report, the Department partially concurred with our recommendations to improve the reliability of future workload projections. Appendix II contains the text of DOD's comments. The Department partially concurred with the recommendation in our draft report that the Army Materiel Command develop standard business rules and procedures for identifying and reporting Army depot workload projections. While agreeing that this could be done for work coming from the Army acquisition community, the response noted that the Department did not believe that the Army Materiel Command alone could establish standard business rules and procedures for identifying and reporting Army depot workload projections from non-Army customers. However, the Department agreed with us that a need exists for Army depots to have valid workload projections from the Army acquisition community and non-Army customers and that standard business rules and procedures are required.
Moreover, the Department’s response stated that since the lack of workload projection data for inter-service depot workloads should be addressed across all the military services, the Department planned to initiate a study to examine how the identification and reporting of depot inter-service workload projections across all the military services can be improved. Consequently, we modified our recommendation to address the Department’s comments. The Department partially concurred with a second recommendation in the draft report requiring the Army acquisition community and non-Army customers to report depot workload projections for Army depot work through the Army Workload and Performance System using the standard business rules and procedures. The Department stated that it agreed in concept that Army customers should provide Army depots with workload projections, but that it currently does not appear feasible for all non-Army customers to report depot workload projections for Army depots through the Army Workload and Performance System. Therefore, we dropped the reference to the Army Workload and Performance System from our recommendation. The Department stated also that, as with the first recommendation, it plans to address this recommendation with a study to examine how the identification and reporting of inter-service workload projections across the military services can be improved. The Department also provided some technical comments for our draft report that were incorporated where appropriate. We are sending copies of this report to interested congressional committees; the Secretary of Defense; the Secretary of the Army; and the Director, Office of Management and Budget. We will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. 
If you or your staff have questions regarding this report, please contact me at (202) 512-8412 or [email protected] or Julia Denman, Assistant Director, at (202) 512-4290 or [email protected]. Other major contributors to this report were Bobby Worrell, Janine Prybyla, Jane Hunt, and Willie Cheely. To answer the specific questions posed by the Ranking Minority Member, Subcommittee on Military Readiness, House Committee on Armed Services, we interviewed Army officials and analyzed pertinent information at Army headquarters in the Washington, D.C., area; Headquarters, Army Materiel Command in Alexandria, Virginia; and three subordinate Army commands—the Army Aviation and Missile Command, Huntsville, Alabama; Communications-Electronics Command, Fort Monmouth, New Jersey; and the Tank-automotive and Armaments Command, Warren, Michigan. Additionally, we interviewed depot managers and reviewed pertinent information at the Army’s five depots: Anniston Army Depot, Anniston, Alabama; Corpus Christi Army Depot, Corpus Christi, Texas; Letterkenny Army Depot, Chambersburg, Pennsylvania; Red River Army Depot, Texarkana, Texas; and Tobyhanna Army Depot, Tobyhanna, Pennsylvania. We also made extensive use of our prior work and ongoing work related to Army depot maintenance. To assess the trends in historical and future workloads assigned to the Army depots, we analyzed workload information from the Army’s automated databases. Specifically, for historical workloads, we evaluated the trend in workload hours for closed maintenance programs from the Army Headquarters Application System for fiscal years 1987 through 2002. For trends in future workloads, we used workload hours from the Army Workload and Performance System for fiscal years 2003 through 2008. For the reliability of future workload projections, we used our prior work showing that Army depot workload estimates are subject to frequent changes because of factors such as fluctuations in requirements and funding levels. 
We also questioned Army officials and depot managers about the reliability of the workload estimates as shown by the Army Workload and Performance System. In determining whether the depots have sufficient workload to promote efficient maintenance operations, we compared metric data that we and others have previously identified with data on the Army's current operations. We examined data on key metrics to determine how well the depots performed assigned workloads against key metrics and standards, such as depot-level capacity utilization and employee productivity. We obtained metric data from the Army depots and from Headquarters, Army Materiel Command. Since Headquarters, Army Materiel Command, could not provide the data prior to 2001, the data for fiscal years 1999 and 2000 are from the depots. In those instances where the reported data for fiscal years 2001 and 2002 differed, we used the data reported by Headquarters, Army Materiel Command, because the headquarters data were reported by the depots. To identify whether additional workloads are possible, we relied upon our prior and ongoing work, which shows that the Army had contracted with the private sector for maintenance workloads that its depots had previously performed and that upgrades and modifications of older systems and new weapon systems were a potential source of work for the depots. We also questioned acquisition and logistics officials at the subordinate commands with responsibility for workloads for the depots about workloads being considered for the Army depots and the limitations to bringing these workloads to the depots. To identify whether initiatives are being implemented to improve efficiency, we examined the plans and projects that the depots had under the umbrella of lean manufacturing to improve maintenance operations. In addition, we reviewed depot reports on the extent to which these initiatives were improving operations.
To answer whether the Army has identified the depots’ core capability and provided its depots with workload to use that capability, we reviewed the Department of Defense and Army guidance for computing core capability requirements and associated workloads in December 1999 for fiscal year 2001 and compared the results with workloads assigned to the depots since fiscal year 1999. We also examined the department’s new methodology issued in January 2003 for computing core capability requirements and questioned department and Army officials about the schedule for implementing the new methodology. We also questioned depot officials about the adequacy of workloads assigned and the extent to which the work allows the depots to maintain necessary capabilities. For the question of whether the Army has a long-range plan for the future viability of an efficient depot system, we relied upon prior work that shows that neither the department nor the Army had a comprehensive defense maintenance strategic plan. We conducted our review from September 2002 through June 2003 in accordance with generally accepted government auditing standards.
The Army's five maintenance depots produced work valued at $1.5 billion in fiscal year 2002; contractors performed the remaining 49 percent of the Army's depot-level work. GAO was asked to assess (1) the trends in and the reliability of depot workload projections; (2) whether workloads are sufficient for efficient depot operations, initiatives are under way to improve efficiency, and additional workloads are possible; (3) whether the Army has identified depots' core capability and provided workload to support that capability; and (4) whether the Army has a long-range plan for a viable, efficient depot system. The work assigned to Army maintenance depots has declined by 36 percent, although the cost of the Army's total maintenance program has increased since fiscal year 1987. Except for fiscal year 2003, projections for future work in the depots through fiscal year 2008 show further decline. Depot work also changed from predominantly overhauling Army end items to the increased repair of components. In addition, work from non-Army customers has increased from 6 to 26 percent. Army component and recapitalization work is projected to be the majority of depot work in the future. Depot planners generally do not have reliable projections of work requirements for non-Army customers. Because of this and other factors, including changing conditions, future projections have limitations. Potential increases in depot work resulting from the Iraq war are not yet clear. Various factors, including workload reductions and workload performance issues, have resulted in efficiency and productivity problems in Army depots. Such initiatives as facility and equipment rightsizing, depot maintenance partnerships, and "lean manufacturing" have been implemented. Trends in two metrics--capacity utilization and employee productivity--show that, while more needs to be done, efficiency and productivity improvements have been made.
Additional workloads, particularly for new and upgraded systems, are essential for future depot viability. However, in the past most new work has gone to private contractors. Some new-systems work is being explored for depots, and depot managers believe that partnering with the private sector may be the best chance for getting such work. The Army has not identified its depots' core capability requirements using a revised DOD methodology meant to overcome weaknesses in the core process. At the same time, it is unclear whether the revised methodology, which is undergoing further changes, will correct weaknesses in the core process. Moreover, no one in the Army assesses the extent to which depot work compares with identified core capability requirements. Depot managers are concerned about the loss of work and the failure to obtain work necessary to support core capabilities. The Army does not have a comprehensive and current strategic plan for the depots and has not implemented the limited plan it developed. GAO concluded in a 1998 report that the Army had inadequate long-range plans for its depots and that such planning is essential if significant progress is to be made in addressing the complex, systemic problems facing the depots. Despite the time that has passed, the same issues remain. DOD has not implemented a comprehensive and current plan for resolving continuing issues about (1) reduced workloads being assigned to Army maintenance depots and (2) deficiencies in the process of quantifying both core depot maintenance capabilities and the workload needed to ensure cost efficiency and technical competence and to preserve surge capability. Without such a plan, the long-term viability of Army depots is uncertain.
The end of the cold war left the United States with a surplus of weapons-grade plutonium. Much of this material is found in a key nuclear weapon component known as a pit. In 1997, DOE announced a plan to dispose of surplus, weapons-grade plutonium through an approach that included fabrication of plutonium into MOX fuel for use in domestic commercial nuclear reactors. In 2000, the United States and Russia entered into a Plutonium Management and Disposition Agreement, in which each country pledged to dispose of at least 34 metric tons of surplus, weapons-grade plutonium. Through a protocol to the agreement signed in 2010, the United States and Russia reaffirmed their commitment to dispose of surplus, weapons-grade plutonium as MOX fuel in nuclear reactors, and the agreement entered into force in 2011. The MOX facility is designed to remove impurities from plutonium feedstock obtained from nuclear weapon pits, form the plutonium into MOX fuel pellets, and fabricate pellets into fuel assemblies for use in a reactor. The MOX facility is a reinforced concrete structure measuring about 600,000 square feet (including support buildings) and, when complete, will include about 300 separate process systems using approximately 23,000 instruments; 85 miles of process piping; 500,000 linear feet of conduit; 3,600,000 linear feet of power and control cable; and 1,000 tons of heating, ventilation, and air conditioning duct work. The WSB will be a 33,000 square foot reinforced concrete structure and will include tanks, evaporators, and solidification equipment to process radioactive liquid waste streams from the MOX facility into solid waste forms suitable for disposal at DOE sites in New Mexico and Nevada. Figure 1 shows aerial views of construction progress for the MOX facility and WSB as of June 2013 and July 2013, respectively. In addition to the MOX facility and WSB, NNSA's plans for the U.S.
Plutonium Disposition program include the following two additional components: MOX Irradiation, Feedstock, and Transportation (MIFT). Among other activities, this component includes: (1) production of plutonium feedstock for the MOX facility, (2) qualification of MOX fuel for use in commercial nuclear reactors, and (3) procurement and maintenance of shipping containers for plutonium feedstock and MOX fuel. Plutonium Disposition and Infrastructure Program (PDIP). This component includes overall management and integration of the MOX facility and WSB projects and integration of the projects with activities falling under MIFT; preparation of environmental impact statements and records of decision for the program in accordance with the National Environmental Policy Act; support for infrastructure at the Savannah River Site, such as site roads; and other activities. NNSA's plans for producing plutonium feedstock previously included design and construction of a stand-alone Pit Disassembly and Conversion Facility (PDCF) at the Savannah River Site. As we reported in March 2010, NNSA never established a definitive cost and schedule estimate for the PDCF, but NNSA estimated in January 2011 that the cost of the facility could range from $4.5 billion to $4.8 billion. NNSA canceled the PDCF in January 2012 and, instead, proposed in a July 2012 draft environmental impact statement to meet the feedstock requirements for the MOX facility through existing facilities at DOE's Los Alamos National Laboratory and the Savannah River Site. According to NNSA's draft life-cycle cost estimate for the Plutonium Disposition program, NNSA spent $730.1 million on the PDCF prior to its cancellation. In July 2012, NNSA also announced its preferred alternative for disposition of 13.1 metric tons of surplus plutonium not already included in the 34 metric tons planned for disposal as MOX fuel.
The additional plutonium included pits declared excess to national defense needs, as well as surplus non-pit plutonium. According to NNSA officials, the preferred alternative would increase the amount of plutonium disposed as MOX fuel to about 42 metric tons. As of December 2013, DOE had not issued a final supplemental environmental impact statement or record of decision on the facilities to be used to meet plutonium feedstock requirements for the MOX facility or on the disposition pathway for the 13.1 metric tons of surplus plutonium. NNSA’s Office of Defense Nuclear Nonproliferation provides policy direction for the Plutonium Disposition program, develops and manages annual budgets and the life-cycle cost estimate for the overall program, and manages the MIFT and PDIP components of the program. NNSA’s Office of Acquisition and Project Management is responsible for managing construction of the MOX facility and WSB projects within approved cost and schedule estimates. To do so, the office manages teams of federal project directors and federal staff that provide direction and oversight of the contractors for both projects, report monthly on the projects’ cost and schedule performance, and evaluate contractors’ performance in areas such as management of subcontractors. The office also conducts reviews of the construction projects to evaluate technical, cost, scope, and other aspects of the projects so that any necessary course corrections can be made. DOE’s project management order requires that such reviews be conducted at least once per year. NNSA entered into cost-reimbursable contracts for construction of the MOX facility and WSB. A cost-reimbursable contract provides for payment of a contractor’s allowable incurred costs to the extent prescribed in the contract. Agencies may use cost-reimbursable contracts when uncertainties in the scope of work or cost of services prevent the use of contract types in which prices are fixed, known as fixed-price contracts. 
The MOX and WSB contracts included fees with payment tied to meeting or exceeding preestablished requirements or withholding of fees for any requirements not met, thereby reducing contractors’ profits. Under the MOX contract, NNSA provided four types of fees that the contractor could earn: (1) incentive fees—a type of fee specifically tied to meeting a project’s cost and schedule estimate; (2) milestone fees tied to on-time completion of construction milestones; (3) award fees, which are generally intended to motivate performance in areas other than cost and schedule, such as safety; and (4) fixed fees, a set amount a contractor receives for contract performance. In contrast, NNSA included only one type of fee for the WSB—a performance incentive fee under the contract for management and operation of the Savannah River Site, which included construction of the WSB. In order to provide the contractor performance incentives specifically related to construction of the WSB, NNSA established various performance measures, such as meeting the project’s cost and schedule and completing construction milestones, and allocated portions of the fee to each performance measure. The contractors for the MOX facility and WSB work with subcontractors to construct the facilities. For example, the WSB contractor entered into a subcontract that included all construction activities for the WSB with the exception of early site work, such as installation of underground utilities. Once the construction subcontractor completes its work, the WSB contractor is responsible for start-up testing and operation of the facility. Under DOE’s project management order, the Deputy Secretary of Energy is the senior DOE official accountable for all of the department’s project acquisitions. 
In addition, the Deputy Secretary approves cost and schedule estimates for all major construction projects—defined as those with values of at least $750 million, which includes the MOX facility—and approves any cost increase over $100 million for a major or nonmajor project. The DOE Office of Acquisition and Project Management conducts external independent reviews to validate estimates prior to approval by the Deputy Secretary. Once estimates have been approved, this office monitors projects' cost and schedule performance and reports to the Deputy Secretary on a monthly basis. Figure 2 depicts the roles of NNSA, DOE, and contractors in managing the Plutonium Disposition program. The GAO Cost Estimating and Assessment Guide and the GAO Schedule Assessment Guide compiled best practices corresponding to the four characteristics of high-quality, reliable cost and schedule estimates, respectively. The characteristics of a high-quality, reliable cost estimate are comprehensive, well-documented, accurate, and credible. For example, (1) a comprehensive estimate has enough detail to ensure that cost elements are neither omitted nor double counted, (2) a well-documented estimate allows for data it contains to be traced to source documents, (3) an accurate estimate is based on an assessment of most likely costs and has been adjusted properly for inflation, and (4) a credible estimate discusses any limitations because of uncertainty or bias surrounding data or assumptions. Our cost estimating guide also lays out 12 key steps that should result in high-quality cost estimates. For example, one of the steps is to conduct an independent cost estimate––that is, one generated by an entity that has no stake in approval of the project but uses the same detailed technical information as the project estimate.
Having an independent entity perform such a cost estimate and comparing it with a project team’s estimate provides an unbiased test of whether a project team’s estimate is reasonable. The four characteristics of a high-quality, reliable schedule are comprehensive, well-constructed, credible, and controlled. For example, (1) a comprehensive schedule includes all government and contractor activities necessary to accomplish a project’s objectives, (2) a well-constructed schedule sequences all activities using the most straightforward logic possible, (3) a credible schedule uses data about risks and opportunities to predict a level of confidence in meeting the completion date, and (4) a controlled schedule is updated periodically to realistically forecast dates for activities. NNSA identified various drivers of the cost increases for the MOX facility and WSB. NNSA’s budget request for fiscal year 2014 summarized the cost drivers that NNSA considered to be most significant. In addition, NNSA identified some of these drivers in earlier documents, including in reports of project reviews conducted in 2011 and 2012, in monthly status reports for the projects, and, for the WSB, in the document requesting approval for a cost increase. NNSA and contractor officials provided additional details on these drivers during interviews with us. Key drivers NNSA identified for the cost increase for the MOX facility included the following: DOE’s approval of the cost and schedule before design was complete. The head of NNSA’s Office of Acquisition and Project Management told us that, judging from the MOX contractor’s design costs during construction of the MOX facility, the overall design was about 58 percent complete when DOE approved the project’s cost and schedule estimate in April 2007. 
In contrast, according to DOE’s project management order, to support the development of a cost estimate, the design of complex nuclear processing facilities needs to be closer to 100 percent complete than the design of basic facilities, such as administrative buildings and general purpose laboratories. NNSA’s budget request for fiscal year 2014 stated that the cost of critical system components for the MOX facility averaged 60 percent higher than estimated as a result of approval of these estimates before design was complete. According to NNSA and MOX contractor officials, after the contractor completed designs for critical system components, such as the gloveboxes used in the facility for handling plutonium and related infrastructure, equipment suppliers submitted higher bids than the contractor anticipated. For example, according to the contractor’s Vice President of Operations, a vendor submitted a bid in 2008 that was four times the amount the same vendor had estimated in 2005. Higher-than-anticipated costs to install equipment. For example, the MOX contractor estimated in its September 2012 proposal to increase the cost of the facility that the labor hours to install each foot of the approximately 85 miles of piping in the facility increased by as much as 26 percent and that, as facility designs became more definitive, the total amount of pipe increased by close to 33 percent over the previous estimate. In addition, according to NNSA, the number of safety systems needed to meet Nuclear Regulatory Commission (NRC) requirements was greater than anticipated, further adding to equipment installation costs. According to NNSA officials, NNSA and the contractor did not have a good understanding of the cost of designing the facility to meet NRC requirements related to demonstrating the ability to withstand an earthquake. 
The officials explained that the facility's design is based on a similar facility in France but that NRC regulatory requirements differ from those in France. The contractor's difficulty identifying suppliers and subcontractors able to fabricate and install equipment meeting nuclear quality assurance criteria. According to NNSA's review of the MOX project in 2011, the project was experiencing the same issues identifying qualified suppliers and subcontractors as other nuclear projects across DOE. These issues included a higher than expected effort associated with attracting qualified vendors and, after vendors were selected, responding to questions or correcting noncompliance with requirements. For example, according to NNSA and the MOX contractor, the contractor needed to station quality assurance personnel at supplier and subcontractor locations to oversee activities. Greater-than-expected turnover of engineering and technical staff. In particular, the project lost staff to other nuclear industry projects, including projects in neighboring states, resulting in a nearly complete turnover of construction management personnel over a period of several years and the need to provide training to replacement personnel. NNSA identified this driver in its budget request for fiscal year 2012. Specifically, the budget request stated that over 15 percent of the project's engineering and technical personnel had left for other nuclear industry jobs in the previous year with pay increases of at least 25 percent. The budget request further stated that finding experienced replacements had become difficult and expensive. According to the budget requests for fiscal years 2013 and 2014, the loss of experienced engineering and technical staff to other nuclear industry projects has continued. Change in scope of the project to add capability to the MOX facility to produce plutonium feedstock.
As part of its decision to cancel plans for a stand-alone PDCF and to instead meet feedstock requirements through existing facilities, NNSA directed the MOX contractor to include feedstock capability in its September 2012 proposal to increase the cost of the facility. The contractor’s proposal included an estimate of $262.3 million to add feedstock capability. In identifying these drivers of the cost increase for the MOX facility, NNSA did not identify the dollar amount associated with each cost driver. An NNSA official said that the MOX contractor’s system for tracking and reporting on cost and schedule performance could potentially be used to determine dollar amounts that each driver added to the overall cost increase—which is one possible use of such a system—but that doing so would be time-consuming and difficult. As a result, NNSA officials could not substantiate the relative importance of the cost drivers. For example, NNSA officials said they had not conducted a formal analysis to back up an estimate, which they had made when we first discussed the cost drivers with them, that lack of design maturity of critical system components accounted for more than half of the increase. In reviewing the MOX contractor’s system, we found that, as NNSA officials stated, using the system to determine the dollar amounts each driver added to the cost increase would be difficult—for example, because the system’s identification of cost increases at a summary level, such as site construction support, did not correspond to the cost drivers identified by NNSA. Key cost drivers NNSA identified for the WSB included the following: Higher-than-anticipated bids for the construction subcontract. According to the NNSA federal project director for the WSB, the WSB contractor received two bids in 2009 from prospective construction subcontractors that both came in at about $26 million higher than the contractor’s estimate. 
NNSA officials did not explain the reason for the difference, stating that the bidders were not required to provide details of their estimates. The federal project director said that NNSA supported the WSB contractor awarding the construction subcontract, despite the higher cost, in order to maintain the schedule for completing the WSB in time to support the start-up of the MOX facility. According to NNSA officials, the project applied cost savings from earlier work to cover part of the increased cost of the construction subcontract and had sufficient contingency—the portion of a project’s budget that is available to account for uncertainties in the project’s scope—to absorb the remainder of the increase. Consequently, however, contingency to absorb further cost increases as construction progressed was reduced. Design errors, omissions, and inconsistencies. According to the NNSA federal project director, the WSB contractor and subcontractor made hundreds of design changes, which led to an additional cost increase in the construction subcontract. According to NNSA’s log of design changes, as of August 2013, design changes increased the cost of the construction subcontract by about $15 million, from $91.5 million to $106.5 million. The federal project director said that, unlike the design of the MOX facility, the design of the WSB was about 90 percent complete at the start of construction. A September 2008 report of NNSA’s independent review of the WSB prior to approval of the cost and schedule estimate found that the design was essentially complete. Nevertheless, according to the federal project director, design changes were needed because of constructability issues, such as equipment that met specifications in design documents not being available by the time the project reached construction. Schedule delays resulting from the construction subcontractor not meeting required targets. 
According to the NNSA federal project director's feedback on the WSB contractor's performance in September 2009, NNSA had concerns related to the project schedule and the ability to meet the completion date in part because of a delayed start in the construction subcontract. By the time NNSA approved the cost increase for the WSB in December 2012, schedule delays in the construction subcontract had grown to 15 months. The approved cost increase included about $30 million in the contractor's delay-related costs because NNSA's contract for the WSB is cost-reimbursable. The actual cost attributable to the WSB may be even higher depending on the outcome of a lawsuit filed by the subcontractor against the WSB contractor related to design changes and schedule delays that increased the subcontractor's costs in excess of the amount specified in its fixed-price subcontract. The approved cost increase for the WSB included contingency to account for the possibility of higher costs incurred by the construction subcontractor. NNSA has not analyzed the underlying, or root, causes of the close to $3 billion in construction cost increases for the MOX facility and WSB. DOE's project management order requires that lessons learned be captured throughout a project to allow for the exchange of information within DOE in the context of project management and to benefit future endeavors. However, the project management order does not include a requirement for a root cause analysis of projects experiencing significant cost increases or schedule delays. NNSA officials said that they decide on a case-by-case basis whether to conduct a root cause analysis. In contrast, under the Weapon Systems Acquisition Reform Act of 2009, the Department of Defense must perform a root cause analysis of a cost increase that exceeds a certain threshold. Documentation NNSA provided to us on the cost drivers for the MOX facility and WSB does not provide clear details about the causes of the cost increases.
Such details can be found in a root cause analysis, which would help address questions about why the drivers identified by NNSA occurred and help inform lessons learned. Key questions about the cause of the key drivers include the following: DOE’s reasons for approving a cost and schedule estimate for the MOX facility before the design was complete, even though a July 2006 review of the project found that the cost estimate’s basis on portions of the design that were less than 50 percent complete posed a risk to the project. Similarly, a root cause analysis would address why one of the drivers of the cost increase for the WSB identified by NNSA was design errors, omissions, and inconsistencies, given that a review prior to approval of the project’s cost and schedule estimate found that most of the design was ready for construction. The extent to which NNSA and its contractors shared responsibility for cost drivers, such as the greater-than-anticipated number of safety systems needed in the MOX facility to meet NRC requirements. According to NNSA officials, the department hired the MOX contractor because it considered the contractor to be well-qualified to engineer and estimate all of the safety systems for the facility, taking into account NRC requirements. However, the record for DOE’s approval of the cost and schedule estimate for the facility shows that DOE was aware of complexities in adapting MOX technology to comply with NRC requirements. Specifically, the minutes from DOE’s July 2006 meeting to request approval of the estimate stated that these complexities had already contributed to a $1.1 billion increase in the estimated cost. The sufficiency of measures DOE took to ensure that the cost estimate for the MOX facility it approved in 2007 reflected an awareness of market conditions, such as the availability of suppliers and subcontractors with the ability and experience to meet nuclear quality assurance criteria. 
As required under the MOX contract, in October 2006—before DOE approved the cost and schedule estimate for the facility—the contractor submitted a construction market analysis report, which stated that the contractor had experienced trouble obtaining qualified suppliers and that the subcontractor pool using nuclear quality standards had been decreasing due to inactivity in the nuclear industry. However, the report provided limited detail and did not include recommendations to address availability of qualified suppliers. The thoroughness of DOE's review, required under DOE's project management order, to ensure that the WSB contractor's system for tracking and reporting on cost and schedule performance provided accurate information. DOE recertified the contractor's system in December 2011 after identifying and closing out several corrective actions and continuous improvement opportunities. However, DOE found additional problems with the system after January 2012, when the WSB contractor informed NNSA that schedule delays for the project were greater than the contractor previously revealed. Based in part on the contractor's revelations, DOE reexamined the contractor's system and suspended its certification in November 2012. The corrective actions NNSA and its contractors took after periodic project reviews identified problems, including problems cited by NNSA as drivers of cost increases for the MOX facility and WSB. For example, multiple reviews of the MOX facility found that costs to install equipment were underestimated. A July 2006 review found that installation of electrical; piping; and heating, ventilation, and air-conditioning equipment was underestimated by close to $160 million and nearly 3 million labor hours. NNSA's project reviews of the facility in 2011 and 2012 continued to raise concerns about unrealistic installation rates. The responsiveness of NNSA project managers to emerging cost and schedule issues.
Without a review of the timing of NNSA initiating the process of increasing the projects’ cost and schedule estimates, it is not clear whether NNSA acted in a timely manner or whether project cost and schedule indicators warranted earlier action. For example, an NNSA review of the MOX facility in the spring of 2011 found that the most significant risk to delivering the project within cost centered on the ability of the project team to identify about $364 million in savings to offset expected cost growth, but NNSA did not initiate the process of increasing the project’s cost and schedule estimates until January 2012. Without a root cause analysis, it is uncertain whether NNSA will be able to accurately identify underlying causes of the cost increases for the MOX facility and WSB in order to identify and implement corrective measures and identify lessons learned to share with and apply to other DOE construction projects. After determining that the performance of the contractors for the MOX facility and WSB contributed to the projects’ construction cost increases, NNSA took steps to hold the contractors accountable for their performance by withholding fees specified under the contracts. Specifically, NNSA withheld portions of two of the four types of the MOX contractor’s fees and 41 percent of the WSB contractor’s fees. NNSA withheld portions of two of the four types of fees that the MOX contractor could earn under the contract for construction of the facility— incentive fees and award fees. In total, NNSA withheld $45.1 million or close to one-third of all fees the contractor could earn as of November 2013. Under the terms of the MOX contract, the contractor could still earn incentive fees that have been withheld, but only if it completes the overall project within cost and schedule. Table 1 summarizes fees paid to and withheld from the contractor as of November 2013. Details of fees NNSA withheld and paid under the MOX contract include the following: Incentive fees. 
NNSA did not pay $36.5 million or over half of the $65.6 million in incentive fees that the MOX contractor could earn from fiscal year 2008, when construction began, through fiscal year 2013. Of the $29.1 million in incentive fees paid to the contractor, $21.6 million remains provisional, meaning that NNSA can require that the fees be paid back as a result of the project not being completed within cost. The amount not paid represented the contractor's entire incentive fees for fiscal years 2011 through 2013. Specifically, under the terms of the MOX contract, NNSA can withhold quarterly payments of incentive fees if an increase in the projected cost to complete the MOX facility exceeds $200 million. NNSA began withholding incentive fees for the first quarter of fiscal year 2011 when, for the first time, the increase in the projected cost to complete the facility exceeded this threshold. NNSA memos for subsequent quarters in fiscal year 2011 noted that the project's cost and schedule metrics continued to worsen, reducing the likelihood of resumption of payments. In a July 2011 letter to the contractor explaining its rationale for not resuming payments, NNSA stated that it was sensitive to the potential impacts of the "nuclear renaissance"—the contractor's term for the resurgence of U.S. nuclear engineering and manufacturing capability after being dormant for more than 20 years, which the contractor stated limited the availability of qualified suppliers and subcontractors and led to staff turnover and higher-than-anticipated costs to install equipment. However, NNSA stated that such impacts would not necessarily overcome other evidence showing that the contractor was not meeting the overarching goal of the incentive fees, which is that the facility be completed within cost. Award fees. NNSA withheld $8.6 million or about a quarter of the $32.6 million in award fees that the MOX contractor could earn from fiscal year 2008 through fiscal year 2012.
The amount withheld included about half of the fees the contractor was eligible to earn in fiscal year 2012. NNSA's award fee evaluation for fiscal year 2012 cited various factors, such as poor construction planning; less than optimal coordination of work; and overly conservative specifications for installation of fire doors, resulting in delays and unnecessary costs. In contrast, NNSA paid $24.0 million in award fees for performance in other areas, such as maintaining a high level of worker safety—an area in which the contractor has consistently performed well, according to NNSA's award fee evaluations. Milestone fees. NNSA did not withhold any milestone fees and instead paid milestone fees of $30.8 million for tasks with deadlines ranging from February 2009 to March 2014. Examples of tasks for which NNSA paid milestone fees (some of which the MOX contractor completed early) included completing the roof, installing the first glovebox, constructing a technical support building, and completing a start-up plan for the facility. According to NNSA officials, although NNSA did not withhold milestone fees, NNSA stopped paying any of the $30.2 million in remaining milestone fees as part of an understanding with the contractor to renegotiate the amount of and conditions for earning milestone fees. Fixed fees. According to the contracting officer, NNSA did not withhold any of the $15.7 million in fixed fees—the total amount of fixed fees for construction-related work under the MOX contract. NNSA included these fees in the contract to reward the contractor for work performed during contract negotiations, when other fees had not yet been negotiated. In a March 2013 analysis of the WSB contractor's performance, the NNSA contracting officer for the WSB recommended that the contractor should be held accountable for performance failures that contributed to the project's cost increase.
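Tallying the four MOX fee types described above reproduces the report's totals: $45.1 million withheld out of roughly $144.7 million, or close to one-third. A minimal sketch of that arithmetic (figures in millions are taken from the text; treating their sum as "all fees the contractor could earn" is an assumption about the report's denominator):

```python
# Tally of MOX contract fees paid and withheld as of November 2013,
# using the dollar figures (in millions) stated in the text.
fees = {
    # fee type: (paid, withheld)
    "incentive": (29.1, 36.5),   # $65.6M total possible through FY2013
    "award":     (24.0, 8.6),    # $32.6M total possible through FY2012
    "milestone": (30.8, 0.0),    # none withheld
    "fixed":     (15.7, 0.0),    # none withheld
}

paid = sum(p for p, w in fees.values())
withheld = sum(w for p, w in fees.values())
total = paid + withheld

print(f"Withheld: ${withheld:.1f}M of ${total:.1f}M "
      f"({withheld / total:.0%})")  # $45.1M of $144.7M, about 31%
```

The withheld share of about 31 percent is consistent with the report's characterization of "close to one-third of all fees the contractor could earn."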
For example, the analysis stated that the contractor did not require the subcontractor to add crews or take other steps to correct delays until almost 2 years after the federal project director began expressing concerns about the delays. In accordance with this assessment, NNSA withheld $7.7 million or about 40 percent of the $18.9 million in performance incentive fees that the WSB contractor could earn from fiscal year 2009, when construction began, through fiscal year 2012, for the portion of fees allocated to construction of the WSB under the management and operation contract for the Savannah River Site. Most of the fees withheld were for the contractor's performance in fiscal years 2011 and 2012 (see table 2). In particular, NNSA withheld $3.3 million of the $6.9 million in fees the contractor could earn in fiscal year 2011 and $3.9 million of the $4.0 million in fees the contractor could earn in fiscal year 2012. The fees withheld were tied to various performance measures, which DOE acquisition regulations require be established prior to the start of each evaluation period. Performance measures NNSA established included meeting the schedule for testing various types of equipment, providing engineering support to and coordinating with the construction subcontractor, and maintaining the project within pre-established cost and schedule metrics. The $3.3 million in fees withheld for fiscal year 2011 included $2 million that NNSA took back—that is, was paid back by the contractor—after making its fee determination for the contractor. Specifically, according to a December 2012 letter from the NNSA contracting officer to the contractor, the fiscal year 2011 fee determination was premised on the contractor's statements that schedule delays were recoverable and that the project would be completed within the approved cost estimate.
Shortly after NNSA made its fee determination, however, the contractor notified NNSA that the project was further behind schedule than previously represented and that cost factors not included in the contractor's system for tracking and reporting on cost and schedule performance would result in a cost overrun. The contracting officer's letter stated that NNSA would have reduced the contractor's fee if it had known the extent of delays and cost overruns when it made its fee determination, and NNSA required the contractor to repay $4 million. In May 2013, NNSA agreed to a settlement with the contractor to reduce the amount taken back to $2 million after the contractor appealed NNSA's initial demand.

In addition to withholding fees, in a June 2012 letter to the contractor, NNSA's contracting officer questioned why she should not conclude that the contractor's actions rose to the level of gross negligence or willful misconduct, warranting disallowance of costs, meaning that the contractor would bear part of the cost increase resulting from the project's schedule delays. For example, the letter stated that the contractor's system for tracking and reporting on cost and schedule performance did not meet industry standards and impeded NNSA's ability to understand the potential impact of delays in construction of various segments of the project on the final delivery date. According to NNSA officials, NNSA is waiting until after completion of WSB construction, when total construction costs will be known, to determine unallowable costs.

NNSA's most recent cost and schedule estimates for the Plutonium Disposition program did not fully reflect the characteristics of high-quality, reliable estimates as established by best practices used throughout government and industry and documented in the GAO Cost Estimating and Assessment Guide and GAO Schedule Assessment Guide.
Specifically, (1) NNSA’s draft April 2013 life-cycle cost estimate for the overall program was partially comprehensive, partially well-documented, and partially accurate but did not meet any of the best practices for a credible estimate; (2) the MOX contractor’s September 2012 proposal for increasing the cost of the MOX facility was substantially comprehensive but partially well-documented and accurate and minimally credible; and (3) the WSB contractor’s February 2013 monthly update to its schedule estimate was minimally well-constructed and partially met the other three characteristics of a reliable schedule—comprehensive, credible, and controlled. In developing its draft April 2013 life-cycle cost estimate of $24.2 billion for the Plutonium Disposition program, NNSA followed several of the 12 key steps for developing high-quality cost estimates, including defining the estimate’s purpose, defining the program’s characteristics, and obtaining the data. NNSA did not follow other key steps, however, such as conducting an independent cost estimate. As a result, the estimate was not reliable. In particular, NNSA’s draft life-cycle cost was partially comprehensive, partially well-documented, and partially accurate but did not meet any of the best practices for a credible estimate. Table 3 summarizes the major components of NNSA’s draft April 2013 life-cycle cost estimate. The estimate assumed that the MOX facility would start operations in November 2019 and that it would take approximately 15 years to complete the mission to dispose of 34 metric tons of surplus weapons-grade plutonium. Table 4 lists the steps, or best practices, necessary for developing a high- quality cost estimate. Appendix II summarizes our assessment of NNSA’s process for developing its draft life-cycle cost estimate against the steps that should result in the four characteristics of a high-quality cost estimate. 
Our assessment of NNSA’s process for developing its draft life-cycle cost estimate included the following observations: Comprehensive. The draft life-cycle cost estimate was partially comprehensive because work breakdown structures were developed for the MOX and WSB projects and other components of the program, but NNSA had not formalized a program-level work breakdown structure. A typical work breakdown structure provides a clear picture of what needs to be accomplished, how the work will be done, and a basis for identifying resources and tasks for developing a cost estimate. Without a program-level work breakdown structure, NNSA cannot ensure that its life-cycle cost estimate captures all relevant costs, which can mean cost overruns. Well-documented. The draft life-cycle cost estimate was partially well- documented because NNSA defined the estimate’s purpose and the program’s characteristics, but it did not develop a single document to describe data sources and steps taken in developing the estimate— such as applying escalation rates to account for inflation—so that the estimate could be replicated by someone other than those who prepared it. In addition, NNSA stated that a document identified the estimate’s ground rules and assumptions but that the assumptions have changed frequently, hindering development of a life-cycle cost estimate. Examples of changes in assumptions not reflected in NNSA’s draft April 2013 estimate included the slowdown of activities during the assessment of alternative plutonium disposition strategies and NNSA’s plans to increase the amount of plutonium disposed of as MOX fuel. Accurate. The draft life-cycle cost estimate was partially accurate in that NNSA followed the best practice for developing a point estimate—a best guess at a cost estimate usually falling between best and worst case extremes. NNSA also updated the estimate periodically to include actual costs and changes to program and project requirements. 
However, NNSA did not use a formal system for tracking and reporting on cost and schedule performance to update the estimate, limiting the ability of someone other than those who prepared the estimate to check the estimate's accuracy and to identify when, how much, and why the program cost more or less than planned.

Credible. The draft life-cycle cost estimate was not credible because NNSA did not conduct an independent cost estimate to provide an unbiased test of whether its estimate was reasonable, a formal sensitivity analysis to examine the effects of changing assumptions and ground rules, or a risk and uncertainty analysis to assess variability in point estimates due to factors such as errors and cost estimators' inexperience or biases. NNSA conducted such analyses for portions of its life-cycle cost estimate, but not for the entire estimate. For example, NNSA's Plutonium Disposition program office arranged for another office within NNSA to conduct an independent assessment of the MOX facility's operations costs, but not for the program's entire life-cycle cost.

NNSA did not follow all key steps for developing high-quality cost estimates in part because it did not have a requirement to develop its life-cycle cost estimate. According to NNSA officials, DOE's project management order includes requirements for development of cost and schedule estimates for a project, such as the MOX facility or WSB, but does not specify equivalent requirements for a program like Plutonium Disposition, which includes multiple projects, as well as supporting activities. As a result, when developing the life-cycle cost estimate for the Plutonium Disposition program, NNSA officials used an ad hoc approach to adapt requirements for managing projects in DOE's project management order.
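The risk and uncertainty analysis called for by the credibility criterion can be illustrated with a small simulation. The sketch below is a minimal Monte Carlo risk analysis in Python; the work-breakdown elements and all dollar figures are hypothetical placeholders, not NNSA data, and a real analysis would draw its cost distributions from documented risk inputs rather than assumed triangular ranges.

```python
import random

random.seed(42)

# Hypothetical work-breakdown-structure elements with
# (low, most likely, high) cost ranges in $ millions.
# Illustrative figures only -- not drawn from NNSA's estimate.
wbs = {
    "construction":   (6500, 7700, 9500),
    "operations":     (9000, 11000, 14000),
    "feedstock":      (1500, 2000, 3000),
    "waste_handling": (500, 700, 1100),
}

# Point estimate: the sum of the most-likely values.
point_estimate = sum(likely for _, likely, _ in wbs.values())

def simulate(n=20000):
    """Monte Carlo: draw each element from a triangular distribution
    and sum, yielding a distribution of total program cost."""
    totals = []
    for _ in range(n):
        totals.append(sum(random.triangular(lo, hi, mode)
                          for lo, mode, hi in wbs.values()))
    totals.sort()
    return totals

totals = simulate()
p50 = totals[len(totals) // 2]       # median outcome
p80 = totals[int(len(totals) * 0.8)]  # 80th-percentile outcome

print(f"point estimate: ${point_estimate:,.0f}M")
print(f"P50: ${p50:,.0f}M  P80: ${p80:,.0f}M")
# The gap between the point estimate and the P80 value indicates the
# contingency needed to reach a given confidence level.
```

A sensitivity analysis in the GAO sense would rerun this simulation while varying one ground rule at a time (for example, the escalation rate applied to account for inflation) and compare the resulting totals.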
NNSA officials also said that the April 2013 life-cycle cost estimate did not include all the steps of a high-quality, reliable estimate in part because NNSA considered the estimate to be a draft and, therefore, had not fully implemented plans for developing it.

In the absence of a specific requirement in DOE's project management order for developing a life-cycle cost estimate for a program, NNSA officials said they developed a life-cycle cost estimate for the Plutonium Disposition program for several reasons. According to these officials, these reasons included that the cost of the program is largely made up of capital projects, such as the MOX facility, and that requirements for congressional budget submissions specify that the full life-cycle cost of such projects be presented. In addition, each year NNSA must submit to Congress its estimated expenditures covering the fiscal year with respect to which the budget is submitted and at least the four succeeding fiscal years. NNSA officials said that, to accurately estimate expenditures for this 5-year period, they needed to develop a life-cycle cost estimate for the overall Plutonium Disposition program. An NNSA official noted that NNSA plans to use a version of its life-cycle cost estimate as a basis for evaluating alternative strategies to dispose of surplus weapons-grade plutonium.

The MOX contractor's September 2012 proposal for increasing the cost of the MOX facility was substantially comprehensive but was partially well-documented, partially accurate, and minimally credible. The contractor's estimate did not fully reflect the characteristics of a high-quality, reliable estimate in part because it was a proposal, as opposed to an approved cost estimate. For example, one of the best practices for a well-documented estimate—and a requirement of DOE's project management order—is that a cost estimate be reviewed and accepted by management.
Because DOE had not approved it and instead postponed its review and approval pending the outcome of NNSA's assessment of alternative plutonium disposition strategies, the contractor's estimate partially met this best practice. This best practice would be met by DOE's completion of its review and approval of a new estimate for the MOX facility, assuming the assessment of alternative plutonium disposition strategies maintains the current strategy of disposing of plutonium as MOX fuel.

Though the contractor's September 2012 estimate did not fully reflect the characteristics of a high-quality estimate and cannot be considered reliable, the MOX contractor began using it as a provisional baseline for purposes of monthly reporting on the project's cost and schedule performance. Specifically, as directed by NNSA, the contractor began a transition in June 2012 to report its monthly performance against the contractor's proposed estimate of $7.7 billion. The contractor completed the transition and ceased any reporting of performance against the previously approved estimate early in 2013. Managing projects that no longer have an approved cost and schedule estimate is a challenge because cost and schedule estimates provide a baseline for measuring progress. At a July 2013 hearing, the Deputy Secretary of Energy noted that not having such a baseline is the point of maximum risk of unrestricted cost growth on a project.

Appendix III summarizes our assessment of how well the MOX contractor's proposal met the characteristics of a high-quality estimate. Our assessment included the following observations:

Comprehensive. The proposal was substantially comprehensive in that it included all construction costs, as defined by the statement of work under the MOX contract. The proposal was not fully comprehensive, however, because it only partially met certain best practices for a comprehensive estimate, such as documenting all cost-influencing ground rules and assumptions.
The proposal partially met this best practice because it did not provide justifications for some assumptions, such as the assumption that not more than 10 percent of the supports for piping systems would be nonstandard and require unique designs.

Well-documented. The proposal was partially well-documented because it described in sufficient detail the calculations performed and the estimating methodology used to derive the cost of each element in the work breakdown structure. However, it did not provide all types of information specified in best practices for a well-documented estimate, such as how data on labor and travel costs were normalized. Data normalization is often necessary to ensure comparability because data can be gathered from a variety of sources and in different forms that need to be adjusted before being used.

Accurate. The proposal was partially accurate in that it appeared to adjust cost elements for inflation and contained only a few minor mistakes, but the contractor did not update its proposal with actual costs incurred after it developed the proposal and submitted it to NNSA in September 2012. NNSA and contractor officials agreed that the estimate was no longer an accurate reflection of the cost to complete construction—for example, because the proposal assumed a higher level of funding than the project received in fiscal year 2013. The officials said that, if the MOX project continues, the contractor would need to prepare a new proposal that includes costs for work conducted after the initial proposal was developed.

Credible. The proposal was minimally credible because DOE halted its independent cost estimate of the proposal pending the outcome of NNSA's assessment of alternative plutonium disposition strategies.
Moreover, the proposal did not include a formal sensitivity analysis to examine the effects of changing assumptions and ground rules, and it provided no evidence that major cost elements were cross-checked to determine whether alternative cost-estimating methods produced similar results. Finally, the proposal included an analysis of risks, such as difficulty attracting and retaining workers, and uncertainty in estimating materials and other costs. On the basis of this analysis, the proposal included a total of $713.1 million to account for risks and uncertainty—$641.4 million for the original scope of the MOX facility and $71.7 million for the addition of plutonium feedstock capability (see table 5). However, the contractor did not properly conduct or clearly document all steps in the analysis to determine the amount of funding to account for risks and uncertainty that could increase the cost of the project.

The WSB contractor's February 2013 monthly update to its schedule estimate did not fully reflect the characteristics of a high-quality, reliable schedule estimate as established by best practices. Specifically, the contractor's schedule estimate was minimally well-constructed and partially met the other three characteristics of a reliable, high-quality schedule as measured against best practices—comprehensive, credible, and controlled. Table 6 shows the characteristics of a high-quality schedule estimate and corresponding best practices. Appendix IV summarizes our assessment of how well the WSB contractor's February 2013 schedule estimate met the characteristics of a high-quality estimate.

Our assessment of the WSB contractor's February 2013 schedule estimate included the following observations:

Comprehensive. The estimate was partially comprehensive in that it captured and established the durations of contractor and government activities to complete the project but did not capture the remaining detailed work to be performed by the construction subcontractor.
Specifically, it reduced the subcontractor's 3,851 activities to complete its portion of the work to one placeholder activity. According to the NNSA federal project director, the WSB contractor reduced the subcontractor's activities to a placeholder because the subcontractor submitted unreliable schedules with repeated changes in the estimated completion date for its portion of work.

Well-constructed. The estimate was minimally well-constructed in that it sequenced activities in ways that can obscure a schedule's earliest completion date. In addition, the sequencing of activities included "merge points"—the convergence of many parallel activities into a single successor activity, which decreased the probability of successor activities starting on time. For example, performance of an assessment of readiness to operate the WSB was preceded by 212 activities. NNSA officials explained that the merge points resulted from the need to complete activities in parallel to meet requirements set forth in DOE's project management order.

Credible. The estimate was partially credible in that the WSB contractor conducted a schedule risk analysis to determine the amount of schedule contingency—a reserve of extra time to account for risks and ensure completion of the project on time. However, a DOE review conducted prior to approval of an increase in the project's cost and a delay in the start of operations found that the results of the contractor's analysis were unreliable—for example, because project team members were not consulted regarding risk inputs. As a result, the schedule risk analysis did not clearly support the 12 months of schedule contingency included in the approved cost increase and schedule delay.

Controlled.
The estimate was partially controlled in that, according to project officials, the schedule was updated weekly and used to measure performance, but no narrative accompanied weekly updates to provide decision makers with a log of changes and their effect, if any, on the schedule time frame. In addition, project officials did not provide documentation enabling the schedule to be validated, such as documentation describing the sequencing of activities or the assumptions used in developing the schedule.

The NNSA federal project director and the contractor's project leader said that the contractor had begun to correct problems in the contractor's schedule estimate—for example, by replacing the placeholder for the subcontractor's activities with a schedule of more detailed activities independently developed by the contractor. However, delays on the project continued after the contractor began correcting the problems. Notably, according to DOE's October 2013 monthly report on the WSB, continuing delays in completion of the construction subcontract—one of the key drivers NNSA identified for the WSB cost increase—had already used up about 10 of the 12 months of schedule contingency, placing the project's completion date in jeopardy.

NNSA has identified drivers of the close to $3 billion increase in the projected cost to complete the MOX facility and WSB and has taken steps to hold the MOX and WSB contractors accountable for their role in the cost increases by withholding and taking back fees. However, the various drivers identified by NNSA, such as DOE's approval of the cost and schedule estimate for the MOX facility before design was complete, do not provide the level of detail that can be found in a root cause analysis.
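The earlier observation that merge points decrease the probability of successor activities starting on time can be made concrete with a short probability calculation. Assuming, purely for illustration, that each of the 212 predecessor activities independently has a 99 percent chance of finishing on time (both the independence and the 99 percent figure are simplifying assumptions, not data from the WSB schedule):

```python
# Why merge points are risky: a successor activity cannot start until
# every one of its parallel predecessors finishes. Even if each
# predecessor is individually very likely to finish on time, the joint
# probability decays geometrically with the number of merging paths.

def on_time_probability(p_each: float, n_predecessors: int) -> float:
    """Probability that all n independent predecessors finish on time."""
    return p_each ** n_predecessors

for n in (1, 10, 50, 212):
    print(f"{n:>3} predecessors: {on_time_probability(0.99, n):.3f}")
```

Under these assumptions, the joint on-time probability at a 212-activity merge point falls to roughly 12 percent, which is why schedule best practices flag large merge points as a risk to a successor activity's start date.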
In addition, DOE’s project management order requires that lessons learned be captured throughout a project but does not include a requirement for a root cause analysis when a project exceeds its cost estimate, even when a project exceeds its cost estimate by billions of dollars. The decision whether to conduct such an analysis is instead made on a case-by-case basis. Because NNSA has not conducted a root cause analysis to identify the underlying causes of the cost increases for the MOX facility and WSB, it cannot provide assurance that it has correctly identified the underlying causes to ensure that they will not lead to further cost increases as the projects move forward. Further, without a root cause analysis, NNSA’s ability to identify recommended solutions and lessons learned that could be applied to other projects is lessened. Conducting a root cause analysis of the cost increases for the MOX facility and WSB could help NNSA address its long-standing difficulties in completing projects within cost and on schedule, which has led to NNSA’s project management remaining on GAO’s list of areas at high risk of fraud, waste, abuse, and mismanagement. NNSA has drafted a life-cycle cost estimate of $24.2 billion for the Plutonium Disposition program—an important step toward presenting the full cost of NNSA’s current strategy to dispose of surplus weapons-grade plutonium as MOX fuel. A cost estimate that presents the full cost of NNSA’s current plutonium disposition strategy is essential to inform NNSA’s ongoing evaluation of alternative plutonium disposition strategies and provide Congress with a complete picture of the cost of the program. NNSA developed its life-cycle cost estimate even though neither DOE nor NNSA required the estimate. 
In particular, DOE’s project management order does not explicitly require that life-cycle cost estimates be developed for programs like the Plutonium Disposition program that include both construction projects and other efforts and activities not related to construction, such as producing plutonium feedstock for the MOX facility. In the absence of such a requirement, NNSA followed several of the 12 key steps described in the GAO Cost Estimating and Assessment Guide for developing high-quality, reliable cost estimates, but it did not follow other key steps. Because NNSA did not follow all of the steps, the life-cycle estimate for the Plutonium Disposition program is not reliable. Similarly, the contractors’ cost and schedule estimates for the MOX facility and WSB did not meet all best practices compiled in GAO’s guides for preparing high-quality, reliable cost and schedule estimates. Not meeting these best practices increased the risk of further cost increases and delays for the projects and, because the projects are components of NNSA’s life-cycle cost estimate, for the overall Plutonium Disposition program. We are making six recommendations in this report to the Secretary of Energy. To identify lessons learned from and provide assurance of preventing recurrence of cost increases for the MOX facility and WSB, and to develop reliable cost estimates for the Plutonium Disposition program, we recommend that the Secretary of Energy direct the DOE and NNSA Offices of Acquisition and Project Management and the NNSA office responsible for managing the Plutonium Disposition program, as appropriate, to take the following four actions: Conduct an analysis of the root causes of the cost increases for the MOX facility and WSB, such as the causes of the design changes that led to cost increases, and identify and prioritize recommended solutions. 
Revise and update the program’s life-cycle cost estimate following the 12 key steps described in the GAO Cost Estimating and Assessment Guide for developing high-quality cost estimates, such as conducting an independent cost estimate to provide an objective and unbiased assessment of whether the estimate can be achieved. Ensure that the MOX contractor revises its proposal for increasing the cost of the MOX facility to meet all best practices for a high-quality, reliable cost estimate—for example, by cross-checking major cost elements to determine whether alternative estimating methods produce similar results. Ensure that the approved cost increase for the WSB is based on a schedule that the contractor has revised to meet all best practices for a high-quality, reliable schedule estimate, such as reflecting all activities (both government and contractor) needed to complete construction. To ensure that future DOE projects benefit from lessons learned that reflect the underlying causes of cost increases or schedule delays experienced by other projects, and that Congress and DOE have life- cycle cost estimates for DOE programs that include individual construction projects, we further recommend that the Secretary of Energy take the following two actions to revise DOE’s project management order or otherwise implement a departmentwide requirement: Require a root cause analysis of all projects that experience cost increases or schedule delays exceeding a certain threshold established by DOE. Require life-cycle cost estimates covering the full cost of programs that include both construction projects and other efforts and activities not related to construction. We provided a draft of this product to DOE for comment. In written comments, reproduced in appendix V, NNSA stated that the agency and DOE generally agreed with our recommendations. In particular, NNSA concurred with four of our six recommendations and partially concurred with the other two. 
NNSA described actions it planned to take to implement the recommendations with which it concurred and time frames for taking these actions. NNSA also provided technical comments, which we incorporated into the report as appropriate.

We are pleased that NNSA concurred with our recommendation to conduct an analysis of the root causes of the cost increases for the MOX facility and WSB and stated that it is planning to conduct such an analysis, a plan that had not been mentioned during the course of our review. NNSA also concurred with our recommendation to revise and update the Plutonium Disposition program's life-cycle cost estimate and stated that it would do so after a decision was made on the path forward for the program. The path forward could involve the use of alternative strategies to dispose of surplus weapons-grade plutonium.

NNSA also concurred with our recommendation to ensure that the MOX contractor revises its proposal for increasing the cost of the MOX facility to meet all best practices for a high-quality, reliable cost estimate. In its comment letter, NNSA stated that it is working with the contractor to ensure that the cost estimating processes and procedures are updated such that the best practices are met.

In addition, NNSA concurred with our recommendation to ensure that the approved cost increase for the WSB is based on a schedule that the contractor has revised to meet all best practices for a high-quality, reliable schedule estimate. In its comment letter, NNSA stated that it has revised the schedule since we reviewed it and that it now reflects all activities needed to complete construction. We did not review the update to the WSB contractor's schedule to confirm that it captured all activities to complete construction, which is one of the best practices associated with the characteristics of a high-quality schedule. Moreover, as detailed in appendix IV, the schedule we reviewed only partially or minimally met 7 of the other 9 best practices.
To fully implement our recommendation, NNSA would need to ensure that the contractor has revised its schedule to meet all best practices for a high-quality, reliable schedule estimate. In its comment letter, NNSA stated that during the next project review, which is expected to occur by December 31, 2014, NNSA will review the schedule against best practices.

NNSA partially concurred with our fifth recommendation that DOE require a root cause analysis of all projects that experience cost increases or schedule delays exceeding a certain threshold established by the department. In its comment letter, NNSA stated that DOE program offices currently perform tailored root cause analyses as part of the baseline change proposal process outlined in the department's project management order for increasing a project's cost and schedule estimates. NNSA stated that, as a result, the department does not believe that an update to the project management order is required. NNSA further stated that the department will review the lessons learned from NNSA's root cause analyses for the MOX and WSB projects to see what best practices may be of benefit to other projects.

However, as we stated in the report, DOE's project management order does not include a requirement for a root cause analysis of projects experiencing significant cost increases or schedule delays, and NNSA officials said that they decide on a case-by-case basis whether to conduct a root cause analysis. Moreover, the order does not define what a root cause analysis is, how or when a root cause analysis should be conducted, or what is meant by a tailored analysis. In addition, NNSA's written comments did not provide information on the conditions that would trigger a root cause analysis.
Leaving root cause analyses to an informal and undefined process within DOE program offices could result in such analyses not being conducted, not being conducted consistently, or not accurately identifying underlying causes of cost increases in order to identify and implement corrective measures and apply lessons learned to other DOE projects. We continue to believe that a root cause analysis should be conducted for all projects that experience cost increases or schedule delays above a threshold established by the department. We note that our recommendation is consistent with a requirement in the Weapon Systems Acquisition Reform Act of 2009; under the act, the Department of Defense must perform a root cause analysis of a cost increase that exceeds a certain threshold.

NNSA partially concurred with our final recommendation that DOE require life-cycle cost estimates covering the full cost of programs that include both construction projects and other efforts and activities not related to construction. In its comment letter, NNSA stated that the department's project management order requires a comprehensive life-cycle cost analysis as part of the alternative selection process and that no further update to the order is required to address this recommendation.

The intent of our recommendation goes beyond that of preparing a life-cycle cost estimate at the stage of selecting an alternative for a new capital asset project. Instead, the recommendation applies to departmental programs that include capital asset projects to meet the overall program need. As we stated in the report, NNSA did not follow all key steps for developing high-quality cost estimates in developing its draft April 2013 life-cycle cost estimate for the Plutonium Disposition program, which currently includes the MOX facility and WSB capital asset projects, in part because it did not have a requirement to develop it.
NNSA’s response to our recommendation suggests that the life-cycle cost estimates for the MOX and WSB projects that were required to be prepared years ago, when the projects were selected from among other alternatives, are the only life-cycle cost estimates needed to manage the Plutonium Disposition program. Furthermore, NNSA’s response contradicts the fact that it concurred with our recommendation to revise and update the life- cycle cost estimate for the overall Plutonium Disposition program in accordance with cost estimating best practices. We continue to believe that our recommendation that the department require life-cycle cost estimates covering the full cost of programs that include construction projects should be implemented. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 7 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Secretary of Energy, the NNSA Administrator, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VI. To assess drivers of the construction cost increases for the Mixed Oxide (MOX) Fuel Fabrication Facility and Waste Solidification Building (WSB) that the National Nuclear Security Administration (NNSA) identified, we reviewed the Department of Energy’s (DOE) budget request for NNSA for fiscal year 2014, which provided a summary of the cost drivers for both projects. 
To assess cost drivers in further detail, we reviewed the MOX contractor's September 2012 proposal for increasing the project's cost, which discussed drivers from the contractor's perspective. We also reviewed DOE's December 2012 document approving an increase in the estimated cost of the WSB and a delay in the start of operations, which summarized cost drivers and their impact on the project's cost and schedule. We visited the Savannah River Site to observe construction progress for both projects and interviewed NNSA and contractor officials responsible for managing the projects. We also interviewed officials from the NNSA Office of Fissile Materials Disposition, the NNSA Office of Acquisition and Project Management, and the DOE Office of Acquisition and Project Management.

Separately, to understand how, if at all, cost drivers for the MOX facility were related to Nuclear Regulatory Commission (NRC) regulation and licensing of the construction and operation of the facility, we reviewed NRC construction inspection reports and related documents, and we interviewed NRC officials responsible for overseeing the facility's construction. To understand the components of cost growth for the MOX facility, which represented most of the Plutonium Disposition program's construction cost increase, we also analyzed the MOX contractor's earned value management (EVM) system, which the contractor used to track and report on cost and schedule performance, including data from the EVM system on cumulative cost and schedule variance trends from July 2011 through April 2012 and the contractor's variance report for April 2012.

To determine the extent to which NNSA analyzed underlying causes of the cost increases, we reviewed documents providing context for cost drivers.
The documents we reviewed included NNSA Office of Acquisition and Project Management project review reports and monthly status reports; DOE Office of Acquisition and Project Management monthly status reports; DOE documents related to approval of the previous cost and schedule estimates for the MOX facility and WSB in April 2007 and December 2008, respectively; and documents related to specific cost drivers identified by NNSA, such as the MOX contractor’s October 2006 report on construction markets and DOE reports related to its suspension of the WSB contractor’s system for tracking and reporting cost and schedule performance in November 2012. We also interviewed NNSA officials to determine the extent to which they had conducted or planned any analyses to identify underlying causes of cost increases for the Plutonium Disposition program’s construction projects. To determine steps NNSA took to hold contractors accountable for their role in the cost increases for the Plutonium Disposition program’s construction projects, we reviewed the contracts for the MOX facility and WSB, fees specified under the contracts, and NNSA’s fee evaluations and other documentation supporting its fee determinations. We also interviewed NNSA contracting officers who were responsible for administering the MOX and WSB contracts regarding the terms of the contracts, fees specified under the contracts, and actions NNSA took or planned to take to hold contractors accountable for their role in the cost increases. We obtained NNSA data on fees it paid to and withheld from the contractors, and we assessed the reliability of the data by checking for obvious errors in accuracy and completeness; comparing the data with other sources of information, such as NNSA’s fee determinations; and interviewing NNSA contracting officers who had knowledge of the data. We determined that NNSA’s data on fees were sufficiently reliable for reporting on the fees paid to and withheld from the contractors. 
To assess the extent to which NNSA’s most recent estimates of the Plutonium Disposition program’s life-cycle cost and of the cost and schedule for completing the program’s construction projects met best practices we have compiled in guides identifying the characteristics of high-quality, reliable cost and schedule estimates, we tailored our methodology to the differing stages of NNSA’s development and approval of each estimate:

NNSA’s life-cycle cost estimate for the Plutonium Disposition program. Because NNSA had not finalized a life-cycle cost estimate, we assessed NNSA’s most recent available estimate—spreadsheets dated April 2013 representing NNSA’s draft life-cycle cost estimate. In particular, we assessed the process NNSA used to develop the estimate against the 12 key steps described in the GAO Cost Estimating and Assessment Guide that should result in a high-quality, reliable cost estimate. To provide information on NNSA’s process, NNSA officials responsible for developing the estimate filled out a data collection instrument we developed. The data collection instrument summarized each of the 12 key steps and provided space for NNSA officials to describe actions they had taken to meet the criteria for each step. To review the information provided by NNSA, we checked NNSA’s April 2013 estimate for obvious errors in accuracy and completeness and compared it with previous versions of the life-cycle cost estimate provided by NNSA. In addition, we interviewed NNSA officials to determine what requirements, if any, they followed for developing the estimate, their purpose for developing it, and their plans for presenting it for management approval. Finally, we interviewed NNSA officials from the Office of Analysis and Evaluation, which the Plutonium Disposition program had tasked with conducting an independent assessment of the MOX facility’s operating costs.

NNSA’s estimate to complete the MOX facility.
Because NNSA had not approved a revised cost and schedule estimate for the MOX facility, we assessed the MOX contractor’s September 2012 proposal for increasing the project’s cost, which NNSA had directed the MOX contractor to use as a provisional baseline for purposes of monthly reporting. We compared data presented in various tables of the proposal for consistency and reviewed additional documents, including the technical baseline providing a detailed description of the MOX facility. We provided a draft of our assessment to NNSA and revised the draft, as appropriate, after discussing our assessment with NNSA program officials and the contractor.

NNSA’s estimate to complete the WSB. We assessed the WSB schedule estimate that the cost increase for the project approved in December 2012 was based on because, as described in the GAO Schedule Assessment Guide, a reliable schedule can contribute to an understanding of the cost impact if a project does not finish on time. Specifically, we compared the contractor’s February 2013 monthly update to its schedule estimate, which was the most recent available update when we conducted our analysis, with the 10 best practices associated with the characteristics of a high-quality schedule. As part of our assessment, we reviewed documents related to the project’s schedule, including NNSA’s project execution plan for the WSB, the project’s work breakdown structure, and the project’s February 2013 update to the document showing the longest path to project completion. In addition, we interviewed the NNSA federal project director for the WSB and the WSB contractor’s project leader and scheduler. We provided a draft of our assessment to NNSA and revised the draft, as appropriate, after discussing our assessment with NNSA program officials and the contractor.

We conducted this performance audit from November 2012 to February 2014 in accordance with generally accepted government auditing standards.
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Detailed assessment of the life-cycle cost estimate against the 12 key steps:
- Partially met. NNSA assigned a team to develop and update the estimate but did not have a written plan for developing it.
- Partially met. Work breakdown structures to define in detail the work necessary to accomplish objectives were developed for the MOX and WSB projects and other components of the program, but NNSA had not formalized a program-level work breakdown structure.
- Substantially met. NNSA officials described the purpose as supporting annual budget requests, which include requirements that NNSA (1) present the full life-cycle cost of capital projects, such as the MOX facility, and (2) estimate expenditures for the fiscal year with respect to which the budget is submitted and at least the four succeeding fiscal years.
- Substantially met. NNSA developed a program requirements document to identify the scope, functions, and requirements of the Plutonium Disposition program. NNSA documented performance characteristics for program components in contracts, technical baselines, and execution plans.
- Partially met. NNSA identified ground rules and assumptions, but NNSA officials said that assumptions for the program change frequently, hindering development of a life-cycle cost estimate.
- Substantially met. NNSA collected data at the project level, where, according to NNSA, data were documented in contractor systems and estimates were developed by teams of knowledgeable staff using historical information, current cost and pricing information, engineering and vendor quotes, cost guides, and current material and labor costs.
- Minimally met. NNSA documented the estimate on spreadsheets, but it did not develop a single document to describe data sources and steps taken in developing the estimate so that it could be replicated by someone other than those who prepared it.
- Not met. NNSA considered the estimate to be draft and predecisional, and NNSA officials said they did not have plans to present an estimate to management for approval until NNSA completes its reevaluation of its strategy for disposing of surplus weapons-grade plutonium.
- Partially met. NNSA developed a point estimate, but it did not use a program-level work breakdown structure to do so because it had not formalized such a structure.
- Partially met. NNSA updated the estimate periodically to include actual costs and changes to program and project requirements, but it did not clearly document how changes affected the estimate.
- Not met. NNSA did not conduct an independent cost estimate for the overall program’s life-cycle cost estimate, and it had not completed independent cost estimates for the program’s two construction projects.
- Not met. NNSA did not conduct a formal sensitivity analysis at the program level.
- Not met. NNSA did not conduct a risk and uncertainty analysis at the program level.

Detailed assessment of the MOX cost estimate against best practices:
- Best practice: The cost estimate includes all life-cycle costs. Assessment: Met. The estimate covered construction and startup costs; at NNSA’s direction, the estimate excluded operation and maintenance costs.
- Best practice: The cost estimate completely defines the program, reflects the current schedule, and is technically reasonable. Assessment: Substantially met. The estimate was based on NNSA’s statement of work and the contractor’s technical baseline for the original scope of the MOX facility.
- Best practice: The cost estimate work breakdown structure is product-oriented, traceable to the statement of work/objective, and at an appropriate level of detail to ensure that cost elements are neither omitted nor double-counted. Assessment: Partially met.
The work breakdown structure clearly outlined the end product and major work of the project, but some cost elements were missing from the work breakdown structure.
- Best practice: The estimate documents all cost-influencing ground rules and assumptions. Assessment: Partially met. The estimate documented that it was based on a profile of NNSA’s projected annual funding to complete the project but did not provide justifications for some assumptions, such as not more than a set amount of work being nonstandard.
- Best practice: The documentation captures the source data used, the reliability of the data, and how the data were normalized. Assessment: Partially met. The estimate was based on actual costs through May 2012 and used a database of labor and other costs, but it did not state whether or how all data had been normalized to ensure data comparability.
- Best practice: The documentation describes in sufficient detail the calculations performed and the estimating methodology used to derive each element’s cost. Assessment: Met. The estimate used a combination of expert opinion and extrapolation from actual data to develop estimates for and sum up individual cost elements of the work breakdown structure.
- Best practice: The documentation describes, step by step, how the estimate was developed so that a cost analyst unfamiliar with the program could understand what was done and replicate it. Assessment: Partially met. The estimate used quantities of materials and labor hours to develop estimates for individual cost elements but did not document how these quantities were estimated.
- Best practice: The documentation discusses the technical baseline description and the data in the baseline is consistent with the estimate. Assessment: Partially met. The estimate agreed with NNSA’s statement of work and the contractor’s technical baseline for the original scope of the MOX facility, but the technical baseline did not cover the addition of capability to supply plutonium feedstock.
- Best practice: The documentation provides evidence that the cost estimate was reviewed and accepted by management. Assessment: Partially met. DOE began a review of the proposed estimate but did not approve it.
- Best practice: The cost estimate results are unbiased, not overly conservative or optimistic, and based on an assessment of most likely costs. Assessment: Minimally met. The estimate was higher than needed to achieve an 85 percent confidence level—the level directed by NNSA—that the final cost would be less than the estimate.
- Best practice: The estimate has been adjusted properly for inflation. Assessment: Substantially met. The estimate appeared to adjust cost elements for inflation, but adjustments were not well-documented.
- Best practice: The estimate contains few, if any, minor mistakes. Assessment: Met. The estimate contained few minor mistakes, and calculations within the estimate were internally consistent.
- Best practice: The cost estimate is regularly updated to reflect significant changes in the program so that it always reflects current status. Assessment: Partially met. The estimate was based on actual costs through May 2012 and did not reflect updated costs from the contractor’s system for tracking and reporting cost and schedule performance.
- Best practice: Variances between planned and actual costs are documented, explained, and reviewed. Assessment: Minimally met. The estimate explained variances between planned and actual costs at a high level but not at the cost element level.
- Best practice: The estimate is based on a historical record of cost estimating and actual experiences from other comparable programs. Assessment: Partially met. The estimate did not explain to what extent it was based on historical data from other similar programs or facilities.
- Best practice: The estimating technique for each cost element was used appropriately. Assessment: Substantially met. The estimating method used—developing the estimate at the lowest level of the work breakdown structure, one piece at a time, with the sum of the pieces becoming the estimate—was appropriate for a project under way.
- Best practice: The cost estimate includes a sensitivity analysis that identifies a range of possible costs based on varying major assumptions, parameters, and data inputs.
Not met. The estimate did not include a sensitivity analysis.
- Best practice: A risk and uncertainty analysis was conducted that quantified the imperfectly understood risks and identified the effects of changing key cost driver assumptions and factors. Assessment: Partially met. The estimate included a risk and uncertainty analysis but did not properly conduct or clearly document all steps in the analysis.
- Best practice: Major cost elements were cross-checked to see whether results were similar. Assessment: Not met. The estimate provided no evidence that major cost elements were cross-checked.
- Best practice: An independent cost estimate was conducted by a group outside the acquiring organization to determine whether other estimating methods produce similar results. Assessment: Not met. DOE halted its independent cost estimate of the contractor’s proposed estimate as part of DOE’s decision to reevaluate its strategy for disposing of surplus weapons-grade plutonium.

Detailed assessment of the WSB schedule estimate against the 10 best practices:
- Minimally met. The schedule estimate’s 2,429 activities to complete the project included one summary activity in place of the construction subcontractor’s 3,851 activities and, therefore, did not capture the remaining detailed work to be performed by the subcontractor.
- Partially met. The schedule estimate assigned resources, such as labor and materials, to only about half of the remaining 2,429 activities.
- Substantially met. The schedule estimate included activity durations that were generally short enough to be consistent with the needs of effective planning.
- Minimally met. The schedule estimate sequenced activities in ways that decreased the probability of activities starting on time and contained activities that were not properly tied with the start or end date of other activities, potentially obscuring the critical path determining the project’s earliest completion date.
- Partially met. Changes to the critical path were evaluated monthly and tracked in monthly status reports, but constraints in scheduled dates of certain activities convoluted the critical path.
- Ensuring reasonable total float: Minimally met. The schedule estimate included high total float values—the amount of time by which an activity can slip without affecting a completion date—potentially resulting in an inaccurate assessment of the project’s completion date.
- Substantially met. The schedule estimate was traceable horizontally (i.e., across sequenced activities) and vertically (i.e., between activities and subactivities).
- Minimally met. The contractor conducted a schedule risk analysis, but the results of the analysis were unreliable for determining the likelihood of the project’s completion date and did not align with DOE’s revised cost and schedule estimate.
- Partially met. According to project officials, the schedule was updated weekly, but no narrative accompanied the weekly updates.
- Maintaining a baseline schedule: Minimally met. Project officials stated that they used the schedule to measure performance, but they did not provide thorough documentation enabling the schedule to be validated, such as a narrative providing a log of changes and their effects.

In addition to the individual named above, Daniel Feehan, Assistant Director; Remmie Arnold; Antoinette Capaccio; Juaná S. Collymore; Joseph Cook; Tisha Derricotte; Emile Ettedgui; Cristian Ion; Alison O’Neill; Cheryl Peterson; and Karen Richey made key contributions to this report.
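Two concepts recur in the schedule assessment above: the critical path (the longest chain of dependent activities, which sets the project’s earliest completion date) and total float (the time an activity can slip without delaying completion). A minimal sketch of how both are computed with a forward and backward pass; the four-activity network and durations are hypothetical, not the WSB schedule:

```python
# Minimal critical-path-method (CPM) sketch on a hypothetical network;
# activity names and durations are illustrative only.
durations = {"A": 3, "B": 5, "C": 2, "D": 4}
preds = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}
order = ["A", "B", "C", "D"]  # topological order

# Forward pass: earliest start and finish for each activity.
es, ef = {}, {}
for a in order:
    es[a] = max((ef[p] for p in preds[a]), default=0)
    ef[a] = es[a] + durations[a]
project_finish = max(ef.values())

# Backward pass: latest start and finish that still meet the finish date.
succs = {a: [b for b in order if a in preds[b]] for a in order}
ls, lf = {}, {}
for a in reversed(order):
    lf[a] = min((ls[s] for s in succs[a]), default=project_finish)
    ls[a] = lf[a] - durations[a]

# Total float: slip available without delaying the project; zero-float
# activities form the critical path.
total_float = {a: ls[a] - es[a] for a in order}
critical_path = [a for a in order if total_float[a] == 0]

print(project_finish)   # 12
print(total_float)      # {'A': 0, 'B': 0, 'C': 3, 'D': 0}
print(critical_path)    # ['A', 'B', 'D']
```

In a real schedule, this is what the best practices on maintaining a valid critical path and ensuring reasonable total float examine: improperly linked activities or uniformly high float values make both outputs unreliable.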
|
NNSA, a separately organized agency within DOE, manages the Plutonium Disposition program to dispose of surplus weapons-grade plutonium by burning it as MOX fuel—a mixture of plutonium and uranium oxides—in specially modified commercial nuclear reactors. In 2012, DOE forecasted cost increases of close to $3 billion over the previous estimates for the program's two construction projects, the MOX facility and the WSB for disposing of waste from the MOX facility. GAO was asked to review these cost increases and the life-cycle cost estimate. This report examines: (1) drivers NNSA identified for the cost increases; (2) the extent to which NNSA analyzed underlying causes of the cost increases; (3) steps NNSA took to hold construction contractors accountable for their role, if any, in the cost increases; and (4) the extent to which NNSA's most recent estimates met cost- and schedule-estimating best practices. GAO reviewed NNSA's draft life-cycle cost estimate and contractor estimates of the MOX project's cost and WSB schedule, compared the estimates with cost- and schedule-estimating best practices, and interviewed DOE and NNSA officials. The Department of Energy's (DOE) National Nuclear Security Administration (NNSA) identified various drivers for the close to $3 billion increase in the estimated cost of the Plutonium Disposition program's two construction projects—the Mixed Oxide (MOX) Fuel Fabrication Facility and the Waste Solidification Building (WSB). These drivers included DOE's approval of the MOX facility's cost and schedule estimates before design was complete and schedule delays in construction of the WSB. According to NNSA, the cost of critical system components for the MOX facility averaged 60 percent higher than estimated as a result of approval of estimates before design was complete. 
NNSA has not analyzed the underlying, or root, causes of the Plutonium Disposition program construction cost increases to help identify lessons learned and help address the agency's difficulty in completing projects within cost and schedule, which has led to NNSA's management of major projects remaining on GAO's list of areas at high risk of fraud, waste, abuse, and mismanagement. DOE's project management order requires that lessons learned be captured throughout a project to, among other things, benefit future endeavors. NNSA officials said that, because the order does not require a root cause analysis of cost increases, NNSA decides on a case-by-case basis whether to conduct one. Unlike a root cause analysis, the cost drivers NNSA identified provided few details about why the drivers existed, such as DOE's reasons for approving the MOX facility's cost and schedule estimates before the design was complete. Without a root cause analysis, it is uncertain whether NNSA will be able to accurately identify underlying causes of the increases to identify and implement corrective measures and identify lessons learned to apply to other projects. After determining that the performance of the contractors for the MOX facility and WSB contributed to cost increases, NNSA took steps to hold the contractors accountable by withholding fees specified under the contracts. In particular, as of November 2013, NNSA withheld $45.1 million or close to one-third of the MOX contractor's fees, including fees tied to meeting the MOX project's cost and schedule estimates. In addition, NNSA withheld $7.7 million or about 40 percent of the WSB contractor's fees tied to various performance measures for the WSB, such as completing construction milestones. 
NNSA's most recent estimates for the Plutonium Disposition program did not fully reflect all the characteristics of reliable cost estimates (e.g., credible) and schedule estimates (e.g., well-constructed) as established by best practices for cost- and schedule-estimating, placing the program at risk of further cost increases. For example: (1) NNSA's draft April 2013 life-cycle cost estimate of $24.2 billion for the overall program was not credible because NNSA did not conduct an independent cost estimate to provide an unbiased test of whether the estimate was reasonable. (2) Because the MOX contractor's September 2012 proposal for increasing the cost of the MOX facility did not include a formal analysis to examine the effects of changing assumptions, it was minimally credible. (3) The WSB contractor's February 2013 monthly update to its schedule estimate was minimally well-constructed in that it contained activities that were not properly tied with the start or end date of other activities, which could potentially obscure the critical path determining the project's completion date. GAO is recommending, among other things, that DOE conduct a root cause analysis of the Plutonium Disposition program's cost increases and ensure that future estimates of the program's life-cycle cost and cost and schedule for the program's construction projects meet all best practices for reliable estimates. DOE generally agreed with GAO's recommendations.
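The “formal analysis to examine the effects of changing assumptions” that GAO found missing from the MOX proposal is a sensitivity analysis: recomputing the total estimate while varying one major assumption at a time to see which inputs drive the result. A minimal sketch; the cost model, element names, and values are hypothetical, not figures from the proposal:

```python
# Sensitivity analysis sketch; every input below is hypothetical.
base = {"labor_hours": 1_000_000, "labor_rate": 95.0,  # $/hour
        "materials": 250.0e6, "escalation": 0.03}

def total_cost(a):
    # Toy cost model: labor plus materials, with escalation applied.
    direct = a["labor_hours"] * a["labor_rate"] + a["materials"]
    return direct * (1 + a["escalation"])

baseline = total_cost(base)  # about $355.35 million

# Vary each assumption by +/-20 percent and record the swing in the total.
swings = {}
for key in base:
    lo = dict(base, **{key: base[key] * 0.8})
    hi = dict(base, **{key: base[key] * 1.2})
    swings[key] = (total_cost(lo), total_cost(hi))

# Assumptions with the widest swings are the estimate's key cost drivers.
ranked = sorted(swings, key=lambda k: swings[k][1] - swings[k][0],
                reverse=True)
print(ranked[0])  # "materials" has the widest swing in this toy model
```

The output is a ranked range of possible costs per assumption, which is what the best practice asks an estimate to document.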
|
NASA and its international partners—Japan, Canada, the European Space Agency (ESA), and Russia—are building the ISS as a permanently orbiting laboratory to conduct materials and life sciences research under nearly weightless conditions. Each partner is providing station hardware and crew members and is expected to share operating costs and use of the station. The NASA Space Station Program Manager is responsible for the cost, schedule, and technical performance of the total program. The Boeing Corporation, the station’s prime contractor, is responsible for ISS integration and assembly. As of June 30, 1997, the prime contractor reported that over 200,000 pounds of its station hardware was being built or had been completed. According to NASA, by the end of fiscal year 1998, hardware for the first six flights will be at Kennedy Space Center for launch processing. In our July 1996 report and subsequent testimony, we noted that the cost and schedule performance of the space station’s prime contractor had deteriorated and that the station’s near-term funding included only limited financial reserves. We also identified an emerging risk to the program: the indications of problems in the Russian government’s ability to meet its commitment to furnish a Service Module providing ISS power, control, and habitation capability. For several years, the space station program has been subject to a $2.1 billion annual funding limitation and a $17.4 billion overall funding limitation through the completion of assembly, which until recently had been scheduled for June 2002. According to NASA, these funding limitations, or caps, came out of the 1993 station redesign. Previous redesigns had been largely financially driven and the caps were intended to stabilize the design and ensure that it could be pursued. 
However, the caps are not legislatively mandated, although references to them in congressional proceedings and reports indicate that NASA was expected to build the space station within these limits. When the caps were first imposed, the program had about $3 billion in financial reserves. In our July 1996 report, we concluded that, if program costs continued to increase, threats to financial reserves worsened, and the Russian government failed to meet its commitment in a timely manner, NASA would either have to exceed its funding limitation or defer or rephase activities, which could delay the space station’s schedule and would likely increase its overall cost. In June 1997 testimony, we said that, if further cost and schedule problems materialized, a congressional review of the program would be needed to determine the future scope and cost level for a station program that merits continued U.S. government support. Over the past several months, NASA has acknowledged that the potential for cost growth in the program has increased. As a partner, Russia committed to making a variety of contributions to the ISS. These contributions include (1) the Service Module to provide crew habitation during assembly; (2) the Science Power Platform to help maintain the station’s orientation; (3) launch services to reboost and resupply the station, including the provision of propellant; and (4) Soyuz spacecraft to provide crew return capability during station assembly. In late 1995, NASA became concerned about Russia’s ability to provide steady and adequate funding for its commitments. According to the NASA Administrator and station program officials, the Russian government said repeatedly that the problem would be resolved, despite mounting evidence to the contrary. Finally, in the fall of 1996, Russia formally notified NASA that funding difficulties would delay the completion of the Service Module, which is a critical component for early assembly. 
Subsequently, NASA designed a three-step recovery plan. Step 1 focuses on adjusting the station schedule for an 8-month delay in the availability of the Service Module and developing temporary essential capabilities for the station in case the Service Module is further delayed by up to 1 year. Major activities in this phase include delaying the launch of station components that are to precede the Service Module into orbit and building an Interim Control Module to temporarily replace the Service Module’s propulsion capability. Step 1 is underway; the new or modified hardware being developed will be completed even if Russia maintains the Service Module’s revised schedule and delivers it on time. NASA officials told us that Russia has resumed its financial commitment, the Service Module assembly has restarted, and significant progress is being made. Step 2 is NASA’s contingency plan for dealing with any additional delays or the Russian government’s failure to eventually deliver the Service Module. This phase could result in permanently replacing the Service Module’s power, control, and habitation capabilities. NASA will decide later this fall on whether to begin step 2. Under step 3 of NASA’s plan, the United States and other international partners would have to pick up the remaining responsibilities the Russian government would have had, such as station resupply and reboost missions and crew rescue during assembly. A decision on step 3 is planned for sometime next year, at the earliest. In addition to their effects on space station development activities, these recovery plan steps place additional requirements on the space shuttle program. Under the plan, the space shuttle may be needed to launch and deliver the Interim Control Module and perform station resupply missions now expected to be done by Russia. 
Although the full impact of the recovery plan on the space shuttle program is not yet known, the plan has already resulted in the addition of two shuttle flights during the station’s assembly. The prime contractor’s cost and schedule performance on the space station, which showed signs of deterioration last year, has continued to decline virtually unabated. Since April 1996, the cost overrun has quadrupled, and the schedule slippage has increased by more than 50 percent. Figure 1 shows the cost and schedule variances from January 1995 to July 1997. Cost variances are the differences between actual costs to complete specific work and the amounts budgeted for that work. Schedule variances are the dollar values of the differences between the budgeted cost of work planned and work completed. Cost and schedule variances are not additive, but negative schedule variances can become cost variances, since additional work, in the form of overtime, is often required to get back on schedule. [Figure 1: prime contract cost and schedule variances, in millions of dollars, for each quarter from 1/95 through 7/97.] Between January 1995 and July 1997, the prime contract moved from a cost underrun of $27 million to a cost overrun of $355 million. During that same period, the schedule slippage increased from a value of $43 million to $135 million. So far, the prime contractor has not been able to stop or significantly reverse the continuing decline. In July 1996, independent estimates of the space station’s prime contract cost overrun at completion ranged from $240 million to $372 million. Since then, these estimates have steadily increased, and by July 1997 they ranged from $514 million to $610 million. According to program officials, some financial reserves will be used to help cover the currently projected overrun. Delays in releasing engineering drawings, late delivery of parts, rework, subcontractor problems, and mistakes have contributed to cost overruns.
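These variance definitions are the standard earned value management relationships: cost variance is earned value (the budgeted cost of work performed) minus actual cost, and schedule variance is earned value minus planned value (the budgeted cost of work scheduled). A short sketch; the underlying work values are hypothetical, chosen only so the resulting variances match the July 1997 figures cited above:

```python
# Earned value management (EVM) variance sketch; the dollar inputs are
# illustrative, not the actual prime contract data.
bcwp = 950.0   # earned value: budgeted cost of work performed ($M)
acwp = 1305.0  # actual cost of work performed ($M)
bcws = 1085.0  # planned value: budgeted cost of work scheduled ($M)

cost_variance = bcwp - acwp       # negative means a cost overrun
schedule_variance = bcwp - bcws   # negative means behind schedule

print(cost_variance)      # -355.0, i.e., a $355 million overrun
print(schedule_variance)  # -135.0, i.e., $135 million of work behind plan
```

As the text notes, the two figures are not additive: a negative schedule variance measures unperformed work, which often later converts into additional cost (overtime) rather than summing with the cost variance.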
NASA’s concern about performance problems under the prime contract is evidenced by its recent incentive and award fee actions. In March 1997, NASA directed Boeing to begin adjusting its biweekly incentive fee accruals and billings based on a higher cost estimate at completion than Boeing was officially reporting. On the basis of an internal review, Boeing subsequently increased its estimate of cost overrun at completion from $278 million to $600 million. The increase in Boeing’s estimate potentially reduces its incentive award by about $48 million over the remainder of the contract period. Boeing was also eligible for an award fee of nearly $34 million for the 6-month period ending in March 1997. However, citing significant problems in program planning, cost estimating, and hardware manufacturing, NASA concluded that Boeing’s performance did not warrant an award fee. NASA also directed Boeing to deduct almost $10 million from its next bill to refund the provisional award fee already paid during the period. Boeing is implementing a corrective action plan for each identified weakness and has outlined a number of actions to improve the performance of the entire contractor team, including changing personnel, recruiting additional software engineers and managers, and committing funds to construct a software integration test facility. Boeing also presented a cost control strategy to NASA in July 1997. According to NASA officials, the strategy includes organizational streamlining and transferring some roles to NASA. Station officials assessed Boeing’s efforts to improve its performance as part of the midpoint review for the current evaluation period. They concluded that, while there was some improvement, it was insufficient to permit resumption of provisional award fee payments. When NASA redesigned the space station in 1993 and brought Russia into the program as a partner, the program had approximately $3 billion in financial reserves to cover development contingencies. 
Since then, the program reserves have been significantly depleted. In June 1997, the financial reserves available to the program were down to about $2.2 billion. NASA estimated that, by the end of fiscal year 1997, the remaining uncommitted reserves could be less than $1 billion. Financial reserves have been used to fund additional requirements, overruns, and other authorized changes. By June 1997, a station program analysis indicated that fiscal year 1997 reserves might not be sufficient to cover all known threats. More recently, station officials have estimated that a small reserve surplus is possible in fiscal year 1997, but concerns are growing regarding the adequacy of fiscal year 1998 reserves. NASA has already identified threats to financial reserves in future years that, if realized, would outstrip the remaining reserves. For example, program reserves have been identified to cover additional cost overruns; crew rescue vehicle acquisition; hardware costs, in the event that ongoing negotiations with partners are unsuccessful; and additional authorized technical changes. Thus, with up to 6 years remaining until on-orbit assembly of the station is completed, NASA has already identified actual and potential resource demands that exceed the station’s remaining financial reserves. Unless these demands lessen and are not replaced by other demands of equal or greater value, or NASA is able to find offsets and efficiencies of sufficient value to replenish the program’s reserves, the space station will require additional funding. NASA has been able to consistently report compliance with funding limitations and avoid exceeding its financial reserves, despite significant programmatic changes and impacts that have increased station costs. To enable it to do so, NASA has implemented or initiated a variety of actions, including those summarized below: The space station program is negotiating with ESA, Canada, and Brazil to provide station hardware. 
Under proposed offset arrangements, the ISS partners—ESA and Canada—would build hardware associated with the U.S. commitment in return for launch services or other considerations. Under a cooperative arrangement, Brazil would receive a small allocation of the station’s research capacity in return for any U.S. equipment it would agree to build. NASA estimates that $116 million in U.S. station development costs could be saved through these arrangements. Space station officials have scheduled a threat of $100 million against the program’s financial reserves in case the negotiations are unsuccessful. However, according to program officials, most of the negotiations are nearly completed. NASA dropped the centrifuge from the station budget and opened negotiations with the Japanese government to provide it. Also, the space station’s content at the assembly completion milestone was revised to exclude the centrifuge. This change enabled NASA to maintain the then-current June 2002 assembly completion milestone, even though the centrifuge and related equipment would not be put on the station until after that date. NASA transferred $462 million from its science funding to the space station development funding in fiscal years 1996 through 1998. NASA has scheduled the payback of $350 million—$112 million less than the amount borrowed—through fiscal year 2002. NASA is also planning to transfer another $70 million in fiscal year 1999. All of these funding transfers are within the $17.4 billion funding limitation through assembly completion. NASA transferred $200 million in fiscal year 1997 funding to the station program from other NASA programs to cover costs incurred due to Russian manufacturing delays. Congressional action is pending on the transfer of another $100 million in fiscal year 1998. These funds will be accounted for outside the portion of the program subject to the funding limitations. 
NASA uses actual and planned reductions in its fiscal year funding requirements to help restore and preserve its actual and prospective financial reserves. Typically, these actions involve rephasing or deferring activities to future fiscal years. For example, the agency’s current reserve posture includes actions such as moving $20 million in spares procurement from fiscal years 1997 to 1999 and $26 million in nonprime efforts from fiscal year 1997 to various future fiscal years. The cost impact of the schedule delay associated with step 1 of the Russian recovery plan is not yet fully understood. During congressional testimony in June 1997, the NASA Administrator stated that NASA was assessing the cost effects of a later assembly completion date. Any delay in completing the space station assembly would increase the program’s costs through the completion of assembly because some costs would continue to accumulate over a longer period. When NASA redesigned the station in 1993, it estimated that Russia’s inclusion as a partner would reduce program costs by $1.6 billion because the station’s assembly would be completed by June 2002—15 months earlier than previously scheduled. NASA has recently acknowledged that the completion of the station’s assembly will slip into 2003, but it has not yet scheduled the revised assembly completion milestone. If the scope and capability of the program under the June 2002 assembly completion milestone remain the same, the new milestone date will be set for the latter part of 2003. Consequently, most, if not all, of the reduced costs claimed by accelerating the schedule would be lost. NASA estimated the additional hardware costs associated with step 1 of the Russian recovery plan at $250 million. When the estimate was made, the specific costs of many of the components of the plan were not known.
For example, NASA’s initial estimate includes $100 million for the Interim Control Module, but NASA now estimates that the module will cost $113 million. The total of $300 million in additional funding for the space station program in fiscal years 1997 and 1998 includes financial reserves. The most recent cost estimate for the Interim Control Module already indicates threats to those reserves. NASA plans to use the extra time created by the schedule slip to perform integration testing of early assembly flight hardware at the Kennedy Space Center. As of June 1997, the cost of this testing had not been fully estimated. However, NASA is currently budgeting $15 million in reserves for the effort. If NASA initiates further steps in the recovery plan, new or refined cost estimates would be required. Step 2 provides for the development of a permanent propulsion/reboost capability and modifications to the U.S. Laboratory to provide habitation. According to the NASA Administrator, the effort under this step could be funded incrementally, thus limiting the up-front commitment. NASA’s initial cost estimate for step 2 is $750 million. Step 3 of the plan would result in the greatest overall cost impact on NASA because it assumes that Russia would no longer be a partner and that NASA, along with its remaining partners, would have to provide the services now expected from Russia. For its share of the mission resupply role, NASA would have to use the space shuttle or purchase those services from Russia or others. In addition, the United States would have to purchase Soyuz vehicles from Russia or accelerate the development of the six-person permanent crew return vehicle. NASA has not officially estimated the cost of step 3, but it clearly would be very expensive: the potential cost of shuttle launches or purchased launch services alone over the station’s 10-year operational life would be in the billions of dollars. 
NASA expects to have more refined cost estimates for the contingency plan later this year. Some of NASA’s actions to reinforce its financial reserves and keep the program within its funding limitations have involved redefining the portion of the program subject to the limitations. Such actions make the value of the current limitations as a funding control mechanism questionable. Therefore, we recommend that the NASA Administrator, with the concurrence of the Office of Management and Budget, direct the space station program to discontinue the use of the current funding limitations. More complete estimates of the cost and schedule impacts of ongoing and planned changes to the program will be available later this year. This information will help provide a more complete and current picture of the cost and schedule status of the program and clarify some of the major future cost risk it faces. After this information is available, the Congress may wish to consider reviewing the program. This review could focus on reaching agreement with the executive branch on the future scope and cost level for a station program that merits continued U.S. government support. In view of the expected availability of revised cost estimates, the first opportunity for such a review would be in conjunction with NASA’s fiscal year 1999 budget request. At the end of the review, if the Congress decides to continue the space station program, it may wish to consider, after consultation with NASA, reestablishing funding limitations that include firm criteria for measuring compliance. In commenting on a draft of this report, NASA said that the report was a good representation of the program’s performance and remaining major challenges, but NASA was concerned that the report did not provide sufficient detail for the reader to appreciate the progress the space station program has made or understand the factors that have influenced the decisions already made and those that will be made in the future. 
NASA agreed with our recommendation. NASA said that it had consistently taken the position that the flat funding cap, while a fiscal necessity, was inconsistent with a normal funding curve for a developmental program. NASA added that the flat funding profile resulted in substantial reserves being deferred to later years instead of being available in the program’s middle years. NASA said that the station’s financial reserves were not intended to cover the unanticipated costs of the Russian contingency activities, but rather were largely intended to protect against U.S. development uncertainty. In response to NASA’s comments, we added more information to the report, including information on the status of the program and the origin of the funding caps. However, the question of what the station’s financial reserves were largely intended to cover is not relevant to our assessment, which focused on whether the funding cap was an effective cost control mechanism. Moreover, the central theme of our report is that funding requirements have been rising and additional funds may be needed. We do not suggest what the source of those funds should be. To obtain information for this report, we interviewed officials in the ISS and space shuttle program offices at the Johnson Space Center, Houston, Texas, and NASA Headquarters, Washington, D.C. We also interviewed contractor and DCMC personnel in Huntsville, Alabama, and Houston. We reviewed pertinent documents, including the prime contract between NASA and Boeing, contractor performance measurement system reports, DCMC surveillance reports, program reviews, international partner agreements, independent assessment reports, and reports by NASA’s Office of Safety and Mission Assurance. We performed our work from January to July 1997 in accordance with generally accepted government auditing standards.
We are sending copies of this report to the NASA Administrator; the Director, Office of Management and Budget; and appropriate congressional committees. We will also make copies available to other interested parties on request. Please contact me at (202) 512-4841 if you or your staff have any questions concerning this report. Major contributors to this report are Thomas Schulz, Frank Degnan, John Gilchrist, and Fred Felder. The following are GAO’s comments on the National Aeronautics and Space Administration’s (NASA) letter dated September 8, 1997. 1. We have modified the report based on NASA’s comments. 2. The purpose and use of financial reserves is not the relevant issue. Our focus was on whether or not funding caps could be effective cost control mechanisms under circumstances where program content subject to the controls can be flexibly defined. In the past, NASA claimed the benefits of Russian participation on the program’s cost and schedule, but now that Russian participation is having negative cost and schedule effects, NASA argues that the additional funding needed should be accounted for outside the portion of the program subject to the funding limitation. Doing so dilutes the cost control ability of a funding limitation. 3. NASA’s claimed cost savings from including Russia as a partner was based mainly on a 15-month acceleration of the station’s assembly completion milestone. Our purpose was to point out that the delay in the assembly completion date means that NASA will incur additional costs during the station’s developmental period. Only the amount remains to be determined. In this report, we do not evaluate any of the claimed benefits, including cost reductions, of Russian participation in the program as a partner. 4. NASA correctly points out that the negative schedule variance under the prime contract is growing at a much slower rate than the negative cost variance, as shown by the slope of the lines in figure 1. 5. 
Figure 1 in the report accurately reflects cost and schedule variance changes and is directly relevant to supporting our point that NASA could experience additional cost growth if the deteriorating trend was not reversed or at least slowed because the final actual cost growth could exceed expected cost growth. After we completed our fieldwork on this assignment, the prime contractor reported that its estimate of the cost overrun at completion had more than doubled, from $278 million to $600 million. 6. NASA correctly notes that the centrifuge was not included in the development program when it was initially capped at $17.4 billion. However, NASA subsequently budgeted the centrifuge within the program and scheduled it for launch before the June 2002 assembly completion milestone. The centrifuge was later removed from the budget and NASA began negotiations with the Japanese to provide it. At that time, it was rescheduled for launch after the June 2002 assembly completion date. The centrifuge example helps to illustrate the leeway NASA has to change the content of the station program within the current cap. Such leeway undermines the cap’s value as a cost control mechanism. 7. We were asked to identify those methods NASA had used to stay within its funding limitations, not to evaluate NASA’s use of “no-exchange-of-funds” or “negotiated offset” arrangements.
Pursuant to a congressional request, GAO provided information on the International Space Station (ISS), which is being developed by the United States and others, focusing on: (1) Russia's performance problems and the National Aeronautics and Space Administration's (NASA) reaction to them, including the additional cost and cost risk assumed by NASA; (2) cost and schedule experience under the prime contract; and (3) the status of and outlook for the program's financial reserves. GAO also identified actions taken by NASA to keep the space station program's funding within certain limits through the completion of the station's assembly. GAO noted that: (1) in May 1997, NASA revised the space station assembly sequence and schedule to accommodate delays in the production and delivery of the Service Module; (2) this revision occurred after more than a year of speculation regarding Russia's ability to fund its space station manufacturing commitments; (3) to help mitigate the adverse effects of the Russians' performance problems and address the possibility that such problems would continue, NASA developed and began implementing step 1 of a three-step contingency plan; (4) NASA has budgeted an additional $300 million from other NASA activities for the space station program to cover the hardware cost under step 1; (5) NASA will also incur other costs under step 1 that have not yet been estimated; (6) significant additional cost growth could occur in the station program if NASA has to implement steps 2 and 3 of its contingency plan; (7) the cost and schedule performance of the station's prime contractor has continued to steadily worsen; (8) from April 1996 to July 1997, the contract's cost overrun quadrupled to $355 million, and the estimated cost to get the contract back on schedule increased by more than 50 percent to $135 million; (9) so far, NASA and prime contractor efforts have not stopped or significantly reversed the continuing deterioration; (10) the station program's 
financial reserves have also significantly deteriorated, principally because of program uncertainties and cost overruns; (11) the near-term reserve posture is in particular jeopardy, and the program may require additional funding over and above the remaining reserves before the completion of station assembly; (12) to date, NASA has taken a series of actions to keep the program from exceeding its funding limitations and financial reserves; (13) NASA is accounting for these actions in ways that enable it to report its continuing compliance with the funding limitations; (14) however, to show continuing compliance in some cases, NASA has had to redefine the portion of the program subject to the funding limitations; (15) thus, the value of the current limitations as a funding control mechanism is questionable; (16) since GAO's June 1997 testimony, further cost and schedule problems have materialized and NASA has acknowledged that the potential for cost growth in the program has increased; and (17) GAO believes the program has reached the point where the Congress may wish to review the entire program.
The rapid growth in the consumption of bottled water has been attributed to a variety of factors. In a 2002 survey, The Gallup Organization (Gallup) found that the leading reason consumers purchased bottled water was health-related concerns; taste was the second leading reason, and the convenience of bottled water was also a factor. Tap water and bottled water are regulated under two different federal laws—the Safe Drinking Water Act and the Federal Food, Drug, and Cosmetic Act (FFDCA), respectively. Under the Safe Drinking Water Act, EPA, or states that have primary enforcement responsibility, are responsible for protecting the public from the risks of contaminated drinking water from public water systems and for ensuring that the public receives information on the quality of the water delivered by these systems. Specifically, the law requires EPA to establish national primary and secondary drinking water regulations for public water systems to control the level of contaminants in drinking water. National primary drinking water regulations are legally enforceable standards that protect water quality by limiting the levels of specific contaminants that can adversely affect public health and are known or anticipated to occur in water. Such standards take the form of either maximum contaminant levels or treatment techniques. EPA currently has national primary drinking water regulations for 88 contaminants. The agency may also set monitoring requirements to assist in determining whether public water systems are in compliance with the Safe Drinking Water Act. National secondary drinking water regulations are nonenforceable guidelines to control contaminants in drinking water that primarily affect the aesthetic or cosmetic qualities—such as taste, odor, or color—relating to public acceptance of drinking water.
Although not required by EPA, states with primary enforcement responsibility may choose to adopt these secondary regulations as enforceable regulations in the state. Under the law, EPA regulations also require that public water systems provide consumer confidence reports—also known as annual water quality reports or drinking water quality reports—to their customers each year. These reports summarize local drinking water quality information about the water’s sources, any detected contaminants, and compliance with national primary drinking water regulations as well as information on the potential health effects of certain drinking water contaminants. Because the FFDCA treats bottled water as a food, FDA, within the Department of Health and Human Services, has broad statutory authority to ensure that bottled water that is sold in interstate commerce is safe, wholesome, and truthfully labeled. FDA has established specific regulations for bottled water, including a standard of quality, a standard of identity, and current good manufacturing practices. FDA establishes allowable levels for contaminants under the standard of quality for bottled water sold in interstate commerce on the basis of the national primary drinking water regulations established by EPA. By law, no later than 180 days before the effective date of a national primary drinking water regulation, FDA is required to issue a standard of quality regulation for that contaminant in bottled water or make a finding that such a regulation is not necessary to protect the public health because the contaminant is contained in water in public water systems, but not in water used for bottled water. FDA’s standard of quality regulation must be no less stringent than EPA’s maximum contaminant level for drinking water, or no less protective of public health than the treatment technique required by the national primary drinking water regulation. 
If FDA fails to promulgate a standard of quality by the statutory deadline, the EPA national primary drinking water regulation will be considered as the standard of quality for bottled water. When establishing a standard of quality regulation for bottled water, FDA also establishes monitoring requirements that the agency determines to be appropriate. Under FDA’s standard of identity regulation for bottled water, the agency defines bottled water as water that is intended for human consumption and that is sealed in bottles or containers with no added ingredients, except that it may contain safe and suitable antimicrobial agents. The standard of identity regulation also defines various types of bottled water, such as “artesian water,” “ground water,” and “spring water,” among others. FDA has also established current good manufacturing practice regulations specific to bottled water. These regulations cover protection of the water source from contamination; sanitation at the bottling facility; and sampling and testing requirements for microbiological, chemical, and radiological contaminants. Bottled water is one of the few foods subject to both current good manufacturing practice regulations for foods in general and to current good manufacturing practice regulations specific to the commodity itself. Bottlers must test their source water once a week for microbiological contaminants, unless it comes from a municipal source, which must meet EPA testing requirements. Source water must be tested at least once a year for chemical contaminants and once every 4 years for radiological contaminants. Finished bottled water must be tested weekly for microbiological contaminants and at least annually for chemical, physical, and radiological contaminants. If bottled water contains contaminants at levels considered injurious to health, it is deemed to be adulterated and is subject to enforcement action. 
To ensure that bottled water facilities and bottled water meet federal requirements, FDA uses a multipronged approach. The agency (1) requires bottlers to use water sources (e.g., wells, springs, and public drinking water systems) that have been tested and approved by government agencies having jurisdiction, such as state or local agencies; (2) inspects domestic bottling plants for proper operating practices and cleanliness; (3) inspects labels to confirm that labeling complies with FDA regulations; and (4) requires bottlers to test their source water and bottled water periodically to ensure compliance with the bottled water standard of quality. Furthermore, FDA tests selected samples of domestic source waters and finished bottled water for contaminants. Finally, for imported bottled water, FDA uses the same review process that applies to all imported food products. States are also responsible for regulating bottled water. Under FDA’s current good manufacturing practice regulations for bottled water, only approved sources of water can be used to supply a bottled water facility. The states or localities are responsible for approving sources of water, which may involve inspecting the source and reviewing water quality analyses. Some states also conduct inspections of bottled water facilities under contract with FDA. In addition, the states are solely responsible for regulating bottled water manufactured and sold within a single state, which does not generally fall under FDA jurisdiction. In addition to federal and state regulations and requirements for bottled water, industry standards have been established, through a code of practice, by the International Bottled Water Association (IBWA), to which its members are required to adhere. According to IBWA, its membership includes about 80 percent of the bottled water manufacturers in the United States. 
To be a member, IBWA requires bottled water facilities to undergo an annual plant inspection, conducted by an independent third-party organization, to assess compliance with all applicable regulations. The code of practice also establishes security standards that IBWA-member bottled water facilities must meet to ensure a secure facility. Such security standards are not required by FDA for bottled water facilities, but the agency does have guidance available for the facilities to follow. In addition, IBWA’s code of practice also contains water quality standards for bottled water, some of which are more stringent than those of FDA under the standard of quality. (See app. II for a comparison of these standards.) FDA’s bottled water standard of quality regulations, for the most part, mirror EPA’s drinking water requirements, although the case of DEHP (an organic compound widely used in the manufacture of polyvinyl chloride plastics) is a notable exception. However, FDA’s implementation of these regulations, particularly when compared with EPA’s implementation of its regulations concerning tap water, reveals key differences that reflect the limited nature of FDA’s approach to regulating bottled water. At the heart of these differences is that EPA regulates tap water under the Safe Drinking Water Act, while FDA regulates bottled water as a “food” under the FFDCA, which does not grant FDA statutory authority to implement regulations similar to those of EPA. These differences are amplified by the fact that FDA, which uses a risk-based approach to regulating foods, generally accords bottled water a low priority. We found that, for the most part, FDA’s bottled water standard of quality regulations are equivalent to EPA’s regulations for drinking water, but FDA has yet to set a standard for DEHP.
Under the FFDCA, FDA is required to establish standard of quality regulations for bottled water that are no less stringent than the maximum contaminant levels established in EPA’s national primary drinking water regulations, and the agency has done so for most contaminants. In most cases where FDA has not adopted EPA’s national primary drinking water regulations, the agency has provided a rationale for not doing so. For example, FDA stated that it did not adopt EPA’s maximum contaminant level for asbestos or EPA’s treatment technique for the parasite Cryptosporidium because if municipal water is used as a source, it already has to meet EPA regulations, and it is unlikely that other sources of water, such as springs and aquifers, would contain these contaminants. One exception, however, is the case of a phthalate, DEHP. FDA has yet to establish a standard for this contaminant, even though EPA established a national primary drinking water regulation for it in 1992 and FDA’s statutory deadline for adopting the standard was in January 1993. EPA found that the potential health effects from exposure to DEHP above the maximum contaminant level could include reproductive difficulties, liver problems, and increased risk of cancer. Although FDA proposed a standard in August 1993, the agency subsequently deferred action on DEHP and has yet to either adopt a standard or publish a reason for not doing so. The agency delayed action on DEHP in 1996 because the compound was already approved for use in packaging that comes in contact with food (including bottled water), which FDA believed could have created a potential conflict with FDA’s proposed standard of quality for DEHP. According to FDA officials, an agency task force is currently examining information regarding the use of phthalates, including DEHP, in food contact materials. The results of this work by the task force will be used to set a standard for DEHP, but it is unclear when FDA will complete the study. 
Because FDA has not established a standard of quality for DEHP in bottled water, bottled water facilities are not required to test for it. While FDA’s standard of quality regulations for bottled water are generally consistent with EPA’s drinking water requirements, FDA’s regulation of bottled water has been limited. Among our key findings are that (1) when compared with EPA’s regulation of public water systems, several key differences reflect the limited nature of FDA’s regulation of bottled water, particularly regarding how violations are reported and whether the use of certified laboratories is required; (2) because FDA’s experience over the years has not shown that bottled water poses a significant public health risk, the agency devotes fewer resources to the enforcement of bottled water regulations than it does for higher risk foods; (3) while state regulatory requirements for bottled water often meet or exceed those of FDA, the requirements vary across the states and, in some states, are still less comprehensive than state requirements for tap water under the Safe Drinking Water Act; and (4) FDA’s oversight of imported bottled water is limited. FDA’s regulation of bottled water differs from EPA’s regulation of drinking water in key ways, largely because FDA does not have the specific statutory authority to regulate bottled water in the same manner EPA regulates drinking water. These differences relate to how violations are reported, whether bottlers are required to use certified laboratories to test their water, and the retention of water quality testing records. How violations are reported: The FFDCA does not specifically authorize FDA to require bottlers to report test results, even if violations of the standard of quality regulations are found. Instead, inspectors review testing records when they inspect bottling facilities. 
In contrast, under the Safe Drinking Water Act, public water systems must notify the public as well as the appropriate regulatory agency (e.g., state environmental agency) within 24 hours of detecting certain violations of the national primary drinking water regulations that have significant potential to have serious adverse effects on human health as a result of short-term exposure. For violations that have the potential to have serious adverse effects on human health and all other violations, public water systems must provide notice within 30 days and 1 year, respectively. FDA officials told us that to comply with the Food and Drug Administration Amendments Act of 2007, the agency is developing a means for all food facilities it regulates to report instances when there is a reasonable probability that the use of, or exposure to, a food will cause serious adverse health consequences or death to humans or animals. This act required FDA to establish, by September 2008, a Reportable Food Registry—an electronic portal by which responsible parties or public health officials may submit such instances to FDA. FDA officials have told us that the registry is still under development, and that it is taking steps to create an interim Reportable Food Registry by the end of fiscal year 2009. Whether certified laboratories are used: Another key difference is that FDA does not require bottled water facilities to use certified laboratories for water quality tests. Public water systems are required by the Safe Drinking Water Act to use such laboratories. In this regard, bottled water is treated like other food products, which generally are not required to be tested by certified laboratories. Instead, under the bottled water current good manufacturing practice regulations, sample analysis of source water and finished products may be performed by competent commercial laboratories.
EPA and state-certified laboratories are cited as examples of competent commercial laboratories, but use of these certified laboratories is not required. FDA officials have stated that they are not aware of any special grounds or particular need to require the use of certified laboratories for bottled water. In addition, under the Safe Drinking Water Act, operators of public water systems must be certified to ensure that their public water system provides an adequate supply of safe, potable drinking water. There is no such requirement for operators of bottled water facilities. Retention of water quality testing records: FDA requires that bottled water facilities retain the results of all water quality tests for up to 2 years. On the other hand, EPA requires that public water systems retain the results of microbiological tests for 5 years and the results of chemical tests for 10 years. As we discuss in the following section, because FDA inspections of bottled water facilities are infrequent and because reporting is not required if problems are found, FDA would most likely not be aware that a contamination problem existed if a facility was not inspected within a 2-year time frame. The FFDCA also authorizes FDA to inspect bottled water facilities and sample products. According to FDA, since bottled water has had a relatively good safety record over the years, bottled water facilities are generally assigned a low priority for inspection, unless a facility has had violations in the past. On average, FDA has devoted approximately 2.6 full-time-equivalent positions per fiscal year to inspecting bottled water facilities in fiscal years 2000 through 2008.
Specific inspection tasks for bottled water facilities include (1) verifying that the water used by the plant for its product and for its operations is obtained from an approved source; (2) checking whether bottled water labeling complies with FDA regulations; (3) inspecting washing and sanitizing procedures; (4) inspecting filling, capping, and sealing operations; and (5) determining whether the firms analyze, on schedule, their source water and finished products for the contaminants listed in the standard of quality and whether the firms meet the standard of quality’s allowable levels for the contaminants. In general, inspectors take water samples only “for cause” (i.e., if they observe a potential problem or if the facility has a history of contamination). We have found that the frequency of bottled water inspections varied. Domestic bottled water inspections generally averaged about 475 per fiscal year, but increased dramatically in fiscal years 2003 and 2005, to about 600 and 740, respectively. According to FDA officials, the increase in inspections in fiscal years 2003 and 2005 was most likely due to an increased focus on ensuring the security of all food facilities. Because FDA’s database of registered food firms does not capture data that would identify all U.S. firms manufacturing bottled water, we could not determine the percentage of bottled water facilities inspected. On the basis of interviews with FDA officials in the eight district offices we contacted, however, inspections of bottled water facilities took place at varying frequencies. For example, three of the district offices with which we spoke stated that bottled water facilities are inspected once every 2 to 3 years by the district office or by the state under contract with FDA. Other district offices reported inspecting bottled water facilities less often. Additionally, FDA has increasingly relied on states to inspect bottled water facilities.
FDA establishes contracts with state agencies to inspect particular facilities, including bottled water facilities. State officials performing inspections as part of an FDA contract perform inspections the same way that an FDA inspector would perform an inspection. Like FDA inspectors, state-contracted inspectors do not generally take samples, unless there is a reason to do so. States that conduct contract inspections are audited by FDA district offices to ensure that their inspections are equivalent to FDA inspections. Twenty-two of the 26 states under the jurisdiction of seven of the eight district offices we contacted conduct bottled water inspections under contract with FDA. Our review indicates that from fiscal years 2000 through 2008, the state share of bottled water inspections has increased in recent years (see fig. 1). From fiscal years 2000 through 2005, the states, under contract with FDA, conducted about 65 percent of the bottled water inspections, while from fiscal years 2006 through 2008, the states conducted about 86 percent of the bottled water inspections. Overall, the states conducted approximately 70 percent of the bottled water inspections from fiscal years 2000 through 2008. Furthermore, FDA coordinates with states to better leverage inspection resources. We found that all eight FDA district offices we contacted obtained the results of inspections conducted by the states under contract with FDA. Most states shared this information with FDA through an electronic database, which also gave the states access to a food firm’s inspectional history. If any collected samples violated the standard of quality, the states generally shared this information as well, according to officials from the FDA district offices. Such information-sharing, according to FDA officials, allows the agency to leverage resources so that it can focus on higher-priority food inspections and ensures that it has complete information on facility inspections.
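As a rough cross-check of the inspection-share figures above, the overall 70 percent figure can be compared with a simple year-weighted average of the two period shares. The sketch below is illustrative only; it assumes equal annual inspection volumes, whereas the report indicates volumes varied (an average of about 475 domestic inspections per fiscal year, with spikes in fiscal years 2003 and 2005):

```python
# Illustrative cross-check of the state share of bottled water inspections.
# The period shares come from the report; equal annual inspection volumes
# are an assumption made only for this sketch.
early_share, early_years = 0.65, 6  # FY2000-2005: states conducted ~65%
late_share, late_years = 0.86, 3    # FY2006-2008: states conducted ~86%

# Year-weighted average, assuming equal inspection volume each year
equal_weight_avg = (early_share * early_years + late_share * late_years) / (
    early_years + late_years
)
print(f"Year-weighted average state share: {equal_weight_avg:.0%}")  # 72%
```

That this simple average (72 percent) sits slightly above the reported overall figure of about 70 percent is consistent with the earlier, lower-state-share years accounting for a somewhat larger portion of total inspections, as the inspection spikes in fiscal years 2003 and 2005 suggest.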
In contrast, we found that most of the FDA district offices we contacted did not have agreements to obtain the results of bottled water facility inspections that states conduct under their own authority, not under contract with FDA. Still, all of the district offices with which we spoke said that state officials would most likely contact them if a serious problem at a bottled water facility surfaced during a state inspection. On the basis of inspections conducted by FDA and the states under contract with FDA, potential problems were identified in approximately 35 percent of the bottled water inspections conducted between fiscal years 2000 and 2008, but FDA took little enforcement action. A majority of the bottled water facilities that were inspected and found to have potential problems were designated as “voluntary action indicated,” meaning the inspector found objectionable conditions, but the district office determined that such conditions were not sufficient to warrant any administrative or regulatory action by FDA. Accordingly, the firms in those cases were left to take corrective actions voluntarily. FDA also indicated that there were a small number of cases in which FDA referred issues related to bottled water quality to local public health authorities that have their own enforcement authorities. On the basis of a review of FDA’s food recalls database, from fiscal years 2002 through 2008, bottled water has been recalled 23 times, primarily for excessive levels of contaminants, such as arsenic and bromate. Also during this period, FDA issued three warning letters to bottled water facilities for various violations, including failure to maintain documentation and inadequate sanitary practices. States have enacted their own laws and regulations in an effort to better ensure the quality and safety of bottled water.
Nonetheless, (1) these laws and regulations are less consistent than the state laws protecting tap water pursuant to the Safe Drinking Water Act, and (2) FDA does not have the statutory authority to oversee state regulation of bottled water, while the Safe Drinking Water Act requires EPA to oversee primacy states’ regulation of tap water. Our survey of 50 states and the District of Columbia identified variability in their requirements governing certain key practices that protect and ensure bottled water quality and safety. For example, respondents in 31 states indicated that their states require that microbiological tests be done by a certified laboratory. Respondents in 12 states, however, indicated that their states do not require the use of a certified laboratory for such tests. States also exhibit variability in terms of what they require of bottled water facilities in reporting the results of quality tests to the state. For example, 21 respondents said their states require bottlers to notify the states if they detect violations in their samples, and 20 require bottlers to submit water quality test results to the states on a periodic basis, whether or not they are in violation. On the other hand, 20 states do not require that water quality tests or violations be reported to the state. Furthermore, states exhibited variability in the frequency at which bottled water facilities are inspected. Officials from 38 states reported that they inspected bottled water facilities annually or more often, whereas officials from 10 states indicated that their states inspected bottled water facilities less frequently than once a year. In contrast to the diverse practices among state authorities in regulating bottled water, the framework under the Safe Drinking Water Act for regulating tap water requires a high degree of consistency among the states.
For example, one condition for states to be given primary enforcement responsibility (or primacy) for their public water systems is that they must have adopted and be implementing adequate procedures for the enforcement of state drinking water regulations that are no less stringent than EPA’s national primary drinking water regulations. Among other requirements, these procedures must include the following: (1) statutory or regulatory enforcement authority adequate to compel compliance, (2) maintenance of an inventory of public water systems operating in the state, (3) a systematic program for conducting sanitary surveys of public water systems, and (4) a program for the certification of laboratories conducting analytical measurements of drinking water contaminants. The FFDCA and the Safe Drinking Water Act also require different levels of federal oversight. Specifically, under the Safe Drinking Water Act, states may be given primary responsibility for regulating drinking water with EPA conducting systematic oversight, whereas FDA retains responsibility for regulating bottled water under the FFDCA. At least annually, for example, EPA must review a state’s compliance with requirements for having primary enforcement responsibility. If the states do not meet these requirements, EPA must initiate proceedings to withdraw primacy approval. In addition, primacy states must submit quarterly reports to EPA that include both new violations of national primary drinking water regulations and new enforcement actions that states took against public water systems for those violations. In contrast, FDA does not have the statutory authority to grant states responsibility for bottled water regulation, nor does it have statutory authority to review state bottled water regulations or the enforcement actions taken by the states. FDA has provided limited oversight of imported bottled water, since relatively few bottled water imports are physically examined or sampled.
The agency follows a two-tier strategy to oversee the importation of bottled water and the importation of food in general. First, FDA’s Prior Notice Center reviews information about scheduled food imports to determine whether there are any terrorism-related concerns or serious health risks associated with the products. Second, after the information pertaining to the articles offered for import is transmitted to U.S. Customs and Border Protection in the form of an entry, data pertaining to FDA are sent to an automated database, where they are screened. At this point in the process, the entry data are evaluated electronically and either are allowed to proceed or are flagged for review. To determine whether an article offered for import warrants further examination, reviewers are to take into account the perceived risk and whether an import alert has been issued for the particular commodity, importer, or country of origin. Since 2004, only one import alert has been associated with bottled water. The entry reviewer can request entry documentation pertaining to the product, review the product label, and request that the product be examined or sampled. If the agency finds a problem with an import—for example, contamination—the shipment is detained while the importer or agent is given a period of time to present exonerating evidence. If the importer or agent cannot provide evidence to overcome the apparent violation within the 10-day detention and hearing period, barring any extensions, the shipment is refused. After a refusal is issued, the importer must either destroy or export the article out of the United States within 90 days. FDA also examines other articles offered for import as part of general surveillance to meet its work plan. For example, FDA increased its review of bottled water imports as a result of the events of September 11, 2001. Our review of data from FDA’s imports database indicates that FDA’s oversight of imported bottled water has been limited. 
From fiscal years 2004 through 2008, there were 263,314 import entry lines associated with either bottled water or bottled spring or mineral water. Of these, approximately 50 percent of the bottled spring or mineral water and 33 percent of the bottled water were permitted to proceed without further review, while the remainder was subject to an on-screen review. Of the imports reviewed on screen, about 1 percent of the bottled spring or mineral water and about 4 percent of the bottled water were examined further. A smaller percentage of the bottled water imports was sampled for quality testing. In addition to reviewing FDA’s responsibilities for ensuring the quality and safety of bottled water imports, we reviewed how several top exporting countries—including Canada, Fiji, and Turkey as well as the European Union and its member states—regulate bottled water. We found that, like the United States, these countries have established definitions for different types of bottled water and water quality standards to ensure safety. We identified two examples in which foreign regulations are more stringent than FDA regulations. For example, Canadian regulations specify that bottled water cannot contain any coliform bacteria. In addition, Turkey requires that inspections of bottled water facilities be conducted more frequently than FDA requires. Specifically, licensed drinking water facilities are subject to inspections annually by the Ministry of Health and every 3 months by the local health authority. Licensed natural mineral waters are subject to inspections every 3 months by the ministry and every month by the local health authority. Manufacturers are responsible for the costs of the ministry’s and local health authority’s analyses of bottled water.
A number of concerns emerge regarding FDA’s regulation of bottled water under the FFDCA and its enforcement practices, particularly in comparison with EPA’s regulation of drinking water under the Safe Drinking Water Act. These observations, however, should be viewed in the context of the legal limitations placed by the FFDCA on FDA, and the constrained resources that have affected FDA’s overall capabilities in recent years. The legal constraints arise because while the Safe Drinking Water Act authorizes EPA to require water samples to be tested by certified laboratories and violations of national primary drinking water regulations to be reported within certain time frames to EPA or the state agency with primary enforcement responsibility, the FFDCA does not grant FDA similar authority. Rather, the FFDCA requires FDA to regulate bottled water as a food—as opposed to drinking water subject to the Safe Drinking Water Act—and does not specifically authorize FDA to require that foods, including bottled water, be tested by certified laboratories or that violations of the standard of quality be reported to FDA. In addition to these legal constraints, bottled water’s status as a food has subjected it to many of the same problems more generally affecting FDA oversight of food safety. As we noted in January 2007, for example, when we designated federal oversight of food safety as a “high-risk” area affecting public health and the economy, federal oversight of food safety is fragmented, with about 15 agencies having food safety roles. We specifically cited FDA’s resource constraints, noting in 2008 that while the number of domestic firms under FDA’s jurisdiction increased from fiscal years 2001 through 2007 from about 51,000 firms to more than 65,500, the number of firms inspected declined from 14,721 to 14,566 during the same period. 
We cited resource constraints as a contributing factor, noting that the number of full-time-equivalent positions at FDA devoted to food safety oversight had decreased by about 19 percent from fiscal years 2003 through 2007. Along those same lines, we noted in 2005 that while FDA was responsible for regulating about 80 percent of the nation’s food supply, it accounted for only 24 percent of expenditures in fiscal year 2003 among the federal agencies with food-safety-related responsibilities (these other agencies included the U.S. Department of Agriculture, EPA, and the National Marine Fisheries Service). In light of its resource constraints, FDA’s Food Protection Plan, issued in 2007, cites the need to focus general food safety inspections based on risk. In addition, although not yet fully defined, FDA has indeed begun to take a more risk-based approach in identifying firms for safety inspections and has identified bottled water as a low-risk food product. This approach, therefore, has led FDA to devote fewer resources to bottled water oversight for general food safety because of a need to focus on higher-risk food products, such as seafood and fresh produce. Ultimately, as we recommended in 2007, a fundamental reexamination of the federal food safety system will be needed to look across the activities of individual programs within specific agencies with food-safety-related responsibilities. Toward that end, in 2001 we recommended, among other things, that Congress enact comprehensive, uniform, and risk-based food safety legislation and commission the National Academy of Sciences or a blue-ribbon panel to analyze alternative organizational food safety structures in detail. We believe that FDA’s lack of authority and resources to effectively regulate bottled water, as compared with how EPA regulates tap water, should be part of that reexamination.
Because it is considered a food, bottled water must comply with FDA’s general requirements for food labeling, which include ingredient and nutrition information. These requirements include the name of the product; the name and address of the manufacturer, packer, or distributor; and the net contents. Although not required, bottled water labels may also include the type of water (i.e., standard of identity). In addition, like other food products, bottled water is subject to the same general prohibitions against misbranding. Responding to a petition from IBWA asking FDA to more closely regulate bottled water in the face of inconsistent state regulation, FDA in 1995 modified and expanded the standard of identity regulation, including definitions for different types of bottled water, such as mineral water and spring water (see table 1). According to FDA regulations, if a bottled water label includes a standard of identity, the water must satisfy that standard’s requirements or the product will be considered misbranded. For example, bottled water labeled as mineral water must, among other things, contain not less than 250 parts per million of total dissolved solids and originate from a geologically and physically protected underground water source, with no minerals artificially added. For bottled water that comes from a public water system, the standard of identity regulations require its label to clearly state that the product comes from a municipal source or community water system, unless the water has been treated and meets the standard of identity for purified, distilled, deionized, sterile, or sterilized water. Carbonated water, soda water, seltzer water, sparkling water, and tonic water are considered soft drinks and are not regulated as bottled water.
In addition, other terms used on the label about the source, such as “glacier water” or “mountain water,” are not defined in the standard of identity regulation, and their use may not reliably indicate that the water comes from a pristine area. As with other foods, FDA guidance provides that when inspecting bottled water facilities, investigators should review labels to ensure that they are accurate and meet regulations. As we have previously mentioned, however, FDA often has limited assurance that companies are complying with food-labeling requirements, partly because FDA investigators are not required to keep track of labels reviewed. Therefore, in the absence of reliable FDA data, we were not able to determine the extent to which FDA reviews bottled water labels, or to substantiate the claims of FDA officials that they have not come across any widespread problems with bottled water labeling. Our own review of bottled water labels revealed that the information they contain—although limited—is generally accurate. Specifically, of the 83 labels we reviewed from across the country, only 1 included an unclear statement on the label regarding the standard of identity. In this case, the label listed the water as “mountain spring water,” but after contacting the company, we determined that the water was actually artesian and not spring water as defined by FDA. The real question, however, is whether the label information is sufficient to adequately inform consumers about a water bottle’s contents. As we discuss in the following section, the actions of a number of states, and our own review, suggest that consumers could benefit from additional information. Many states have adopted FDA’s labeling regulations, but some states require additional information. For example, bottled water sold in New Mexico must be labeled with the treatment methods used in its production.
Also, bottled water sold in Massachusetts is required to include information on the label identifying the type and the location of the source water (by municipality, state, or country). Massachusetts state officials said this requirement was put in place because of strong consumer demand for such information. Some states have also established further restrictions regarding source listings. For example, Alaska defines “glacier water” as either (1) runoff directly from the natural melting of a glacier, (2) water obtained from the melting of glacial ice at a food-processing establishment, or (3) water from a stream flowing directly from a glacier and not diluted or influenced by a nonglacial stream. As a related matter, California recently passed legislation requiring that, as a condition of being licensed in the state, a bottled water facility must annually prepare a bottled water report and make the report available to each customer upon request. The report must include, among other things, information on the source, treatment method, and health disclosures for certain contaminants that may be found in the water. According to California state officials, this legislation was passed to require that this information be made available so that the state’s consumers are afforded the same water quality “right-to-know” protections and regulatory oversight of bottled water as those established for tap water. Labels on bottled water from facilities licensed in California are now required to include a statement about how consumers can access the annual report. Such consumer right-to-know reports have been required by EPA for public water systems since 1998. These “consumer confidence reports” summarize information on sources, on any detected contaminants, and on compliance with primary drinking water regulations, among other information.
Consumer confidence reports are one of several right-to-know provisions that were included in the Safe Drinking Water Act Amendments of 1996. These amendments contain several other provisions to improve public information about drinking water, including requiring public notification when a public water system fails to meet a maximum contaminant level. The Safe Drinking Water Act Amendments also required FDA to study the feasibility of appropriate methods of informing customers about the contents of bottled water. In its 2000 report, FDA concluded that certain methods were feasible for the bottled water industry to provide the same type of information to consumers that the Safe Drinking Water Act requires public water systems to provide in an annual consumer confidence report—including the source and levels of contaminants tested for and found in the water. FDA further concluded that it would be feasible and appropriate for the industry to update the information annually and provide it by enabling the consumer to contact the producer directly through a telephone number or address listed on the label, or through a combined approach where some information about the water would be included on the label and the rest would be obtainable on request. Nonetheless, the agency was not required to take action on its findings and has yet to do so. FDA officials explained that since bottled water is not considered a significant health risk, and, in light of the agency’s limited available resources, FDA does not anticipate initiating a rule making in response to the study’s findings. Our work suggests that consumers may benefit from additional information. For example, when asked whether consumers in their state had misconceptions about bottled water, 24 of the 51 state and District of Columbia officials responding to our survey replied that consumers believe that bottled water is safer, is healthier, or is of higher quality than tap water.
Their responses were consistent with a 2002 EPA-sponsored Gallup survey, which found that the main reason consumers either filtered tap water or purchased bottled water was health-related concerns. In a separate poll, the Water Research Foundation, in 2003, found that about 56 percent of the bottled water drinkers cited safety and health as the primary reason they sought an alternative to tap water. IBWA has also endorsed the concept that a consumer has a right to comprehensive information about bottled water, believing that the most feasible way for consumers to obtain this information is through a request to the bottler. In fact, IBWA requires that its members include a telephone number on their labels so consumers can contact the company and request information that should be readily available to the company. Nonetheless, our review of bottled water labels revealed that, when compared with what public water systems are required to provide to consumers of tap water, very few bottled water facilities provide such information to consumers, either through labels, company Web sites, telephone calls to company representatives, or any combination of these avenues. Of the 83 bottled water labels that we reviewed, 9 did not have contact information, such as a telephone number, Web address, or e-mail; 5 labels had only a postal address as a means of contacting the company. Bottled water labels for 12 brands did not contain source information, nor was this information available by telephone or a Web site review. In addition, 16 brands did not contain water quality treatment information on the label, nor was this information available by telephone or a Web site review. Furthermore, only 1 of the bottled water labels that we reviewed contained even limited water quality or health-related information, and such information was available from just 34 of the bottled water companies we telephoned or whose Web sites we reviewed.
Thirteen of the water quality reports that we did obtain were incomplete or unclear. For example, several of the water quality reports had test results for only some of the contaminants tested or did not reflect the most recent tests conducted; other reports only described which contaminants were tested or how often the tests were conducted. In addition to the safety and consumer issues associated with bottled water, some parties have raised concerns about the environmental impacts associated with its manufacture and transportation and with the extraction of water associated with its production. Among these issues are the impacts on (1) municipal landfill capacity of discarded water bottles, (2) the effects on U.S. energy demands from the manufacture and transport of plastic bottles for drinking water, and (3) communities and the environment of groundwater extraction for the purposes of bottling water. Most plastic water bottles produced in the United States are discarded rather than recycled. The most common water bottles are made of a plastic called polyethylene terephthalate, or PET. Precise information on the amount of PET in the bottled water containers produced, recycled, and discarded each year is not available. Representatives of the beverage industry and an environmental nonprofit organization reported that about 827,000 to 1.3 million tons of PET plastic water bottle containers were produced in the United States in 2006. Our analysis of data provided by these groups indicated that about 76.5 percent of these PET plastic water bottles were discarded in 2006, which is equivalent to about 632,655 to 999,001 tons of PET, or less than about 1 percent of the 170 million tons of the total discarded U.S. municipal solid waste and about 26 to 41 percent of the 2.4 million tons of total discarded PET plastic. Most discarded water bottles end up in U.S. landfills, although some bottles become litter or are incinerated, according to the officials with whom we spoke. 
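The discard tonnage cited above follows arithmetically from applying the 76.5 percent discard rate to the production estimates; the sketch below reproduces that calculation. The exact unrounded inputs GAO used are assumptions here (the high-end result of 999,001 tons implies an unrounded production figure slightly above the rounded 1.3 million tons):

```python
# Reproduce the PET discard arithmetic cited in the text.
# The 76.5% discard rate and the 827,000-1.3 million ton production
# range come from the report; exact unrounded inputs are assumptions.
DISCARD_RATE = 0.765

low_production = 827_000      # tons of PET water bottles produced, 2006 (low end)
high_production = 1_300_000   # tons (rounded high end)
total_msw_discarded = 170_000_000  # tons of total discarded U.S. municipal solid waste

low_discarded = low_production * DISCARD_RATE
high_discarded = high_production * DISCARD_RATE

print(f"Low end discarded:  {low_discarded:,.0f} tons")   # 632,655 -- matches the report
print(f"High end discarded: {high_discarded:,.0f} tons")  # ~994,500 from the rounded input
print(f"Share of municipal solid waste: {high_discarded / total_msw_discarded:.2%}")
```

Even at the high end, discarded PET water bottles amount to well under 1 percent of total discarded municipal solid waste, consistent with the figure in the text.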
Precise information was not available regarding the amount of discarded PET water bottles that ended up in U.S. landfills versus discarded PET water bottles that were incinerated or became litter. The near-term impact of the PET plastic water bottles in municipal landfills appears to be minimal. For example, an official from EPA’s Office of Resource Conservation and Recovery and an expert in solid waste management from the Solid Waste Association of North America told us that PET plastic is an inert material and, therefore, does not react when in contact with other materials in the waste stream. They also noted that PET plastic is not known to leach contaminants, nor is it associated with any known risks to public health or the environment while in a landfill. However, they emphasized that in a landfill, PET plastic water bottle containers are typically compacted and shielded from the sunlight and the atmosphere. According to the solid waste management expert, under these conditions it is not known precisely how long it takes for the PET plastic to decompose, although decomposition will occur over a very long time horizon, possibly thousands of years. Thus, this expert told us that for landfill management purposes, solid waste experts assume that PET plastic will never decompose. Knowledgeable officials from the beverage and PET plastic-manufacturing industries told us that bottled water companies have made significant investments in recent years to “light-weight,” or reduce the amount of PET plastic in, each bottle. For example, Nestlé Waters North America reported in its 2008 Corporate Citizenship Report that it introduced a 12.4 gram half-liter PET water bottle on the market in 2008 that reduced the amount of PET plastic in its half-liter bottles by 30 percent, compared with the average half-liter plastic beverage container on the market in the previous year.
These officials believed that these efforts will lead to substantial reductions over the next few years in the amount of PET plastic associated with discarded water bottles. It is unclear what impact efforts to produce bottles with less plastic will have on the total amount of PET plastic associated with discarded water bottles until more municipal solid waste statistics become available. We identified two organizations that have attempted to document the effects on U.S. energy demands of the manufacture and transportation of bottled water. Among the analyses we reviewed, the most comprehensive was a peer-reviewed study published in February 2009 by the Pacific Institute that computed the energy required for various phases of bottled water production, transport, and use. Specifically, the institute computed the energy required to make the PET plastic material, fabricate the bottles from that material, process the water before bottling, fill and seal the bottles, transport the bottled water for sale to end users, and chill it for use. Because transportation energy costs can vary, depending on the distance from a bottling plant to market and the mode of transportation, the institute evaluated the energy costs for the following three transportation scenarios for transporting filled water bottles from a bottling plant to a point of sale in Los Angeles, California: (1) locally produced purified bottled water, delivered within 200 kilometers (about 125 miles) of a bottling plant by truck; (2) spring water transported from the island nation of Fiji in the South Pacific by cargo ship to Los Angeles and then delivered locally within 100 kilometers (about 60 miles); and (3) spring water transported from France by cargo ship to the eastern United States, transported by freight rail to Los Angeles, and distributed locally by truck.
The results of these three scenarios apply to water shipped from the three locations and consumed in Los Angeles and, therefore, are not representative of all U.S. transportation of bottled water from the bottling plant to the point of sale. According to Pacific Institute officials, these scenarios were chosen to try to provide a low, medium, and high range for energy costs associated with the manufacture and transportation of bottled water. Although the Pacific Institute’s study was the most comprehensive analysis of the energy impacts of bottled water that we identified, certain aspects of its scope and methodology limit the generalizability and certainty of its results. For example, the scope of the institute’s study did not include energy estimates for all phases of bottled water production and use, such as the energy required to transport or convey the water to the bottling plant from either a municipal source or a self-supplied surface or groundwater source, nor did the study include the energy required for bottled water waste collection, disposal, and recycling. In addition, the institute’s analysis and results focused on the energy required for the production, transport, and use of a typical 1-liter PET bottle of water, which the institute estimated weighs about 38 grams. Lighter and heavier PET bottles could have significantly different energy impacts. The Pacific Institute’s study presented two major findings. First, the energy required to produce and use a typical 1-liter PET bottle of water weighing 38 grams varies substantially, depending on the mode of transportation and the distances traveled from the bottling plant to the point of sale. 
For example, the institute estimated that transportation energy costs varied from about 25 percent (1.4 megajoules per liter) of the total energy footprint for “purified” bottled water produced in Los Angeles and delivered locally within 200 kilometers (about 125 miles) of the bottling plant by truck, to about 57 percent (5.8 megajoules per liter) for “spring” water bottled in France, transported overseas by cargo ship, and transported by rail from the eastern United States to Los Angeles. Second, although the overall production and consumption of bottled water makes up a small share of the total U.S. energy demand, bottled water is much more energy-intensive than public drinking water. For example, on the basis of all the energy inputs for bottled water manufacture and use and the three transportation scenarios calculated, the institute estimated that the total energy required to bring a typical 1-liter PET bottle of water weighing about 38 grams to the consumer in Los Angeles would typically range from about 5.8 to about 10.2 megajoules per liter, or about 1,100 to 2,000 times the energy cost of producing tap water (about 0.005 megajoules per liter). According to state officials in Maine, Michigan, New Hampshire, and Vermont, existing groundwater extraction for the purposes of bottled water has not had an adverse impact on state waters or the environment and is small relative to other groundwater uses. However, these officials said that large-scale groundwater extraction can adversely impact local groundwater availability, surface water flows, and dependent resources. We chose to speak with officials in these four states about the impacts of groundwater extraction because in each of these states, local communities have expressed concerns about bottled water production, and recent state legislation was enacted to address these concerns. 
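Returning to the Pacific Institute's energy estimates above: the transportation shares and the bottled-to-tap ratios follow directly from the quoted megajoule figures, as this short Python check shows (all inputs are the numbers reported in the text):

```python
# Energy figures quoted above, in megajoules per liter (MJ/L), for a typical
# 38-gram 1-liter PET bottle of water delivered to Los Angeles.
transport_local = 1.4   # truck delivery within ~200 km of a local bottling plant
transport_france = 5.8  # cargo ship from France, rail across the U.S., local truck
total_low = 5.8         # lowest estimated total energy footprint
total_high = 10.2       # highest estimated total energy footprint
tap_water = 0.005       # energy cost of producing tap water

share_local = transport_local / total_low     # ~24%, reported as "about 25 percent"
share_france = transport_france / total_high  # ~57%

ratio_low = total_low / tap_water    # ~1,160x tap water, reported as "about 1,100"
ratio_high = total_high / tap_water  # ~2,040x tap water, reported as "about 2,000"

print(f"Transportation share of total footprint: {share_local:.0%} to {share_france:.0%}")
print(f"Bottled water vs. tap water energy cost: {ratio_low:,.0f}x to {ratio_high:,.0f}x")
```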
Among the cases we reviewed, we found that such concern centered on water extracted from a groundwater source by the bottled water producer, rather than water purchased from a municipal source. State officials told us that existing groundwater extraction for bottled water does not have a significant impact on state groundwater supplies. For example, state officials in Maine told us that in 2007, bottled water production constituted about 3 percent (or 650 million gallons) of the 19 billion gallons of total groundwater extracted in the state. Similarly, officials from the four states told us that existing groundwater withdrawals for bottled water are small relative to other groundwater uses. For example, a geologist from the New Hampshire Department of Environmental Services reported that most groundwater extraction in the state goes to municipal water systems, residential subdivisions, golf courses, power plants, and manufacturers of beverages other than bottled water. In addition, Michigan state officials told us that in areas of Michigan where groundwater can be limited, most groundwater extraction goes to agricultural and mining activities. While groundwater extraction may have minimal impacts on state groundwater supplies, it can, in some cases, alter local groundwater levels and flows to nearby surface waters, according to the U.S. Geological Survey. For example, pumping groundwater from a single well diverts the groundwater toward the extraction well in the area around the well. As a result, pumping can lower the local water table shared by nearby well users. When the aquifer is shallow and connected to a nearby stream, the pumping can diminish the available surface water supply by diverting some of the groundwater that otherwise would have flowed into the stream or by drawing flow from the stream into the surrounding aquifer. 
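The Maine proportion quoted above can be verified in one line from the reported gallon figures:

```python
# Maine, 2007: bottled water production as a share of total groundwater
# extraction, using the gallon figures quoted in the text.
bottled_gallons = 650_000_000    # ~650 million gallons extracted for bottled water
total_gallons = 19_000_000_000   # ~19 billion gallons of total groundwater extracted

share = bottled_gallons / total_gallons  # ~3.4%, reported as "about 3 percent"
print(f"Bottled water share of Maine groundwater extraction: {share:.1%}")
```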
Reductions of surface water flows as a result of groundwater extraction are likely to be of greatest concern during periods of low flow. Groundwater extraction can also affect natural resources dependent on groundwater flowing to surface waters. For example, changes in the water that flows to and from a stream may affect temperature, oxygen levels, and nutrient concentrations in the stream. These changes may in turn affect aquatic life, such as certain fish populations whose spawning success may be greater where surface water temperature is modulated by incoming groundwater. The impacts from a single groundwater extraction site on local ground and surface waters depend on factors that include, among other things, the rate of water withdrawals, type and physical characteristics of an aquifer, degree of connection between the aquifer and surface waters, and rates of precipitation. The state officials we interviewed told us that while they have not seen adverse large-scale impacts on water supplies and the environment from existing bottled-water-related groundwater extraction, concerns among some local communities in these states about their effect have led to some conflict and litigation. For example, in 2001 residents in Mecosta County, Michigan, sued a water bottler, alleging that its withdrawals reduced water levels of a nearby stream and wetlands and unlawfully interfered with their water rights. State officials in Michigan, Maine, and Vermont told us that to address these concerns and ensure that effective groundwater resource protections were in place, their state legislatures enacted new or amended requirements for extracting groundwater for bottled water. For example, in 2006 and 2008, Michigan’s safe drinking water act was amended to require, among other things, a permit for a water-bottling operation that uses a new or increased groundwater withdrawal of more than 200,000 gallons per day. 
The law also requires that permitted groundwater withdrawals of more than 2 million gallons per day do not result in an individual or cumulative adverse impact, which refers to decreasing a stream’s or river’s flow or reducing the abundance or density of fish populations. While FDA’s standard of quality regulations for bottled water are generally consistent with EPA’s drinking water quality requirements, the agency could do more to ensure the safety of bottled water, either by (1) promptly adopting EPA’s health-based public drinking water standard for the phthalate, DEHP, and setting monitoring requirements for this contaminant or (2) publishing in the Federal Register a rationale for not doing so. We further believe FDA should act expeditiously after its DEHP task force study ends, since FDA’s statutory deadline for acting on DEHP was more than 15 years ago. Without a standard or monitoring requirement in place, bottled water facilities are not required to test for and potentially identify harmful levels of a contaminant that is currently regulated in public drinking water. In addition, to prevent public misconceptions about the health and safety of bottled water and to match consumer right-to-know standards pertaining to tap water, FDA could help to ensure that consumers have more complete product information by implementing its findings regarding the appropriate and feasible methods for informing consumers about the contents of bottled water. Although we have also raised a number of broader concerns about FDA’s oversight of bottled water facilities—particularly in comparison with EPA’s regulation of public water supply systems under the Safe Drinking Water Act—we acknowledge that many of these concerns reflect the legal limitations the FFDCA imposes on the agency and the decline in resources that has hampered overall food safety responsibilities in recent years. 
Regarding FDA’s effectiveness, we have recommended in the past that a fundamental reexamination of the federal food safety system be undertaken, including enactment of comprehensive, uniform, risk-based food safety legislation. We believe that FDA’s lack of authority and resources to effectively regulate bottled water should be part of this reexamination. We recommend that the Secretary of Health and Human Services direct the Commissioner of FDA to take the following two steps: Issue a standard of quality regulation for DEHP, or publish in the Federal Register the agency’s reasons for not doing so, within 1 year after the conclusion of its task force study on this matter. Implement FDA’s findings on methods that are feasible for conveying information about bottled water to customers, such as, at a minimum, requiring that companies provide on the label contact information directing customers on how to obtain comprehensive information. Should FDA determine that it lacks the necessary authority to implement its findings, it should seek legislation to obtain such authority. We provided the Environmental Protection Agency and the Department of Health and Human Services’ Food and Drug Administration with a draft of this report for their review and comment. EPA provided oral comments, stating that the agency agreed with the report’s findings. In its written response, FDA first noted that the agency “strives continually to advance its public health mission, and this includes efforts to improve the safety, sanitation, suitability, and proper labeling of bottled water.” It then expressed general agreement with our two recommendations. Regarding the first recommendation on issuing a standard of quality regulation for DEHP in bottled water, FDA agreed that it should reassess whether to issue the regulation as soon as possible after the conclusion of the task force study on phthalates.
However, FDA noted that our recommended 180-day time frame to issue a DEHP standard for bottled water did not provide enough time for a notice and comment rule making. Accordingly, we changed the time frame in the recommendation from 180 days to 1 year. In the event that FDA decides to promulgate a standard of quality regulation for DEHP, we think that 1 year provides FDA with sufficient time to conduct rule making since it will be based on the study’s results. Moreover, we think FDA should move expeditiously on DEHP since the statutory deadline for taking action was more than 15 years ago. Regarding our recommendation to improve the way in which information about bottled water is conveyed to consumers, FDA agreed that bottled water should be labeled with contact information that allows consumers to more easily contact the manufacturer to obtain comprehensive information about the product. The agency said it intends to pursue this issue with bottled water manufacturers. FDA also provided comments to improve the draft report’s technical accuracy, which we have incorporated as appropriate. Appendix IV contains a reprint of FDA’s letter. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the appropriate congressional committees, the Secretary of Health and Human Services, the Commissioner of the Food and Drug Administration, the Administrator of the Environmental Protection Agency, and other interested parties. The report also will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. 
GAO staff who made major contributions to this report are listed in appendix V. To evaluate the extent to which federal and state authorities regulate the quality of bottled water to ensure it is safe and the extent to which they regulate the accuracy of labels or claims about the purity and source of bottled water, we reviewed federal and state bottled water regulations. We compared the standard of quality regulations that apply to bottled water with the Environmental Protection Agency’s (EPA) standards under the Safe Drinking Water Act. We interviewed officials in the Food and Drug Administration’s (FDA) Center for Food Safety and Nutrition, Office of Regulatory Affairs, and eight FDA District Offices, among other FDA offices; EPA; nonprofit organizations, such as the Natural Resources Defense Council, the Environmental Working Group, and the Food and Water Watch; and the International Bottled Water Association (IBWA). Our definition of “bottled water” in this report includes any food product that meets FDA’s standard of identity for bottled water. We did not conduct water quality analyses of bottled water to determine if the product met the standard of quality. We also did not conduct a systematic review of source water approval or testing records at bottled water facilities. We also researched bottled water laws and regulations in the 50 states and the District of Columbia. We selected 10 states for in-depth review because their standard of quality, testing requirements, or both, differed from FDA standards, as well as 1 state that adopted FDA’s requirements. To learn more about state regulations and enforcement policies, we held interviews by telephone with regulatory officials in 8 of the 10 states, in person with Ohio and Massachusetts officials, and in writing with Wisconsin officials. On the basis of these discussions, we developed a briefer set of questions on implementing and enforcing bottled water regulations.
After we drafted this questionnaire, we asked for comments from state officials in 4 of the 10 states selected for in-depth review. We conducted these pretests to check that (1) questions were clear and unambiguous, (2) terminology was used correctly, (3) the information could be feasibly obtained, and (4) the survey was comprehensive and unbiased. Three of the four pretests were administered over the telephone. Next, we administered our survey by telephone to state officials responsible for bottled water oversight in all of the remaining states and the District of Columbia. (App. III shows the questions that we asked and a summary of the responses that we received.) We made the telephone calls in December 2008 and January 2009. All states responded to our questions. Some state officials were unable to answer all of the questions during our first call; they subsequently provided the information later via telephone or e-mail. We also examined bottled water labels and contacted companies to determine the information they provide consumers about the source, treatment, and quality of their products. We did not evaluate whether label information was false or misleading. To obtain bottled water labels, we asked GAO staff in each of our 11 field offices and at headquarters to collect about 10 labels per office from bottled water that is specific or unique to their region. After removing duplicate bottled water labels and labels that were not for bottled water but for some other beverage, such as “electrolyte-enhanced” waters, we were left with 83 labels for bottled water sold in containers ranging in size from 8 ounces to 1 gallon. This sample does not represent the universe of bottled water available to consumers in the United States. 
We systematically reviewed the labels and recorded whether contact information was provided—such as a telephone number, Web address, e-mail, or complete postal address—that would allow a consumer to contact the bottled water company and readily obtain more information about the product than what is listed on the label. We also recorded whether the source of the water, treatment method, and any quality test results were included on the label, or whether this information was available by accessing the company’s Web site or by telephoning the company. We used the Web addresses and telephone numbers listed on the label, if available. We contacted 61 companies by telephone and conducted Web site reviews for 47 companies. To determine how authorities in other countries ensure the safety of bottled water, we reviewed how several top exporting countries— including Canada, Fiji, and Turkey as well as the European Union and its member states—regulate bottled water. We were not able to review the laws in all of the top 10 exporting countries because information in English was limited. In addition, we reviewed only the legal requirements in these countries; we were not able to assess how the laws are implemented or enforced. We also analyzed data from FDA databases that track domestic and foreign inspections, import examinations, and recalls. Regarding FDA inspections of domestic and foreign bottled water facilities, as well as domestic inspections conducted by states under contract with FDA, we analyzed data from the Field Accomplishments and Compliance Tracking System for fiscal years 2000 through 2008. Regarding FDA reviews of bottled water imports, we analyzed data from the Operational and Administrative System for Import Support for fiscal years 2004 through 2008. In addition, we worked with FDA to obtain all warning letters that had been issued to bottled water facilities for fiscal years 2002 through 2008.
Finally, we analyzed data from FDA’s Recall Enterprise System for recalls that were issued for bottled water from November 2002 (when the system began) through fiscal year 2008. To assess the reliability of these data, we reviewed related documentation and worked closely with agency officials to identify any data problems; we found the data to be sufficiently reliable for our purposes. Because of the variance in how bottled water and other beverages are coded as a product in the Field Accomplishments and Compliance Tracking System, some of our analysis regarding inspections may include other beverage or product types, such as ice or flavored waters. However, our conversations with FDA officials indicated that very few entries included these other beverage or product types. To identify the environmental and other impacts of bottled water, we reviewed the following three subtopics: (1) the impact of discarded water bottles on municipal landfill capacity; (2) the effects on U.S. energy demands from the manufacture and transport of plastic bottles for drinking water; and (3) the impacts, if any, on communities and the environment of groundwater extraction for the purposes of bottling water. To address the impact of discarded water bottles on municipal landfill capacity, we interviewed knowledgeable officials from the American Beverage Association and its consultant, Northbridge Environmental Management; the Container Recycling Institute; IBWA; and the National Association of PET Container Resources to obtain information on the quantities of PET plastic water bottles that are produced and recycled. We did not independently verify the accuracy and completeness of the data provided by these organizations.
Using figures provided to us by the American Beverage Association and the Container Recycling Institute for the amount of PET plastic water bottle containers produced in 2006—the most recent year for which data were available—and for the national recycling rate in 2006 for all PET containers, provided to us by the National Association of PET Container Resources, we calculated a range of estimates for the quantity of PET plastic water bottles that were discarded in that year. We used these data and figures from EPA’s 2006 national municipal solid waste characterization to calculate how much discarded PET water bottles comprised as a share of the total discarded PET plastic and total discarded municipal solid waste in the United States. To assess the accuracy and completeness of EPA’s municipal solid waste characterization data, we reviewed EPA documentation and interviewed knowledgeable officials from the EPA contractor Franklin Associates (a division of the Eastern Research Group), which prepared the agency’s 2006 national municipal solid waste characterization, Northbridge Environmental Management, and the Solid Waste Association of North America. Finally, we interviewed EPA officials from the Office of Resource Conservation and Recovery and the Director of the Applied Research Foundation of the Solid Waste Association of North America to collect information regarding the impacts of discarded PET plastic water bottle containers in landfills. To identify the effects on U.S. energy demands of the manufacture and transport of bottled water, we interviewed officials from EPA’s Office of Solid Waste and knowledgeable officials from three nonprofit environmental organizations—the Earth Policy Institute, Food and Water Watch, and the Pacific Institute. We identified two studies that focused specifically on bottled drinking water, one by the Earth Policy Institute and a second by the Pacific Institute. 
We reviewed the scope and methodology of these studies and selected the Pacific Institute’s study for more in-depth evaluation because it was more comprehensive and documented in a peer-reviewed article. Specifically, we assessed the Pacific Institute’s methodology to determine its validity and summarized the studies’ key findings relevant to our objective. To identify the impacts, if any, on communities and the environment of groundwater extraction for bottling water, we reviewed and synthesized information published by the U.S. Geological Survey about the impact of groundwater extraction on aquifers, surface waters, and dependent natural resources. We reviewed newspaper articles, books, journal articles, and public policy reports to identify states where conflicts or litigation over groundwater extraction have taken place. Among the states identified, we selected Maine, Michigan, New Hampshire, and Vermont for more in-depth review. Specifically, we chose Michigan and Vermont because legislation was recently enacted in these states regarding groundwater extraction that included specific provisions related to bottled water production. We chose Maine and New Hampshire because these states recently enacted or amended laws governing groundwater wells or withdrawals that apply to certain bottled water production facilities. In these states, we interviewed officials who oversee groundwater extraction for bottled water to obtain information on groundwater use, on known impacts of groundwater extraction from bottled water production, and on existing regulations of groundwater extraction for bottled water production. We conducted this performance audit from June 2008 to June 2009, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. 
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions on our audit objectives. [Appendix table omitted: comparison against EPA drinking water standards (maximum contaminant levels). In the table, units are in milligrams per liter, unless otherwise noted; the radionuclide standard is 15 picocuries per liter (pCi/L); contaminant categories include residual disinfectants and disinfection by-products.] EPA promulgated a national primary drinking water regulation for these contaminants on July 1, 1991, but postponed its effective date pending reconsideration of the regulation. 57 Fed. Reg. 22178 (May 27, 1992). To date, EPA has not established a new effective date. In addition to the contact named above, Steve Elstein, Assistant Director; Brian M. Friedman; Nathan A. Morris; Kelly A. Richburg; and Jeanette Soares made key contributions to this report. Also contributing to this report were Mark Braza, Ellen Chu, Erin Lansburgh, and Minette Richardson. In addition, Matthew Drerup, Paige Gilbreath, Susannah Hawthorne, Stephen J. Jue, Foster Kerrison, Patricia Lentini, Robert Marek, Angela Pun, Carrie W. Rogers, Pam Tumler, and Cheryl A. Williams provided assistance in collecting bottled water labels for our review.
|
Over the past decade, per capita consumption of bottled water in the United States has more than doubled. With this increase have come several concerns in recent years about the safety, quality, and environmental impacts of bottled water. The Food and Drug Administration (FDA) regulates bottled water as a food under the Federal Food, Drug, and Cosmetic Act and is responsible for ensuring that domestic and imported bottled water is safe and truthfully labeled. Among other things, GAO (1) evaluated the extent to which FDA regulates and ensures the quality and safety of bottled water; (2) evaluated the extent to which federal and state authorities regulate the accuracy of labels and claims regarding the purity and source of bottled water; and (3) identified the environmental and other impacts of bottled water. GAO reviewed FDA data, reports, and requirements for bottled water; conducted a survey of all 50 states and the District of Columbia; reviewed bottled water labels; and interviewed FDA officials and key experts. FDA's bottled water standard of quality regulations generally mirror the Environmental Protection Agency's (EPA) national primary drinking water regulations, as required by the Federal Food, Drug, and Cosmetic Act, although the case of DEHP (an organic compound used in the manufacture of polyvinyl chloride plastics) is a notable exception. Specifically, FDA deferred action on DEHP in a final rule published in 1996 and has yet to either adopt a standard or publish a reason for not doing so. GAO also found that FDA's regulation of bottled water, particularly when compared with EPA's regulation of tap water, reveals key differences in the agencies' statutory authorities. Of particular note, FDA does not have the specific statutory authority to require bottlers to use certified laboratories for water quality tests or to report test results, even if violations of the standards are found.
Among GAO's other findings, state requirements to safeguard bottled water often exceed FDA's but are frequently less comprehensive than state requirements to safeguard tap water. FDA and state bottled water labeling requirements are similar to labeling requirements for other foods, but the information provided to consumers is less than what EPA requires of public water systems under the Safe Drinking Water Act. Like other foods, bottled water labels must list ingredients and nutritional information and are subject to the same prohibitions against misbranding. In 2000, FDA concluded that it was feasible for the bottled water industry to provide the same types of information to consumers that public water systems must provide. The agency was not required to conduct rulemaking to require that manufacturers provide such information to consumers, however, and it has not done so. Nevertheless, GAO's work suggests that consumers may benefit from such additional information. For example, when GAO asked cognizant officials in a survey of the 50 states and the District of Columbia whether their consumers had misconceptions about bottled water, many replied that consumers often believe that bottled water is safer or healthier than tap water. GAO found that information comparable to what public water systems are required to provide to consumers of tap water was available for only a small percentage of the 83 bottled water labels it reviewed, companies it contacted, or company Web sites it reviewed. Among the environmental impacts of bottled water are the effects on U.S. municipal landfill capacity and U.S. energy demands. Regarding impacts on landfill capacity, GAO found that about three-quarters of the water bottles produced in the United States in 2006 were discarded and not recycled, on the basis of figures compiled by an industry trade association and an environmental nonprofit organization.
Discarded water bottles, however, represented less than 1 percent of total municipal waste that EPA reported entered U.S. landfills in 2006. Regarding the impact on U.S. energy demands, a recent peer-reviewed article found that the production and consumption of bottled water comprises a small share of total U.S. energy demand but is much more energy-intensive than the production of public drinking water.
In fiscal year 1993, GAO made over 1,600 recommendations. This report includes summaries highlighting the impact of GAO’s work and information on the status of all GAO recommendations that have not been fully implemented. This information should help congressional and agency leaders prepare for upcoming appropriations and oversight activities and stimulate further actions to achieve the desired improvements in government operations. Several changes have been made to this year’s report. The printed volume summarizes the impact of GAO’s work and highlights the key open recommendations. This volume also includes a set of computer diskettes with details on all open recommendations. The diskettes have several menu options to help users find information easily. For example, a user may search for an open recommendation by using product numbers, titles, dates, names of federal entities, congressional committees, or any other word or phrase that may appear in the report. Instructions for operating the electronic edition are at the end of this publication. The name and telephone number of the GAO manager to contact for information or assistance about a product is included on the diskettes. Information or questions not related to a specific product or recommendation should be referred to GAO’s Office of Congressional Relations on 202/512-4400. Copies of complete GAO printed products may be ordered by calling 202/512-6000. Please direct comments, questions, or suggestions for improving this report to Lawson “Rick” Gist, Assistant Director, Office of Policy, on 202/512-4478. Given the ongoing reductions in U.S. defense funding, a new relationship is evolving between the government and the defense industry. Our work focused on the major issues related to this changing relationship. We addressed questions such as the following: (1) Is the defense industrial base being restructured to best serve U.S. security interests? 
(2) Is defense technology being effectively maintained and protected? (3) Is the Department of Defense (DOD) reforming its acquisition process to address long-standing problems? (4) Do DOD contracting policies and practices ensure that public funds are spent properly? Increased emphasis is being placed on ensuring that as defense downsizing takes place, there is an orderly, efficient, and effective restructuring of the defense industrial base. We are examining key industrial base activities, as well as the effectiveness of plans to spend $20 billion over the next 5 years on defense conversion. We are also examining DOD policies and practices to ensure that as efforts to enhance U.S. competitiveness are promoted, critical defense technologies are adequately protected. DOD and the Congress continue to pursue proposals to reform the defense acquisition system. We increased our efforts to evaluate these proposals to ensure that they achieve the benefits intended at reasonable costs. We also continued our review of DOD contracting practices to ensure that they adequately protect the taxpayer against fraud, waste, and abuse. Our reports and testimonies highlighted areas where DOD and defense contractor controls were not adequate to protect against improper use of government funds. For example, our work reviewing contractor overhead charges found examples of defense contractors' charging the taxpayers for costs that were not allowable under federal regulations. Our audit work related to contract overpricing also found that a small percentage of defense contractors were responsible for most contract overpricing. In addition, we made several recommendations that, if effectively implemented, could save the taxpayers millions of dollars. For example, in our report on DOD operational test and evaluation, we pointed out that substantial savings were available by consolidating existing testing facilities, and we recommended actions to achieve this objective.
The Federal Acquisition Regulation cost principles require defense contractors to identify and exclude unallowable costs from their overhead submissions. None of the six contractors we reviewed excluded all unallowable costs. For example, in addition to identifying almost $1 million in costs questioned by the Defense Contract Audit Agency (DCAA) at these six contractors, we identified about $2 million more in overhead costs that were either expressly unallowable or questionable. We concluded that the federal cost principles governing allowability for entertainment, employee morale and welfare, and business meetings costs lacked sufficient clarity to ensure consistent and appropriate application. We also concluded that limited transaction testing by DCAA may also have contributed to the nearly $2 million in undetected unallowable or questionable costs. We recommended that, to address these problems, DOD clarify the Federal Acquisition Regulation in some cases and, in other cases, evaluate the cost principles to determine whether additional guidance was needed. We further recommended that DCAA evaluate the extent to which its field offices needed to spend more time in transaction testing. (GAO/NSIAD-93-79) With the defense industry downsizing, defense companies can make significant cost reductions by incorporating modern technologies and innovative management techniques. Of the 24 defense companies we surveyed, 3 had instituted more-efficient techniques and practices. Savings to DOD from such cost reductions can be significant. One of the defense companies reported reducing work-in-process costs by $80 million and passing the savings on to the government. Competitive market forces may not be sufficient to motivate many defense companies to significantly lower costs, however. Significant progress in this area will require DOD efforts to stimulate contractor actions.
We recommended that DOD, as part of its efforts to reform the defense acquisition system, identify and eliminate the factors that result in defense contractors not incorporating technologies and management techniques to reduce costs. (GAO/NSIAD-93-125) Acquisition Management: Implementation of the Defense Acquisition Workforce Improvement Act (GAO/NSIAD-93-129) Acquisition Reform: Contractors Can Use Technologies and Management Techniques to Reduce Costs (GAO/NSIAD-93-125) Air Force Procurement: Current Plans May Provide More Ground-Attack Capability Than Needed (GAO/NSIAD-92-137) AV-8B Program: Aircraft Sales to Foreign Government to Fund Radar Procurement (GAO/NSIAD-93-24) Contract Pricing: DCAA’s Audit Coverage Lowered by Lack of Subcontract Information (GAO/NSIAD-92-173) Contract Pricing: Economy and Efficiency Audits Can Help Reduce Overhead Costs (GAO/NSIAD-92-16) Contract Pricing: Unallowable Costs Charged to Defense Contracts (GAO/NSIAD-93-79) Defense Acquisition: U.S.-German Examinations of the MLRS Terminal Guidance Warhead Program (GAO/NSIAD-92-7) Defense Communications: Defense’s Program to Improve Telecommunications Management Is at Risk (GAO/IMTEC-93-15) Defense Contracting: Interim Report on Mentor-Protege Program for Small Disadvantaged Firms (GAO/NSIAD-92-135) Defense Industrial Base: An Overview of an Emerging Issue (GAO/NSIAD-93-68) DOD Contracting: Techniques to Ensure Timely Payments to Subcontractors (GAO/NSIAD-93-136) DOD Procurement: Cost-Per-Copy Service Can Reduce Copying Costs (GAO/NSIAD-90-276) Energy Management: Contract Audit Problems Create the Potential for Fraud, Waste, and Abuse (GAO/RCED-92-41) Energy Management: Systems Contracting Weaknesses Continue (GAO/RCED-93-143) Federal Research: Lessons Learned From SEMATECH (GAO/RCED-92-283) Federal Research: System for Reimbursing Universities’ Indirect Costs Should Be Reevaluated (GAO/RCED-92-203) High Performance Computing: Advanced Research Projects Agency Should Do More to Foster 
Program Goals (GAO/IMTEC-93-24) International Air and Trade Shows: DOD Increased Participation, but Its Policies Are Not Well-Defined (GAO/NSIAD-93-96) International Procurement: NATO Allies’ Implementation of Reciprocal Defense Agreements (GAO/NSIAD-92-126) Military Coproduction: U.S. Management of Programs Worldwide (GAO/NSIAD-89-117) Minority Contracting: DOD’s Reporting Does Not Address Legislative Goal (GAO/NSIAD-93-167) Multiple Award Schedule Purchases: Changes Are Needed to Improve Agencies’ Ordering Practices (GAO/NSIAD-92-123) Multiple Award Schedule Purchases: Improvements Needed Regarding Publicizing Agencies’ Orders (GAO/NSIAD-92-88) NASA Aeronautics: Impact of Technology Transfer Activities Is Uncertain (GAO/NSIAD-93-137) Nuclear Science: Consideration of Accelerator Production of Tritium Requires R&D (GAO/RCED-92-154) Operation Desert Shield/Storm: Impact of Defense Cooperation Account Funding on Future Maintenance Budgets (GAO/NSIAD-93-179) Procurement: DOD Efforts Relating to Nondevelopmental Items (GAO/NSIAD-89-51) Technology Transfer: Barriers Limit Royalty Sharing’s Effectiveness (GAO/RCED-93-6) Technology Transfer: Federal Efforts to Enhance the Competitiveness of Small Manufacturers (GAO/RCED-92-30) Test and Evaluation: Little Progress in Consolidating DOD Major Test Range Capabilities (GAO/NSIAD-93-64) Test and Evaluation: Reducing Risks to Military Aircraft From Bird Collisions (GAO/NSIAD-89-127) University Research: Controlling Inappropriate Access to Federally Funded Research Results (GAO/RCED-92-104) U.S.-Israel Arrow/Aces Program: Cost, Technical, Proliferation, and Management Concerns (GAO/NSIAD-93-254) Weapons Codevelopment: U.S. National Issues in the MLRS Terminal Guidance Warhead Program (GAO/NSIAD-92-55) Significant challenges face the Department of Defense (DOD) and the National Aeronautics and Space Administration (NASA) in light of the end of the Cold War, pressing domestic problems, and the spiraling budget deficit. 
While DOD has recognized the need to re-engineer and streamline its operations, it continues to have difficulty in implementing effective and efficient programs. DOD needs to change existing management practices, procedures, and culture to overcome long-standing problems. In addition, as DOD closes and realigns military bases as part of its efforts to downsize and restructure its forces and reduce defense spending, it will face increased spending in other areas, such as in environmental cleanup at bases being closed or designated for closure and the destruction of chemical weapons. A key to managing these emerging, potentially high-cost areas will be in defining the overall costs, alternative management and technology strategies, and proactive steps to avoid similar problems. NASA also faces major restructuring of its programs and activities. The overcommitment of the agency’s likely budget has created funding uncertainty and the need for major restructuring in some of its largest programs. Our major efforts throughout the year focused primarily on matters associated with affordability issues. We reported that, agencywide, NASA’s major space missions over the last 15 years had required substantially more funding than initially estimated and emphasized NASA’s need for an independent cost-estimating capability. We also continued to devote a significant amount of attention to NASA’s attempts to reestablish control over its procurement activities. In this area, we reported on the agency’s efforts to revise its contract for the operation of the Jet Propulsion Laboratory and on the need for NASA to change its policies, procedures, and practices in providing equipment to its contractors. 
Key areas we focused on in fiscal year 1992 include reducing the defense infrastructure and excess inventory; identifying opportunities to save money and achieve management efficiencies through new processes; quantifying the unfunded liabilities facing DOD such as environmental cleanup; and improving NASA management in the areas of program affordability and contracting. DOD and NASA actions are required to implement the following key recommendations that would result in management improvements, operational efficiencies, and dollar savings. We recommended several actions to improve the implementation of future DOD processes for selecting bases for closure and realignment, including taking advantage of cross-service opportunities that could result in a more efficient realignment of support facilities and additional cost savings. (GAO/NSIAD-93-173) In continuing to monitor DOD’s progress in reducing its inventory, we made several recommendations to avoid unnecessary purchases of supplies and industrial plant equipment. (GAO/NSIAD-93-124 and GAO/NSIAD-93-8) We also recommended that, to improve DOD’s inventory management practices, DOD use a quick response, commercial purchasing process that could maintain a constant flow of inventory without maintaining large inventories. (GAO/NSIAD-93-112) We recommended pilot programs and projects to demonstrate the applicability of commercial practices in two areas—military industrial centers and food distribution. Such actions could improve management and reduce costs by eliminating unnecessary processes and functions. (GAO/NSIAD-93-110 and GAO/NSIAD-93-155) To improve DOD’s management of emerging high-cost issues, such as environmental cleanup, we recommended improving the operations of plants designed to destroy chemical munitions, upgrading of underground storage tanks to avoid costly cleanups, and obtaining better data on the amount of funds paid to DOD contractors for cleanup.
(GAO/NSIAD-92-117, GAO/NSIAD-93-50, and GAO/NSIAD-93-77) Our principal concern in the NASA area has been the affordability of its total program. Clearly, the agency’s funding expectations were set too high and it needed to bring the content and the pace of its efforts more reasonably in line with its likely future years’ budgets. In doing so, NASA has to also identify opportunities to function more efficiently and to prepare more-realistic estimates of the likely cost of projects. Our work also helped identify such opportunities and focus NASA’s attention on the need for having an independent capability for developing more-realistic project cost estimates. (GAO/NSIAD-93-73, GAO/NSIAD-93-178, and GAO/NSIAD-93-191) On the basis of our series of eight classified reports on the U.S. strategic nuclear triad, we made five specific recommendations to DOD in our June 10, 1993, unclassified testimony to the Senate Governmental Affairs Committee. To date, DOD has not acted favorably or conclusively on four of those recommendations, as follows: (1) that procurement of the B-2 bomber be terminated with the completion of 15 aircraft, rather than at 20 as requested by the Air Force; (2) that additional operational testing of the B-1B bomber be done to verify essential improvements in reliability and electronic countermeasures and to remove remaining uncertainties concerning range performance; (3) that the cost-effectiveness of the Air Force’s proposed service-life extension of the Minuteman III intercontinental ballistic missile be the subject of additional, rigorous review; and (4) that the Navy continue flight testing for the D-5 submarine-launched ballistic missile at an annual rate sufficient to maintain an understanding of actual missile performance at a high level of confidence. (GAO/T-PEMD-93-5) The Department of Defense has made little progress in implementing the recommendations in our September 1992 report. Defense is at a turning point regarding CIM.
The new leadership of the incoming administration is reassessing the overall strategy of the CIM initiative. It is unclear what this reassessment will encompass and when it will be completed. As one of the largest information management initiatives ever undertaken, CIM has great promise—not only for Defense but for other federal agencies and the nation as well. By improving business operations with fewer resources, Defense can improve its war-fighting capabilities while shifting scarce resources to other national needs. Implementing CIM, however, requires a major cultural change in managing information resources that Defense is finding difficult to make. Therefore, we believe that it is critical for the Secretary of Defense to take an active role in implementing CIM. (GAO/IMTEC-92-77) Air Force Academy: Gender and Racial Disparities (GAO/NSIAD-93-244) Air Force Appropriations: Funding Practices at the Ballistic Missile Organization (GAO/NSIAD-93-47) Army Maintenance: Savings Possible by Stopping Unnecessary Depot Repairs (GAO/NSIAD-92-176) Biological Warfare: Role of Salk Institute in Army’s Research Program (GAO/NSIAD-92-33) Chemical and Biological Defense: U.S.
Forces Are Not Adequately Equipped to Detect All Threats (GAO/NSIAD-93-2) Chemical Weapons Destruction: Issues Affecting Program Cost, Schedule, and Performance (GAO/NSIAD-93-50) Commercial Practices: DOD Could Save Millions by Reducing Maintenance and Repair Inventories (GAO/NSIAD-93-155) Defense Inventory: Applying Commercial Purchasing Practices Should Help Reduce Supply Costs (GAO/NSIAD-93-112) Defense Inventory: Defense Logistics Agency’s Materiel Returns Program (GAO/NSIAD-93-124) Defense Inventory: Depot Packing and Shipping Procedures (GAO/NSIAD-93-3) Defense Inventory: DOD Actions Needed to Ensure Benefits From Supply Depot Consolidation Efforts (GAO/NSIAD-92-136) Defense Inventory: Growth in Air Force and Navy Unrequired Aircraft Parts (GAO/NSIAD-90-100) Defense Inventory: Growth in Ship and Submarine Parts (GAO/NSIAD-90-111) Defense Inventory: More Accurate Reporting Categories Are Needed (GAO/NSIAD-93-31) Defense Transportation: Defense Logistics Agency’s Regional Freight Consolidation Centers (GAO/NSIAD-93-169) Defense Transportation: Ineffective Oversight Contributes to Freight Losses (GAO/NSIAD-92-96) DOD Food Inventory: Using Private Sector Practices Can Reduce Costs and Eliminate Problems (GAO/NSIAD-93-110) DOD Medical Inventory: Reductions Can Be Made Through the Use of Commercial Practices (GAO/NSIAD-92-58) DOD Special Access Programs: Administrative Due Process Not Provided When Access Is Denied or Revoked (GAO/NSIAD-93-162) Environmental Cleanup: Observations on Consistency of Reimbursements to DOD Contractors (GAO/NSIAD-93-77) Environmental Protection: Solving NASA’s Current Problems Requires Agencywide Emphasis (GAO/NSIAD-91-146) Environment, Safety, and Health: Environment and Workers Could Be Better Protected at Ohio Defense Plants (GAO/RCED-86-61) Financial Management: NASA’s Financial Reports Are Based on Unreliable Data (GAO/AFMD-93-3) Hazardous Materials: Upgrading of Underground Storage Tanks Can Be Improved to Avoid Costly Cleanups 
(GAO/NSIAD-92-117) Hazardous Waste: Management Problems Continue at Overseas Military Bases (GAO/NSIAD-91-231) Information Security: Disposition and Use of Classified Documents by Presidential Appointees (GAO/NSIAD-90-195) Management Review: Follow-Up on the Management Review of the Defense Logistics Agency (GAO/NSIAD-88-107) Military Bases: Analysis of DOD’s Recommendations and Selection Process for Closures and Realignments (GAO/NSIAD-93-173) NASA Aeronautics: Impact of Technology Transfer Activities Is Uncertain (GAO/NSIAD-93-137) NASA Procurement: Agencywide Action Needed to Improve Management of Contract Modifications (GAO/NSIAD-92-87) NASA Procurement: Proposed Changes to the Jet Propulsion Laboratory Contract (GAO/NSIAD-93-178) NASA Property: Improving Management of Government Equipment Provided to Contractors (GAO/NSIAD-93-191) National Aero-Space Plane: A Need for Program Direction and Funding Decisions (GAO/NSIAD-93-207) National Aero-Space Plane: Restructuring Future Research and Development Efforts (GAO/NSIAD-93-71) Navy Inventory: Better Controls Needed Over Planned Program Requirements (GAO/NSIAD-93-151) Nuclear Energy: Environmental Issues at DOE’s Nuclear Defense Facilities (GAO/RCED-86-192) Ozone-Depleting Chemicals: Increased Priority Needed If DOD Is to Eliminate Their Use (GAO/NSIAD-92-21) Personnel Security: Efforts by DOD and DOE to Eliminate Duplicative Background Investigations (GAO/RCED-93-23) Property Disposal: DOD Is Handling Large Amounts of Excess Property in Europe (GAO/NSIAD-93-195) Property Management: DOD Can Increase Savings by Reusing Industrial Plant Equipment (GAO/NSIAD-93-8) Security Clearances: Due Process for Denials and Revocations by Defense, Energy, and State (GAO/NSIAD-92-99) Space Programs: NASA’s Independent Cost Estimating Capability Needs Improvement (GAO/NSIAD-93-73) Space Project Testing: Uniform Policies and Added Controls Would Strengthen Testing Activities (GAO/NSIAD-91-248) Space Station: Improving NASA’s 
Planning for External Maintenance (GAO/NSIAD-92-271) Technology Development: Future Use of NASA’s Large Format Camera Is Uncertain (GAO/NSIAD-90-142) The U.S. Nuclear Triad: GAO’s Evaluation of the Strategic Modernization Program (GAO/T-PEMD-93-5) The collapse of the Soviet bloc has shifted national priorities from defense to economic concerns. Economic performance will establish America’s place in the world as it moves into the 21st century. America’s “competitiveness”—the nation’s ability to sustain a rising standard of living for its citizens in a complex world economy—will determine how successful this nation will be in the new global economy. International trade and finance policy will be important determinants of America’s success. Trade regimes, access to and development of foreign resources and markets, and competitiveness of U.S. goods and services in the integrated world marketplace are key to the long-term health of the nation’s economy. We have reviewed and reported on a number of these issues over the past year to help gauge the impact of events and the need for policy and management changes. Our information helped the Congress assess a large number of critical issues being considered, such as the North American Free Trade Agreement debate, progress on agricultural trade, implementation of the U.S.-Canada Free Trade Agreement, issues affecting investment in the petroleum sector, intellectual property rights, developments in U.S.-Chilean trade, prospects for East European energy, and the business environment in the United States, Japan, and Germany. In January 1992, we reported that federal export promotion programs lacked organizational and funding cohesiveness. We concluded that, as a result, the U.S. Government did not have reasonable assurances that its export promotion resources, which totaled $2.7 billion in fiscal year 1991, were being used most effectively to emphasize sectors, regions, and programs with the highest potential return. 
We recommended that, to correct this situation, the Secretary of Commerce, as chair of the 19-member interagency Trade Promotion Coordinating Committee, work with other member agencies and the Director of the Office of Management and Budget to (1) develop a governmentwide strategic plan for carrying out federal export promotion programs and (2) ensure that the budget requests for these programs were consistent with their relative strategic importance. (GAO/NSIAD-92-49) Our February 1992 report about the International Trade Commission (ITC) identified ambiguities in the agency’s governing statute. We found that these ambiguities had created disagreements between Chairs and Commissioners about who had ultimate responsibility for ITC’s administration and adversely affected its operations. We suggested that, to improve management, the Congress replace the ITC’s current statutory administrative override authority with decisionmaking requirements like those found in other independent agencies. Also, we suggested that the Congress clarify the statutory provisions concerning budget responsibilities. (GAO/NSIAD-92-45) Our January 1992 report on Agricultural Trade Offices showed that the Department of Agriculture was not making the best use of its resources. We recommended that the Secretary of Agriculture take a variety of actions to clearly define the role of the Agricultural Trade Offices and evaluate their effectiveness. (GAO/NSIAD-92-65) In May 1993, we recommended that the Federal Reserve require each Federal Reserve Bank to begin charging foreign banks for the costs of examining their U.S. agencies, branches, and representative offices. If the Federal Reserve continues to believe that the assessment of examination charges under the Foreign Bank Supervision Enhancement Act of 1991 creates a conflict with U.S. treaty and trade obligations, it should seek an amendment to the act.
(GAO/GGD-93-35R) Customs Service: Trade Enforcement Activities Impaired by Management Problems (GAO/GGD-92-123) Export Controls: Issues In Removing Militarily Sensitive Items From the Munitions List (GAO/NSIAD-93-67) Export Promotion: Federal Efforts to Increase Exports of Renewable Energy Technologies (GAO/GGD-93-29) Export Promotion: Federal Programs Lack Organizational and Funding Cohesiveness (GAO/NSIAD-92-49) Export Promotion: Problems in the Small Business Administration’s Programs (GAO/GGD-92-77) Federal Research: Lessons Learned From SEMATECH (GAO/RCED-92-283) Foreign Direct Investment: Assessment of Commerce’s Annual Report and Data Improvement Efforts (GAO/NSIAD-92-107) International Trade: Agricultural Trade Offices’ Role in Promoting U.S. Exports Is Unclear (GAO/NSIAD-92-65) International Trade: Changes Needed to Improve Effectiveness of the Market Promotion Program (GAO/GGD-93-125) International Trade Commission: Administrative Authority Is Ambiguous (GAO/NSIAD-92-45) International Trade: Easing Foreign Visitors’ Arrivals at U.S. Airports (GAO/NSIAD-91-6) Loan Guarantees: Export Credit Guarantee Programs’ Costs are High (GAO/GGD-93-45) North American Free Trade Agreement: Assessment of Major Issues (GAO/GGD-93-137) Nuclear Nonproliferation: Better Controls Needed Over Weapons-Related Information and Technology (GAO/RCED-89-116) Nuclear Nonproliferation: Controls Over the Commercial Sale and Export of Tritium Can Be Improved (GAO/RCED-91-90) Nuclear Nonproliferation: DOE Needs Better Controls to Identify Contractors Having Foreign Interests (GAO/RCED-91-83) Technology Transfer: Barriers Limit Royalty Sharing’s Effectiveness (GAO/RCED-93-6) Technology Transfer: Federal Efforts to Enhance the Competitiveness of Small Manufacturers (GAO/RCED-92-30) With the end of the Cold War, U.S. 
national security and foreign affairs policies and objectives have come under increased scrutiny in recognition of the changing world order and the corresponding need to reassess U.S. security interests—military, political, and economic. Federal budget constraints have further reinforced the need to reexamine foreign policy objectives and program priorities. Policy and program shifts are under way to (1) downsize U.S. military forces, (2) place greater emphasis on strengthening our economic security, (3) promote democracy around the world, (4) address worldwide arms control and proliferation concerns, and (5) restructure our foreign assistance programs. Much of our work in 1993 has concentrated on what changes are needed to redirect and better manage foreign affairs programs and priorities, as well as on providing increased oversight of program expenditures. Our comprehensive analysis of the Agency for International Development’s (AID) management of economic assistance resources and our recent testimony on the future direction of U.S. assistance have provided the new administration and the Congress with specific recommendations on what needs to be done to restructure U.S. aid. Our assessment of the proposed consolidation of Radio Free Europe/Radio Liberty and the Voice of America—provided to both administration decisionmakers and congressional legislators for their use in considering the consolidation proposal—showed that estimated cost savings may not be as high as anticipated and highlighted other constraints that need to be addressed in making any final decision. Our work on international counternarcotics programs has contributed to the current effort to reassess the U.S. government’s approach and strategy for dealing with the drug problem. We have also assessed how well the U.S. 
government has, either bilaterally or through multilateral organizations, addressed such critical issues as peacekeeping in Somalia and Cambodia and the implementation of sanctions in Serbia and Haiti. Our testimony on the U.N. peacekeeping operation in Cambodia disclosed significant difficulties in carrying out the operation and weaknesses in U.N. peacekeeping planning and support systems. More generally, our work on U.S. participation in U.N. organizations continued to focus on the need for improved management—a key U.S. government objective vis-a-vis the United Nations. Our examinations of various U.S. bilateral programs identified program management weaknesses and served as a basis for improved congressional oversight. Our review of the commercial foreign military sales program found serious problems that contributed to the program’s termination. Our testimony on Nunn-Lugar funding for arms control efforts in the former Soviet Union focused on slow funds disbursement and corresponding limited progress in achieving the legislation’s intent. Our work on the U.S.-Israeli Arrow antitactical ballistic missile program likewise pointed out problems with the U.S. government’s limited control over program-related U.S. technology and funding and provided specific recommendations to the Department of Defense (DOD) to strengthen program management. On the basis of our recommendation that DOD develop accurate baselines for the Arrow program’s cost, schedule, and technical performance and use them to assess alternatives, the Senate Appropriations Committee has directed DOD to conduct such a study and report the results to the Committee. In June 1993, we reported on the need for AID to address management problems to ensure that AID is adequately meeting its foreign economic assistance responsibilities. We made numerous recommendations dealing with specific problems and recommended that AID play a leadership role in developing a strategic direction for U.S. 
foreign economic assistance. We further recommended that the AID Administrator bring AID’s management systems into balance with the agency’s decentralized organizational structure and establish a “total work force” planning and management process. (GAO/NSIAD-93-106) We found that DOD did not have valid baseline information on the U.S.-Israel Arrow antitactical ballistic missile program necessary to assess its cost, schedule, and technical performance and to evaluate its cost-effectiveness relative to U.S. alternatives. We further found that the U.S. government had exercised only limited control over U.S. technology and funds in the program. We recommended that DOD develop accurate baselines and use them to assess the cost-effectiveness of U.S. alternatives to Arrow for meeting Israel’s ballistic missile defense needs. We further recommended that DOD ensure that no additional Arrow or related contracts were signed until a series of steps are taken to improve oversight. (GAO/NSIAD-93-254) In a series of reviews of the Department of State’s management of its overseas posts, we found that posts did not have sufficient management controls to ensure full compliance with applicable regulations and minimize their vulnerability to fraud, waste, and abuse. We recommended that posts adopt a more proactive approach in identifying opportunities for management improvement and cost reductions. (GAO/NSIAD-93-88 and GAO/NSIAD-93-190) In July 1992, we reported on the progress of the Voice of America’s (VOA) $1.2 billion program to modernize its broadcast facilities. We found that VOA’s facilities modernization program had been hampered by delays and changes in funding priorities. We recommended that the Director of the U.S. Information Agency require a fully documented cost-benefit analysis before approving further modernization project proposals. (GAO/NSIAD-92-150) We reviewed progress being made and problems being experienced by U.S. and Colombian agencies in implementing U.S. 
counternarcotics programs in Colombia. We found that the U.S. government lacked data needed to evaluate program effectiveness and that numerous obstacles and budgetary constraints had impeded program implementation. We recommended that the Director of the Office of National Drug Control Policy reevaluate U.S. counternarcotics programs in Colombia and throughout the Andean region. (GAO/NSIAD-93-158)

Agency for International Development: The Minority Shipping Program Is Constrained by Program Requirements (GAO/NSIAD-92-304)
AID Management: EEO Issues and Protected Group Underrepresentation Require Management Attention (GAO/NSIAD-93-13)
AID Management: Strategic Management Can Help AID Face Current and Future Challenges (GAO/NSIAD-92-100)
Aid to Kenya: Accountability for Economic and Military Assistance Can Be Improved (GAO/NSIAD-93-57)
Aid to Nicaragua: U.S. Assistance Supports Economic and Social Development (GAO/NSIAD-92-203)
Aid to Panama: Improving the Criminal Justice System (GAO/NSIAD-92-147)
American Samoa: Inadequate Management and Oversight Contribute to Financial Problems (GAO/NSIAD-92-64)
Arms Control: U.S. and International Efforts to Ban Biological Weapons (GAO/NSIAD-93-113)
AV-8B Program: Aircraft Sales to Foreign Government to Fund Radar Procurement (GAO/NSIAD-93-24)
Classified Information: Volume Could Be Reduced by Changing Retention Policy (GAO/NSIAD-93-127)
Defense Acquisition: U.S.-German Examinations of the MLRS Terminal Guidance Warhead Program (GAO/NSIAD-92-7)
Drug Control: Communications Network Funding and Requirements Uncertain (GAO/NSIAD-92-29)
Drug Control: Inadequate Guidance Results in Duplicate Intelligence Production Efforts (GAO/NSIAD-92-153)
Drug Control: Revised Drug Interdiction Approach Is Needed in Mexico (GAO/NSIAD-93-152)
Drugs: International Efforts to Attack a Global Problem (GAO/NSIAD-93-165)
Drug War: Drug Enforcement Administration Staffing and Reporting in Southeast Asia (GAO/NSIAD-93-82)
El Salvador: Efforts to Satisfy National Civilian Police Equipment Needs (GAO/NSIAD-93-100BR)
Export Controls: Issues In Removing Militarily Sensitive Items From the Munitions List (GAO/NSIAD-93-67)
Financial Management: Fiscal Year 1992 Audit of the Defense Cooperation Account (GAO/NSIAD-93-185)
Financial Management: Inadequate Accounting and System Project Controls at AID (GAO/AFMD-93-19)
Financial Management: Serious Deficiencies in State’s Financial Systems Require Sustained Attention (GAO/AFMD-93-9)
Food Aid: Management Improvements Are Needed to Achieve Program Objectives (GAO/NSIAD-93-168)
Foreign Assistance: Accuracy of AID Statistics on Dollars Flowing Back to the U.S. Economy Is Doubtful (GAO/NSIAD-93-196)
Foreign Assistance: AID Can Improve Its Management of Overseas Contracting (GAO/NSIAD-91-31)
Foreign Assistance: AID’s Private-Sector Assistance Program at a Crossroads (GAO/NSIAD-93-55)
Foreign Assistance: AID Strategic Direction and Continued Management Improvements Needed (GAO/NSIAD-93-106)
Foreign Assistance: Combating HIV/AIDS in Developing Countries (GAO/NSIAD-92-244)
Foreign Assistance: Improvements Needed in AID’s Oversight of Grants and Cooperative Agreements (GAO/NSIAD-93-202)
Foreign Assistance: Meeting the Training Needs of Police in New Democracies (GAO/NSIAD-93-109)
Foreign Assistance: Promising Approach to Judicial Reform in Colombia (GAO/NSIAD-92-269)
Foreign Assistance: Promoting Judicial Reform to Strengthen Democracies (GAO/NSIAD-93-149)
Foreign Assistance: U.S. Efforts to Spur Panama’s Economy Through Cash Transfers (GAO/NSIAD-93-56)
Foreign Disaster Assistance: AID Has Been Responsive but Improvements Can Be Made (GAO/NSIAD-93-21)
Foreign Economic Assistance: Better Controls Needed Over Property Accountability and Contract Close Outs (GAO/NSIAD-90-67)
Information Resources Management: Initial Steps Taken But More Improvements Needed in AID’s IRM Program (GAO/IMTEC-92-64)
Internal Controls: AID Missions Overstate Effectiveness of Controls for Host Country Contracts (GAO/NSIAD-91-116)
International Procurement: NATO Allies’ Implementation of Reciprocal Defense Agreements (GAO/NSIAD-92-126)
Military Aid: Stronger Oversight Can Improve Accountability (GAO/NSIAD-92-41)
Military Coproduction: U.S. Management of Programs Worldwide (GAO/NSIAD-89-117)
Military Sales to Israel and Egypt: DOD Needs Stronger Controls Over U.S.-Financed Procurements (GAO/NSIAD-93-184)
Multilateral Foreign Aid: U.S. Participation in the International Fund for Agricultural Development (GAO/NSIAD-93-176)
Nuclear Nonproliferation: Better Controls Needed Over Weapons-Related Information and Technology (GAO/RCED-89-116)
Nuclear Nonproliferation: Controls Over the Commercial Sale and Export of Tritium Can Be Improved (GAO/RCED-91-90)
Nuclear Nonproliferation: DOE Needs Better Controls to Identify Contractors Having Foreign Interests (GAO/RCED-91-83)
Nuclear Nonproliferation: Japan’s Shipment of Plutonium Raises Concerns About Reprocessing (GAO/RCED-93-154)
Peace Corps: Long-Needed Improvements to Volunteers’ Health Care System (GAO/NSIAD-91-213)
Radon Testing in Federal Buildings Needs Improvement and HUD’s Radon Policy Needs Strengthening (GAO/T-RCED-91-48)
Security Assistance: Observations on Post-Cold War Program Changes (GAO/NSIAD-92-248)
Security Assistance: Observations on the International Military Education and Training Program (GAO/NSIAD-90-215BR)
State Department: Management Weaknesses at the U.S. Embassies in Panama, Barbados, and Grenada (GAO/NSIAD-93-190)
State Department: Management Weaknesses at the U.S. Embassy in Mexico City, Mexico (GAO/NSIAD-93-88)
State Department: Need to Ensure Recovery of Overseas Medical Expenses (GAO/NSIAD-92-277)
The Drug War: Colombia Is Undertaking Antidrug Programs, but Impact Is Uncertain (GAO/NSIAD-93-158)
UNESCO: Status of Improvements in Management Personnel, Financial, and Budgeting Practices (GAO/NSIAD-92-172)
United Nations: U.S. Participation in Peacekeeping Operations (GAO/NSIAD-92-247)
Voice of America: Management Actions Needed to Adjust to a Changing Environment (GAO/NSIAD-92-150)
Weapons Codevelopment: U.S. National Issues in the MLRS Terminal Guidance Warhead Program (GAO/NSIAD-92-55)

During fiscal year 1992, we continued to complete assignments involving operational issues pertaining to the Persian Gulf War.
Although the war was of limited duration, it nonetheless highlighted several operational and support problems that required attention. We identified improvements needed in the areas of deployment transportation systems, medical readiness and training, and chemical and biological defense capabilities. We identified improvements needed in dealing with unanticipated hazards associated with U.S. weapons systems and munitions, such as depleted uranium and unexploded submunitions. We also identified lessons learned, applicable to the future, in the areas of airlift capabilities, joint training, and use of reserve personnel. Aside from Gulf War-related issues, our work during the past fiscal year identified improvements under way and still required to bring about better management and more effective use of computer simulation technology to enhance military training and to make improvements in other areas of training, such as the use of troop schools and training involving National Guard combat units. We identified numerous opportunities to provide improved and more cost-effective logistical support to military operations, such as (1) improving Navy management of backorders, (2) improving shipyard labor estimates, (3) reassessing war reserve requirements, (4) having National Guard units use the Army’s supply system for direct supply operations and reducing the National Guard’s inventory investment, (5) focusing on the systemic causes of problem parts, and (6) examining whether additional land prepositioning of equipment could reduce afloat prepositioning requirements. We also identified continuing equipment shortages facing Army reserve support units and the need to make overcoming these shortages a higher priority for reserve units facing deployment time frames comparable to those of active duty contingency forces.
Our review of the operation and maintenance (O&M) budget requests for fiscal year 1994 identified potential reductions and rescissions of about $6.7 billion to the services’ and the Department of Defense’s (DOD) activities. These reductions and rescissions were due to excessive unobligated funds remaining from prior years’ O&M appropriations, changed circumstances since the time the budget requests were submitted, and other factors. In reviewing personnel issues, we (1) provided important objective data to the Congress for use in its deliberations on the issue of homosexuals in the military and (2) identified force-shaping and skill imbalance problems confronting DOD as it downsizes its civilian work force, issues germane to congressional decisionmaking in authorizing the use of financial separation incentives. We also identified significant differences in the cost of producing military officers among the three types of commissioning programs and the management improvements needed to produce a more cost-effective mix of officer personnel. Prepositioning of equipment and support items is intended to enable the United States to respond to distant contingencies more rapidly than if they had to be deployed from the United States. The Gulf War highlighted the importance of afloat prepositioning; its costs, however, are about four times those of land prepositioning. While the U.S. Transportation Command continues to investigate additional afloat prepositioning, we have recommended that the Secretary of Defense determine whether additional land prepositioning could reduce afloat prepositioning requirements. (GAO/NSIAD-93-39) The Congress continues to have a high degree of interest in the afloat prepositioning issue, particularly as it affects the potential for Army-Marine Corps joint operations.
DOD has been delayed in responding to recommendations in our report on Officer Commissioning Programs about the need for management improvements to produce a more cost-effective personnel mix from the commissioning sources. (GAO/NSIAD-93-37) Part of the delay is apparently attributable to delays in the appointment process for relevant assistant secretaries. The Congress has supported increased use of simulation technology but has had some concerns about DOD’s management in this area. A focal point for improved interservice coordination and management initiatives has been the Defense Modeling and Simulation Office, created in 1991. But action to provide permanent staffing for this office is still incomplete, with only one permanent position currently provided. (GAO/NSIAD-93-122)

Aerial Refueling Initiative: Cross-Service Analysis Needed To Determine Best Approach (GAO/NSIAD-93-186)
Air Force Supply: Improvements Needed in Management of Air Mobility Command’s Forward Supply System (GAO/NSIAD-93-10)
Army Force Structure: Need to Determine Changed Threat’s Impact on Reserve Training Divisions (GAO/NSIAD-92-182)
Army Housing: Overcharges and Inefficient Use of On-Base Lodging Divert Training Funds (GAO/NSIAD-90-241)
Army Housing: Overcharges for On-Base Lodging Have Not Been Repaid (GAO/NSIAD-93-188)
Army Inventory: Current Operating and War Reserve Requirements Can Be Reduced (GAO/NSIAD-93-119)
Army Inventory: Divisions’ Authorized Levels of Demand-Based Items Can Be Reduced (GAO/NSIAD-93-9)
Army Logistics: Better Approach Needed to Identify Systemic Causes of Problem Parts (GAO/NSIAD-93-86)
Army Maintenance: Strategy Needed to Integrate Military and Civilian Personnel Into Wartime Plans (GAO/NSIAD-93-95)
Army Reserve Components: Accurate and Complete Data Are Needed to Monitor Full-Time Support Program (GAO/NSIAD-92-70)
Army Training: Commanders Lack Guidance and Training for Effective Use of Simulations (GAO/NSIAD-93-211)
Army Training: Expenditures for Troop Schools Have Not Been Justified (GAO/NSIAD-93-172)
Army Training: Long-standing Control Problems Hinder the CAPSTONE Program (GAO/NSIAD-92-261)
Contract Maintenance: Improvements Needed in Air Force Management of Interim Contractor Support (GAO/NSIAD-92-233)
Defense Relocation Assistance: Service Information Systems Operating, but Not Yet Interactive (GAO/NSIAD-92-186)
Depot Maintenance: Requirement to Update Maintenance Analyses Should Be Modified (GAO/NSIAD-93-163)
Desert Shield/Storm: Air Mobility Command’s Achievements and Lessons for the Future (GAO/NSIAD-93-40)
Disaster Assistance: DOD’s Support for Hurricanes Andrew and Iniki and Typhoon Omar (GAO/NSIAD-93-180)
DOD Commercial Transportation: Savings Possible Through Better Audit and Negotiation of Rates (GAO/NSIAD-92-61)
DOD Service Academies: Improved Cost and Performance Monitoring Needed (GAO/NSIAD-91-79)
Household Goods: Competition Among Commercial Movers Serving DOD Can Be Improved (GAO/NSIAD-90-50)
Military Afloat Prepositioning: Wartime Use and Issues for the Future (GAO/NSIAD-93-39)
Military Aircraft: Policies on Government Officials’ Use of 89th Military Airlift Wing Aircraft (GAO/NSIAD-92-133)
Military Downsizing: Balancing Accessions and Losses Is Key to Shaping the Future Force (GAO/NSIAD-93-241)
Military Health Care: Recovery of Medical Costs From Liable Third Parties Can Be Improved (GAO/NSIAD-90-49)
National Guard: Using the Army’s Supply System Will Reduce the Guard’s Inventory Investment (GAO/NSIAD-93-25)
Naval Academy: Gender and Racial Disparities (GAO/NSIAD-93-54)
Naval Air Operations: Interservice Cooperation Needs Direction From Top (GAO/NSIAD-93-141)
Naval Reserves: The Frigate Trainer Program Should Be Canceled (GAO/NSIAD-92-114)
Navy Housing: Transient Lodging Operations Need Effective Management Control (GAO/NSIAD-92-27)
Navy Maintenance: Improved Labor Estimates Can Reduce Shipyard Costs (GAO/NSIAD-93-199)
Navy Supply: Improved Backorder Management Will Reduce Material Costs (GAO/NSIAD-93-131)
Officer Commissioning Programs: More Oversight and Coordination Needed (GAO/NSIAD-93-37)
Operation Desert Shield: Problems in Deploying by Rail Need Attention (GAO/NSIAD-93-30)
Operation Desert Storm: Army Not Adequately Prepared to Deal With Depleted Uranium Contamination (GAO/NSIAD-93-90)
Operation Desert Storm: Full Army Medical Capability Not Achieved (GAO/NSIAD-92-175)
Operation Desert Storm: Improvements Required in the Navy’s Wartime Medical Care Program (GAO/NSIAD-93-189)
Operation Desert Storm: Limits on the Role and Performance of B-52 Bombers in Conventional Conflicts (GAO/NSIAD-93-138)
Simulation Training: Management Framework Improved, but Challenges Remain (GAO/NSIAD-93-122)
Strategic Sealift: Part of the National Defense Reserve Fleet Is No Longer Needed (GAO/NSIAD-92-3)

The Department of Defense (DOD) faces many critical issues as the nation moves toward building and supporting a smaller yet effective fighting force that can respond to post-Cold War national security needs. Our reports and testimonies have been used extensively by the Congress in its oversight of force structure, active-reserve mix, forward presence, roles and missions, and intelligence issues. We aided the Congress in evaluating DOD’s downsizing plans by analyzing the assumptions underlying force structure decisions and assessing alternative ways to accomplish missions. For example, our reviews of DOD’s Mobility Requirements Study showed that other assumptions were as compelling as those used in the study and that changing the assumptions might reduce requirements for sealift capability. We also reported to the Congress that the Air Force’s plans to build force projection composite wings in the United States were based on limited analysis of cost and other factors and would have significant limitations operating in peacetime and wartime.
Our report on Navy carrier battle groups increased congressional awareness of less costly options to satisfy many of the carrier battle groups’ traditional roles without unreasonably increasing the risk that U.S. national security would be threatened. For example, we found that a less expensive carrier force could be achieved by relying more heavily on increasingly capable surface combatants and amphibious assault ships to provide forward presence. Our reviews of Army force structure issues contributed to congressional debate of several key aspects of the Army’s force reduction plans. We reported that better coordination between the reserve components in the selection of units for inactivation might have provided more assurance that the readiness of the Army’s total force was maximized and that individual states were not disproportionately affected by the combined National Guard and Army Reserve inactivations. Our reviews of Desert Storm activities increased awareness of support force shortages, problems encountered in mobilizing reserve forces for the war, and the effect of the limitations of the President’s Selected Reserve callup authority. Our testimony and report on problems encountered in managing the withdrawal of personnel and equipment from Europe increased congressional awareness that the pace of the withdrawal was exceeding the Army’s ability to manage it. We assisted in congressional efforts to reduce unnecessary overlap and duplication among the services through our work examining service roles and functions. In August 1993, we reported that the depth of analysis of many functions included in the Chairman of the Joint Chiefs of Staff’s review of roles and functions was insufficient for proposing significant reductions in overlapping functions.
We also identified several opportunities for additional reductions and consolidations that would enhance the economy and the efficiency of DOD operations, such as reassessing Army and Marine Corps requirements for light forces, consolidating certain test capabilities, and reassessing the composition of nuclear forces. With the emergence of DOD as a major player in the war on drugs, we have assessed how well it uses its intelligence assets to support the drug law enforcement community. This work has been used extensively by the House Committee on Government Operations. The Chairs of the major authorizing and appropriating committees and the various special and select committees on drug issues have also requested our assistance. We have issued several reports on this issue and have made recommendations affecting the Defense Intelligence Agency, the Office of National Drug Control Policy, the drug law enforcement community, and the Director of the Central Intelligence Agency. Regarding Army Reserve Forces, we recommended that the Secretary of the Army, in refining the Army’s reserve force reduction plans, formalize coordination procedures among the National Guard Bureau; the Office of the Chief, Army Reserve; the Forces Command; and U.S. Army Reserve Command officials and better document the reasons why specific units are selected for inactivation or reduction. (GAO/NSIAD-93-145) Regarding Navy tactical aviation force structure, we recommended that the Secretary of Defense direct the Secretary of the Navy to revalidate the need for another strike and fighter aircraft by demonstrating that there was or would be a military threat that the Navy could not meet with its present weapons systems and force structure.
(GAO/NSIAD-93-144)

Air Force Organization: More Assessment Needed Before Implementing Force Protection Composite Wings (GAO/NSIAD-93-44)
Army Force Structure: Future Reserve Roles Shaped by New Strategy, Base Force Mandates, and Gulf War (GAO/NSIAD-93-80)
Army Reserve Forces: Applying Features of Other Countries’ Reserves Could Provide Benefits (GAO/NSIAD-91-239)
Army Reserve Forces: Process for Identifying Units for Inactivation Could Be Improved (GAO/NSIAD-93-145)
DOD’s Mobility Requirements: Alternative Assumptions Could Affect Recommended Acquisition Plan (GAO/NSIAD-93-103)
Financial Systems: Weaknesses Impede Initiatives to Reduce Air Force Operations and Support Costs (GAO/NSIAD-93-70)
Mine Warfare: Consolidation at Ingleside Has Not Been Justified (GAO/NSIAD-93-147)
Naval Aviation: Consider All Alternatives Before Proceeding With the F/A-18E/F (GAO/NSIAD-93-144)
Navy Acquisition: AN/BSY-1 Combat System Operational Evaluation (GAO/NSIAD-93-81)
Navy Carrier Battle Groups: The Structure and Affordability of the Future Force (GAO/NSIAD-93-74)
Navy Maintenance: Public/Private Competition for F-14 Aircraft Maintenance (GAO/NSIAD-92-143)
Navy Torpedo Program: MK-48 ADCAP Propulsion System Upgrade Not Needed (GAO/NSIAD-92-191)
Operation Desert Storm: Army Had Difficulty Providing Adequate Active and Reserve Support Forces (GAO/NSIAD-92-67)
Overseas Allowances: Improvements Needed in Administration (GAO/NSIAD-90-46)
POW/MIA Affairs: Issues Related to the Identification of Human Remains From the Vietnam Conflict (GAO/NSIAD-93-7)
Reserve Forces: Aspects of the Army’s Equipping Strategy Hamper Reserve Readiness (GAO/NSIAD-93-11)
Roles and Functions: Assessment of the Chairman of the Joint Chiefs of Staff Report (GAO/NSIAD-93-200)
U.S. Corps of Engineers: Better Management Needed for Mobilization Support (GAO/NSIAD-93-116)

The United States Armed Forces’ technologically advanced weapons systems have been seen as a major factor in our military success in the Persian Gulf War. This technological superiority has always been emphasized as the strength we needed to meet the numerically superior Warsaw Pact. The need to maintain this edge, even after the collapse of the Soviet Union and the dissolution of the Warsaw Pact, has been a constant theme of the services and the Department of Defense (DOD). But the long-range cost of acquiring the advanced systems that the services see as needed is staggering, especially in a period of shrinking defense budgets. DOD is proposing to spend several hundred billion dollars through the 1990s on the development and procurement of weapon systems and related items. In response to the planned system developments and congressional interest in reducing unneeded expenditures, we have continued several bodies of work, evaluating the requirements for, and the economy, efficiency, and effectiveness of, planned acquisitions of major air; sea; ground; space; missile; electronic warfare; and command, control, communications, and intelligence systems. In addition, to assist the Appropriations and Armed Services Committees, we have conducted specific budget analyses that identified over $2 billion in potential reductions in the fiscal year 1994 procurement and research, development, test, and evaluation budgets. We supported congressional deliberations on the B-2 and identified over $100 million in program savings that could be achieved by the Air Force because of changes to the production schedule. On the basis of our suggestions, the Air Force took actions to achieve these savings.
Our work on the C-17 was instrumental in the congressional decision to limit fiscal year 1993 production to six rather than the requested eight aircraft because of cost increases, schedule slippage, and technical problems identified during testing. That decision resulted in a $658 million reduction in the fiscal year 1993 C-17 production budget. Our work on the Airborne Self-Protection Jammer resulted in the Congress directing that the Navy not obligate funds for procurement of the system. The Navy, in turn, terminated the program—a projected saving of $975.2 million over the next 10 years. Our work on Military Satellite Communications pointed out opportunities for saving billions of dollars by taking advantage of modern technology and by using alternative satellite architectures based on common bus designs—standard satellite platforms capable of carrying various payloads. We provided information to the Congress on several ground systems, including the heavy equipment transporter and the family of medium tactical vehicles. In both cases, we noted problems that could significantly affect the success of the programs. As a result of our work, some members of the Congress have called on the Army to cease production of the heavy equipment transporter until the problems we found have been corrected. We have also continued our work on missiles, providing information and recommending improvements in the management of the acquisition of a number of systems, including the Tri-Service Standoff Attack Missile and the Advanced Cruise Missile. In February 1993, we reported on the Apache manufacturer’s oversight of its subcontractors, noting that ineffective oversight by the manufacturer and the Army contributed to past problems with parts for the Apache. We recommended that the Secretary of Defense not commit billions of dollars in production funds for the Longbow Apache program until the oversight of subcontractors was adequate to ensure satisfactory performance. 
(GAO/NSIAD-93-108) In July 1993, we reported that DOD could save billions by adopting alternative satellite architectures based on common bus design and by inserting modern technology into its existing communication satellite systems. We recommended that the Secretary of Defense (1) not make any decisions regarding replenishment of existing military satellite communications systems until a coordinated process was established to insert modern technology into the architecture and (2) reassess the dual common bus alternative as a means of inserting modern technology to preclude continuation of customized satellites. (GAO/NSIAD-93-216) In July 1993, we reported that the heavy equipment transporter had not shown that it could adequately accomplish its mission or that it was suitable for fielding. We recommended that the Secretary of Defense require the Army to stop conditionally accepting heavy equipment transporter tractors and trailers until the heavy equipment transporter showed that it could meet its intended mission and reliability and maintainability requirements. (GAO/NSIAD-93-228) In August 1993, we reported on the Army’s procurement of medium tactical trucks—the 2.5-ton and 5-ton payload classes. The original procurement plan called for replacing over 120,000 trucks over 15 years at a cost of over $17 billion. We found that the current plan to stretch out the procurement over 30 years was not practical. We recommended that the Secretary of the Army reassess the cost-effectiveness of the 30-year acquisition strategy and reconsider alternatives, especially the M939A2 alternative. We also recommended that the Secretary of the Army not proceed to full-rate production of the medium tactical vehicles until the reassessment was complete. 
ADP Procurement: Prompt Navy Action Can Reduce Risks to SNAP III Implementation (GAO/IMTEC-92-69)
Air Force ADP: Lax Contract Oversight Led to Waste and Reduced Competition (GAO/IMTEC-93-3)
Antiarmor Weapons Acquisitions: Assessments Needed to Support Continued Need and Long-Term Affordability (GAO/NSIAD-93-49)
Apache Helicopter: Test Results for 30-Millimeter Weapon System Inconclusive (GAO/NSIAD-93-134)
Army Acquisition: Effective Subcontractor Oversight Needed Before Longbow Apache Production (GAO/NSIAD-93-108)
Army Acquisition: Medium Truck Program Is Not Practical and Needs Reassessment (GAO/NSIAD-93-232)
Army Acquisition: More Testing Needed to Solve Heavy Equipment Transporter System Problems (GAO/NSIAD-93-228)
Army Acquisition: Palletized Load System Acquisition Quantity Overstated (GAO/NSIAD-92-163)
Ballistic Missile Defense: Information on Directed Energy Programs for Fiscal Years 1985 Through 1993 (GAO/NSIAD-93-182)
Comanche Helicopter: Program Needs Reassessment Due to Increased Unit Cost and Other Factors (GAO/NSIAD-92-204)
Communications Acquisition: Army Still Needs to Determine Battlefield Communications Capability (GAO/NSIAD-93-33)
Defense Support Program: Ground Station Upgrades Not Based on Validated Requirements (GAO/NSIAD-93-148)
Desert Shield/Storm: Air Mobility Command’s Achievements and Lessons for the Future (GAO/NSIAD-93-40)
Drug Control: Communications Network Funding and Requirements Uncertain (GAO/NSIAD-92-29)
Drug Control: Heavy Investment in Military Surveillance Is Not Paying Off (GAO/NSIAD-93-220)
Drug Control: Inadequate Guidance Results in Duplicate Intelligence Production Efforts (GAO/NSIAD-92-153)
Electronic Warfare: Inadequate Testing Led to Faulty SLQ-32s on Ships (GAO/NSIAD-93-272)
Electronic Warfare: Laser Warning System Production Should Be Limited (GAO/NSIAD-93-14)
Embedded Computer Systems: Software Development Problems Delay the Army’s Fire Direction Data Manager (GAO/IMTEC-92-32)
ICBM Modernization: Minuteman III Guidance Replacement Program Has Not Been Adequately Justified (GAO/NSIAD-93-181)
Javelin Antitank Weapon: Quantity and Identification Capability Need to Be Reassessed (GAO/NSIAD-92-330)
Military Communications: Joint Tactical Information Distribution System Issues (GAO/NSIAD-93-16)
Military Satellite Communications: Milstar Program Issues and Cost-Saving Opportunities (GAO/NSIAD-92-121)
Military Satellite Communications: Opportunity to Save Billions of Dollars (GAO/NSIAD-93-216)
National Aero-Space Plane: A Need for Program Direction and Funding Decisions (GAO/NSIAD-93-207)
Navy Shipbuilding: Allegations of Mischarging at Bath Iron Works (GAO/NSIAD-91-85)
Software Tools: Defense Is Not Ready to Implement I-CASE Departmentwide (GAO/IMTEC-93-27)
Tactical Intelligence: Joint STARS Needs Current Cost and Operational Effectiveness Analysis (GAO/NSIAD-93-117)
Undersea Surveillance: Navy Continues to Build Ships Designed for Soviet Threat (GAO/NSIAD-93-53)

Dramatic events on both the international and national scene—the end of the Cold War, worldwide concerns about nuclear proliferation, new mandates to clean up and restore rather than continuing to build a nuclear weapons complex, passage of the Energy Policy Act of 1992, and the advent of an administration focused on improving our economy through the application of science and technology initiatives—have shifted the mission, altered the landscape, and posed new and significant challenges for the Department of Energy (DOE). Our work in recent years has played a major role in both exposing and proposing actions to deal with these and related issues and has led to savings of over $5 billion. Beginning in the late 1980s, we issued a number of reports questioning various aspects of DOE’s new production reactor program. Initially, we focused on the lack of necessary information related to the cost, the benefits, and the schedule of building two multibillion dollar tritium production reactors.
By early 1991, however, the need for additional nuclear weapons—and consequently tritium—was decreasing and we questioned the need and the strategy for building a new tritium production reactor. In September 1992, the Secretary of Energy informed the Congress that, because of nuclear weapons stockpile levels and the resulting effects on the need for tritium, the new production reactor program would be deferred and reactor design and construction efforts would be brought to a prompt and orderly closure. By not building a new reactor to produce tritium, DOE will save at least $3.5 billion. We have continued to review the broad range of technical and management issues critical to the successful and safe cleanup of the nuclear weapons complex. Our efforts resulted in recommendations to (1) improve the $50 billion vitrification program at the Hanford Site, (2) implement cost-effective improvements in well drilling and ground water monitoring that could lead to over $100 million in savings, and (3) improve DOE’s Environmental Restoration Management Contracting and cleanup contractor indemnification approaches. We also continued to address the issue of improving the safety and the health of cleanup workers with a comprehensive report on DOE’s Site Resident Program—its principal method for conducting independent oversight—and a review of the Tiger Team Program. Finally, we identified $320 million in potential savings in DOE’s fiscal year 1994 cleanup program budget. We have consistently made recommendations to improve DOE’s management and oversight of its contractors, which operate DOE’s nuclear weapons complex and other facilities at a cost of over $16 billion annually. The recommendations have ranged from identifying better ways to administer the contractor performance award fee process to reducing the number of nonstandard contract clauses, which have allowed contractors to virtually ignore DOE direction. 
We have also recommended improved controls over funds obligated to major contractors, resulting in savings in fiscal year 1993 of $334 million. Additional savings totaling over $1.3 billion are expected by the end of fiscal year 1994. We have also recommended ways DOE can improve its information resource management (IRM), and our ongoing management review is highlighting additional reform measures to improve overall contractor accountability. We have recommended that the Congress consider improvements that would lead to more comprehensive national energy planning. We have also provided the Congress with a comprehensive analysis of the factors that affect crude oil and petroleum product prices during shock and nonshock periods, and we pointed out other nations’ policies for reducing oil and coal use in their industrial and transportation sectors. In addition, our work was instrumental in the development of several key provisions of the Energy Policy Act of 1992, including (1) added requirements for octane labeling for alternative motor fuels, along with additional state authority to enforce such requirements, and (2) federal certification of training programs for mechanics converting gasoline vehicles to operate on alternative fuels. On the broader science and technology front, our work on the Small Business Innovation Research program—in which 11 agencies, including DOE, participate—was instrumental in congressional action leading to the re-authorization of the program but with more emphasis on commercialization in the private sector. In addition, our work pointed up abuses in charges for indirect, or overhead, costs by universities for federally funded research activities and led to a number of significant changes in the process for negotiating indirect cost rates, with other changes still under consideration by the Office of Management and Budget (OMB). 
We also alerted the Congress about potential conflicts of interest at universities and other organizations carrying out federal research activities, and we pointed out that the Superconducting Supercollider was over budget, behind schedule, and would cost in excess of $11 billion. In April 1993, we reported that now is an opportune time for the Congress to consider strengthening national energy policy planning. DOE has long struggled to develop useful energy plans, and new requirements to integrate important issues, such as global warming policy options, make it important to give DOE sufficient time to prepare comprehensive plans for the Congress to consider. DOE officials agree that more time is needed to prepare comprehensive plans and are working with the Congress to change reporting time frames. (GAO/RCED-93-29) In August 1993, we reported that DOE still had significant and pervasive management problems and was failing to properly manage and maintain its vast nuclear weapons complex, which is beset with major environmental contamination. Much of DOE’s problem stems from loose controls over its contractors, which have long dominated departmental activities while eluding effective governmental oversight. We believe that DOE’s leadership should establish a long-term strategy for correcting its management problems. Although DOE officials agree that their problems are significant and are making changes in DOE’s organizational structure and contracting practices, fundamental problems in communication and work force skills will make reform difficult to achieve in the short run.
(GAO/RCED-93-72) In September 1992, we recommended that DOE, among other things, more closely link its IRM planning with strategic mission planning; give managers more authority to plan for their information needs on a departmentwide basis; and identify IRM activities as a material internal control weakness, under the Federal Managers’ Financial Integrity Act, until IRM resources are applied efficiently and in accordance with laws, regulations, and policies. Although DOE agreed to implement all of our recommendations and has declared its IRM program deficiencies a material control weakness, it has not fully implemented the remaining recommendations. (GAO/IMTEC-92-10) In August 1991, we made recommendations to DOE and OMB that could reduce the cost of DOE’s support services contracts by $50 million to $100 million annually. DOE has taken some recommended actions, such as conducting cost comparisons before contracting for support services. In June 1993, however, the agencies agreed that efforts to achieve cost-effectiveness in DOE’s support services contracts should be included as part of Secretary O’Leary’s DOE-wide Contract Reform Task Force. Results of the task force are not expected until December 1993. Consequently, additional time is needed to determine if DOE’s and OMB’s actions will be adequate. (GAO/RCED-91-186) In November 1992, we recommended that DOE take actions to ensure that its contractors are adequately analyzing, correcting, and validating security deficiencies at the facilities they operate for DOE. DOE has begun taking action to implement the recommendations; additional time, however, will be required to determine if those actions will be adequate. (GAO/RCED-93-10) In March 1993, we recommended that DOE postpone construction of the Hanford vitrification plant and renegotiate the Tri-Party Agreement with the state and the Environmental Protection Agency to establish a more realistic program schedule.
In addition, we recommended that DOE develop life-cycle cost estimates for the program and report these to the Congress. DOE agreed with these recommendations and has deferred construction for 6 months to allow time to consider changes to the Tri-Party Agreement. In addition, DOE is developing life-cycle cost estimates for the program. (GAO/RCED-93-99) In May 1993, we recommended that DOE take a number of steps to strengthen a key headquarters program that maintains representatives at DOE sites to independently monitor field office and contractor performance in protecting workers’ safety and health. DOE has accepted our recommendations and has begun to implement them. But, additional time is needed to determine whether DOE’s actions will be adequate. (GAO/RCED-93-85) In July 1993, we recommended that DOE develop a consistent policy for indemnifying its contractors against liabilities that could arise from the cleanup of the nation’s weapons complex and that the policy should reflect existing statutory requirements for indemnifying Superfund cleanup contractors. The report was released September 8, 1993, and DOE has not yet had a chance to respond to our recommendations. (GAO/RCED-93-167) In May 1993, we reported that DOE’s investigation of Yucca Mountain, Nevada, as a potential site for disposal of highly radioactive nuclear waste will, at its present pace, take at least 5 to 13 years longer than planned and cost more than DOE has estimated. Furthermore, DOE’s initiatives to mitigate the probable delay, including establishing a revolving fund to ensure higher annual project funding, did not adequately address the disconnection between disposal program policy and funding priorities. We endorsed the Nuclear Waste Technical Review Board’s earlier call for an independent review of the program. 
In conjunction with this, we recommended that the Congress defer consideration of changing the method for funding the disposal program until, among other things, an independent review of the program has been completed and appropriate legislative, policy, and/or programmatic changes to the program have been implemented. (GAO/RCED-93-124) In June 1993, we suggested that, in reviewing agreements for nuclear cooperation, the Congress may wish to consider the impact of an agreement’s terms on the Congress’s opportunities for oversight and on U.S. nonproliferation goals. As of late 1993, no proposed agreements for nuclear cooperation had come before the Congress for review. (GAO/RCED-93-154) In April 1993, because of the inconsistent way in which the Nuclear Regulatory Commission evaluates its nuclear materials licensing programs in achieving the goal of adequately protecting the public from radiation, we recommended that the Chair establish (1) common performance indicators in order to obtain comparable information to evaluate the effectiveness of both the agreement-state and Nuclear Regulatory Commission (NRC)-regulated state programs in meeting NRC’s goal and (2) specific criteria and procedures for suspending or revoking an agreement-state program using the new performance indicators. Once NRC ensures the effectiveness of the NRC-regulated state program using the new performance indicators, it should take aggressive action to suspend or revoke any agreement-state program that the performance indicators show to be incompatible or inadequate. NRC agreed with our recommendations and is implementing them. (GAO/RCED-93-90) In April 1993, we recommended that NRC establish common performance indicators for evaluating how the NRC-regulated and agreement-state programs regulate nuclear materials and develop specific criteria and procedures for suspending or revoking an agreement-state program.
NRC has accepted our recommendations and intends to implement a new evaluation program beginning in 1994 using performance indicators and to develop specific written procedures for terminating agreements with the agreement states. But, additional time will be needed to determine whether NRC’s actions will be adequate. (GAO/RCED-93-90) In December 1992, we recommended that federal agencies operating royalty-sharing programs under the Federal Technology Transfer Act of 1986 take various actions to motivate scientists at federal laboratories to seek patents and licenses for their inventions. These include establishing an annual threshold of income to adequately reward federal inventors for their work and making more effective use of the royalties returned to federal laboratories. While some agencies have responded with positive steps, others have not, and congressional actions may be needed to further encourage these steps. (GAO/RCED-93-6) In August 1992, we reported that the government had been charged millions of dollars for unallowable, questionable, or improperly allocated indirect costs for federally sponsored research at universities. We recommended that OMB designate a single agency to negotiate indirect cost rates and examine ways to more directly involve the university community in evaluating alternative methods for reimbursing universities for indirect costs. OMB has taken various actions to tighten the process and plans to further study other issues, including the issue of designating one agency to negotiate rates with universities. (GAO/RCED-92-203) In May 1992, we reported that growing interactions between universities and businesses increased the potential for conflicts of interest or other relationships that might give a business an unfair advantage in commercializing the results of federally funded research.
We recommended that the Department of Health and Human Services (HHS) and the National Science Foundation (NSF) require that their grantees have procedures in place to manage potential conflicts. The National Institutes of Health Revitalization Act of 1993 (Public Law 103-43) requires that HHS promulgate a financial conflict-of-interest regulation by December 1993. Meanwhile, OMB is seeking to establish uniform requirements for HHS, NSF, and all other federal agencies. (GAO/RCED-92-104) In our September 1992 “wrap-up” report, we concluded that SEMATECH had shown that a government-industry research and development consortium could help improve a U.S. industry’s technological position. In the report, we identified eight lessons learned from the SEMATECH experience that the Congress may want to consider in authorizing support for future consortia. Among them, the Congress may wish to consider specific criteria for determining when federal support for SEMATECH—and any other future consortia—should appropriately be terminated.
(GAO/RCED-92-283) Biotechnology: Managing the Risks of Field Testing Genetically Engineered Organisms (GAO/RCED-88-27) Cleanup Technology: Better Management for DOE’s Technology Development Program (GAO/RCED-92-145) Department of Energy: Better Information Resources Management Needed to Accomplish Missions (GAO/IMTEC-92-53) Department of Energy: Cleaning Up Inactive Facilities Will Be Difficult (GAO/RCED-93-149) Department of Energy: Management Problems Require a Long-Term Commitment to Change (GAO/RCED-93-72) DOE Management: Better Planning Needed to Correct Records Management Problems (GAO/RCED-92-88) DOE Management: Consistent Cleanup Indemnification Policy Is Needed (GAO/RCED-93-167) DOE Management: Impediments to Environmental Restoration Management Contracting (GAO/RCED-92-244) Electricity Regulation: Factors Affecting the Processing of Electric Power Applications (GAO/RCED-93-168) Energy Conservation: Appliance Standards and Labeling Programs Can Be Improved (GAO/RCED-93-102) Energy Information: Department of Energy Security Program Needs Effective Information Systems (GAO/IMTEC-92-10) Energy Management: Contract Audit Problems Create the Potential for Fraud, Waste, and Abuse (GAO/RCED-92-41) Energy Management: DOE Has Improved Oversight of Its Work for Others Program (GAO/RCED-93-111) Energy Management: Systems Contracting Weaknesses Continue (GAO/RCED-93-143) Energy Management: Using DOE Employees Can Reduce Costs for Some Support Services (GAO/RCED-91-186) Energy Policy: Changes Needed to Make National Energy Planning More Useful (GAO/RCED-93-29) Environment, Safety, and Health: Environment and Workers Could Be Better Protected at Ohio Defense Plants (GAO/RCED-86-61) Federal Research: Lessons Learned From SEMATECH (GAO/RCED-92-283) Federal Research: System for Reimbursing Universities’ Indirect Costs Should Be Reevaluated (GAO/RCED-92-203) Fossil Fuels: Improvements Needed in DOE’s Clean Coal Technology Program (GAO/RCED-92-17) Fossil Fuels: Ways to 
Strengthen Controls Over Clean Coal Technology Project Costs (GAO/RCED-93-104) Hydroelectric Dams: Issues Surrounding Columbia River Basin Juvenile Fish Bypasses (GAO/RCED-90-180) Natural Gas: Factors Affecting Approval Times for Construction of Natural Gas Pipelines (GAO/RCED-92-100) Natural Gas: FERC’s Compliance and Enforcement Programs Could Be Further Enhanced (GAO/RCED-93-122) Nuclear Energy: Consequences of Explosion of Hanford’s Single-Shell Tanks Are Understated (GAO/RCED-91-34) Nuclear Energy: Environmental Issues at DOE’s Nuclear Defense Facilities (GAO/RCED-86-192) Nuclear Health and Safety: More Attention to Health and Safety Needed at Pantex (GAO/RCED-91-103) Nuclear Health and Safety: More Can Be Done to Better Control Environmental Restoration Costs (GAO/RCED-92-71) Nuclear Materials: Nuclear Arsenal Reductions Allow Consideration of Tritium Production Options (GAO/RCED-93-189) Nuclear Nonproliferation: Better Controls Needed Over Weapons-Related Information and Technology (GAO/RCED-89-116) Nuclear Nonproliferation: Controls Over the Commercial Sale and Export of Tritium Can Be Improved (GAO/RCED-91-90) Nuclear Nonproliferation: DOE Needs Better Controls to Identify Contractors Having Foreign Interests (GAO/RCED-91-83) Nuclear Nonproliferation: Japan’s Shipment of Plutonium Raises Concerns About Reprocessing (GAO/RCED-93-154) Nuclear Regulation: Better Criteria and Data Would Help Ensure Safety of Nuclear Materials (GAO/RCED-93-90) Nuclear Regulation: NRC’s Decommissioning Procedures and Criteria Need to Be Strengthened (GAO/RCED-89-119) Nuclear Safety: Potential Security Weaknesses at Los Alamos and Other DOE Facilities (GAO/RCED-91-12) Nuclear Science: Consideration of Accelerator Production of Tritium Requires R&D (GAO/RCED-92-154) Nuclear Science: Monitoring Improved, but More Planning Needed for DOE Test and Research Reactors (GAO/RCED-92-123) Nuclear Security: DOE Needs a More Accurate and Efficient Security Clearance Program (GAO/RCED-88-28) 
Nuclear Security: DOE’s Progress on Reducing Its Security Clearance Work Load (GAO/RCED-93-183) Nuclear Security: Improving Correction of Security Deficiencies at DOE’s Weapons Facilities (GAO/RCED-93-10) Nuclear Security: Safeguards and Security Planning at DOE Facilities Incomplete (GAO/RCED-93-14) Nuclear Waste: Changes Needed in DOE User-Fee Assessments to Avoid Funding Shortfall (GAO/RCED-90-65) Nuclear Waste: Development of Casks for Transporting Spent Fuel Needs Modification (GAO/RCED-92-56) Nuclear Waste: DOE’s Management of Single-Shell Tanks at Hanford, Washington (GAO/RCED-89-157) Nuclear Waste: DOE’s Repository Site Investigations, a Long and Difficult Task (GAO/RCED-92-73) Nuclear Waste: Hanford Tank Waste Program Needs Cost, Schedule, and Management Changes (GAO/RCED-93-99) Nuclear Waste: Hanford’s Well-Drilling Costs Can Be Reduced (GAO/RCED-93-71) Nuclear Waste: Improvements Needed in Monitoring Contaminants in Hanford Soils (GAO/RCED-92-149) Nuclear Waste: Operation of Monitored Retrievable Storage Facility Is Unlikely by 1998 (GAO/RCED-91-194) Nuclear Waste: Pretreatment Modifications at DOE Hanford’s B Plant Should Be Stopped (GAO/RCED-91-165) Nuclear Waste: Questionable Uses of Program Funds at Lawrence Livermore Laboratory (GAO/RCED-92-157) Nuclear Waste: Status of Actions to Improve DOE User-Fee Assessments (GAO/RCED-92-165) Nuclear Waste: Yucca Mountain Project Behind Schedule and Facing Major Scientific Uncertainties (GAO/RCED-93-124) Personnel Security: Efforts by DOD and DOE to Eliminate Duplicative Background Investigations (GAO/RCED-93-23) Safety and Health: Key Independent Oversight Program at DOE Needs Strengthening (GAO/RCED-93-85) Technology Transfer: Barriers Limit Royalty Sharing’s Effectiveness (GAO/RCED-93-6) Technology Transfer: Federal Efforts to Enhance the Competitiveness of Small Manufacturers (GAO/RCED-92-30) Trans-Alaska Pipeline: Regulators Have Not Ensured That Government Requirements Are Being Met (GAO/RCED-91-89) 
University Research: Controlling Inappropriate Access to Federally Funded Research Results (GAO/RCED-92-104) Over the last 20 years, our nation has spent about $1 trillion to comply with environmental protection mandates. By the end of this decade, our nation will spend almost $160 billion annually to address environmental protection. Despite this investment and the resulting improvements, many environmental expectations remain unfulfilled. As a result of the federal budget deficit, the Environmental Protection Agency’s (EPA) operating budget has remained relatively flat. Yet, the agency must manage both strengthened requirements under existing regulations and new mandates imposed on the regulated community. Despite increasing pressures to be accountable for solutions, EPA’s funding situation has caused it to rely heavily on states and local governments to implement and monitor environmental regulations. But, these governments also face budget constraints that limit their ability to meet these responsibilities. For example, states have not sufficiently funded enforcement of asbestos removal and disposal, and localities have not been able to finance the billions of dollars needed to invest in the infrastructure to meet existing and new drinking water and wastewater treatment requirements. Our work has been at the forefront of highlighting our nation’s recurring environmental problems and recommending ways in which the Congress and EPA can effectively address those concerns. In an attempt to seek a more realistic balance between environmental expectations and available resources, we have continued to recommend that EPA develop priorities among these competing demands based on risk to human health and the environment. We recommended that, in testing both chemicals and pesticides, EPA give priority to regulating the high-risk compounds first.
We also recommended that EPA work with the Congress to adequately fund those programs that it has acknowledged as high risks to the public. To highlight possible new sources of funding, we reported that carefully designed pollution taxes—levied on pollution emissions or on harmful products or substances—in some cases could be an alternative to regulation that would reduce pollution and raise revenues to invest in further environmental protection. Our work highlighted numerous delays in state implementation of the Clean Air Act (CAA). For example, we noted that EPA had delayed issuing final rules for the operating permit program, which in turn has delayed state programs. (GAO/RCED-93-59) Limited EPA and state resources have also hampered progress. We recommended that EPA help states obtain the necessary legislative authority to assess fees to cover costs and present its own realistic long-term resource estimates to the Congress. We further pointed out that states were late in submitting their CAA implementation plans to EPA and that the agency had been slow in approving them. (GAO/RCED-93-113) We recommended that EPA address these delays and delegate plan approval authority to EPA regional administrators. Evidence now suggests that inhaling asbestos fibers can cause cancer and other serious respiratory illnesses. Many older buildings that contain asbestos materials need to be renovated, and disturbing the materials may release asbestos fibers into the environment, posing a health threat. EPA is required to develop regulations to ensure worker safety when removing or disposing of asbestos materials. About 39 percent, or 14,000, of federally-owned buildings contain “friable” asbestos, which poses a greater danger because simple hand pressure causes it to crumble, releasing fibers. We reported that the agencies occupying these buildings had not implemented asbestos maintenance programs.
These agencies, along with the Occupational Safety and Health Administration (OSHA), are beginning to respond to our recommendations that OSHA clarify its requirements to safely manage asbestos removal and that agencies ensure compliance. (GAO/RCED-93-9) Generally, EPA delegated overall responsibility to monitor and enforce compliance with asbestos regulations to state and local agencies. Once again, states reported that they did not have the resources, and EPA regional offices had inconsistently implemented the program. (GAO/RCED-92-83) EPA has begun to respond to our recommendation that, to better use its resources, it define through performance standards the minimum level of monitoring states must perform while still protecting the public’s health. Several of our reports highlighted serious shortcomings in EPA’s drinking water protection program. In 1990, we reported that many water systems, particularly smaller systems, violated monitoring requirements and drinking water standards and that enforcement actions had done little to stop violators or increase compliance. In 1992, we reported funding shortages at the federal, state, and water system levels for correcting problems, as well as a disparity, noted by EPA’s Science Advisory Board, between EPA’s low funding and the high health and environmental risks from drinking water contamination. (GAO/RCED-92-184) This year, we found that the pollution and funding problems persist and reported that EPA was not prepared to take over where state programs fail. We recommended that EPA work with the Congress to better fund the program at a level equal to the high risk to human health and the environment. (GAO/RCED-93-96 and GAO/RCED-93-144) We continue to highlight the significant gap between available resources and the nation’s wastewater treatment needs. We reviewed the state revolving funds program and found that it will not come close to helping the states meet their needs.
(GAO/RCED-92-35) We recommended that EPA improve the financial skills of its regional staff to better help states, use models to better estimate program needs, including nonpoint-source pollution, and work with the Congress to build a strategy to close the resource gap. We found continuing concerns about EPA’s ability to assess and safely control chemical substances. (GAO/RCED-91-136) EPA had no clear objectives and direction for the testing program, reviewed relatively few of the potentially dangerous chemicals, and had issued few regulations since program inception in 1977. We recommended that EPA develop criteria and a methodology for the review process and ensure that EPA regulates chemicals on the basis of the level of risk they pose. While EPA has established a guide to decide whether a chemical presents a significant risk, the agency has not followed up with criteria and a methodology to determine if this level of risk warrants regulation. In 1972, EPA was given the formidable task of reassessing all older pesticides on the basis of current scientific standards, including those pertaining to cancer, reproductive disorders, and birth defects. Disappointed with EPA’s progress, the Congress, in 1988, provided funds for additional resources and mandated that the reassessment be essentially completed by 1998. Our review of the program disclosed that, despite increased progress, EPA may not complete this process until 2006. More importantly, EPA has not assessed the high-risk pesticides—those used in high volumes or on food—and they will continue to be used until a risk assessment is completed. We recommended that EPA concentrate on reregistering these pesticides first. (GAO/RCED-93-94) EPA, in the past, assumed low exposure to lawn care pesticides but now is concerned they may persist in the environment, resulting in higher exposure.
We recommended that before reregistering these pesticides, EPA fully explore the health effects from such exposure and give priority to developing the test and assessment guidelines to do so. (GAO/RCED-93-80) Both EPA and the Food and Drug Administration (FDA) face challenges in controlling the entry of illegal pesticides into the U.S. environment through imported foods. For example, Mexico accounts for nearly one-half of all the fresh and frozen fruits and vegetables that the United States currently imports, and this would increase under an approved North American Free Trade Agreement. Since Mexico does not have a system to test for pesticide residues in food and allows the use of some pesticides that the United States does not, we recommended that EPA and FDA work with Mexican officials to resolve these differences and deal with new changes to pesticide regulations. (GAO/RCED-92-140) Our work also showed that about one-third of pesticide-tainted shipments of imported food ended up on grocery shelves. We reported similar findings in 1979 and again in 1986. We recommended that FDA take stronger prevention actions, including targeting repeat offenders for penalties, applying more stringent controls over suspect shipments, and using its program resources more effectively. We recommended that, to better achieve this, the Congress authorize the agency to pursue civil administrative penalties against violators. (GAO/RCED-92-205) EPA’s 1992 data showed that owners and operators of hazardous waste treatment, storage, and disposal facilities had begun cleaning up only 5 percent of the more than 3,400 sites that were potentially threatening human health and the environment and that EPA had scarce resources to oversee the cleanup. EPA began a new cleanup approach called stabilization that more quickly mitigates the threats from waste facilities.
We recommended that the agency improve its data management system to measure its progress in stabilizing facilities, recover its oversight costs from site owners and operators, and update the program’s long-term cost and schedule estimates for the Congress. (GAO/RCED-93-15) The Congress, in 1976, tried to encourage new markets for recovered materials by directing federal agencies to purchase items composed of such materials. The agencies have made little progress. EPA has been slow to define which recovered products agencies should purchase and which purchasing practices they should follow. The Department of Commerce has done little to promote commercial use of proven recovery technology. Until recently, program leaders have not encouraged agencies to buy more recovered products. Consequently, we recommended that (1) the Congress clarify the authority agencies have to give products containing recovered materials price preferences while avoiding unreasonable prices, (2) EPA complete its strategy for developing procurement guidelines, (3) Commerce work to stimulate the demand for recovered materials, and (4) the Office of Management and Budget ensure that agencies set goals and make progress and incorporate program requirements into other governmentwide procurement policies. (GAO/RCED-93-58) Under Superfund, parties that contaminate sites either clean up the sites or reimburse the government for its cleanup costs. We found that EPA had a low record of cost recovery; was not measuring the success of its negotiations with responsible parties; and was not documenting reasons for important negotiation decisions, such as less than full settlements. (GAO/RCED-91-144 and GAO/HRD-93-10) Although EPA has proposed some regulatory changes to increase recoveries, to date the agency has not implemented our recommendations. 
We recommended that, to increase cost recovery, the Congress permit EPA to recover greater amounts of interest and require it to measure how well it performed with settlements. Although property and casualty insurance companies are concerned that their potentially high liability for cleanup costs could render them insolvent, we found they were not complying with Securities and Exchange Commission (SEC) requirements to disclose “material” environmental liabilities to investors. (GAO/RCED-93-108) We recommended the SEC Chair revise agency guidance to specify that insurance companies routinely disclose in their annual reports (1) the number and type of environmental claims and (2) an estimated range or minimum amount of associated claims costs and expenses. The federal government now owns many abandoned hazardous waste sites and faces significant cleanup funding liabilities. We reported that EPA and other federal agencies had not met statutory deadlines to determine whether federally-owned hazardous waste sites were so badly contaminated that agencies should conduct the cleanup under EPA’s Superfund program. (GAO/RCED-93-119) The backlog of sites not evaluated is growing and could take EPA a decade to assess, increasing risks and costs. EPA has not yet implemented our recommendation that it develop a plan to complete the evaluation of federal facilities. In reviewing cleanup at Department of Defense (DOD) waste sites, we recommended that DOD more quickly test for leaks in its underground storage tanks, upgrade existing tanks, permanently close tanks that were out of service for more than a year, and determine if they had caused contamination. (GAO/NSIAD-92-117) As a result, DOD has begun to reevaluate its guidance on storage tank maintenance. We also found problems with DOD’s management of contractor costs for hazardous waste cleanups. 
DOD does not routinely obtain and review the contractors’ cleanup cost projections, and it has paid contractors profits as part of their reimbursement for cleanup costs. We questioned some of DOD’s practices in directly paying contractors. These are independent DOD contractors that are responsible for the site contamination and are required by EPA and the states to clean up the sites. We recommended that DOD review its contractor reimbursement practices and implement revised guidance to correct concerns. (GAO/NSIAD-93-77) EPA does not apply its policy of penalizing a company that significantly violates environmental laws and regulations by an amount equal to the benefits the company would realize by not complying with these controls. Reasons include different enforcement philosophies, budgetary pressures to settle cases quickly, and concerns about jeopardizing local businesses. We recommended a number of actions that EPA could take to better oversee state and regional penalty practices and to better ensure accountability for following the agency’s penalty policies. (GAO/RCED-91-166) Subsequently, we made recommendations to correct insufficient or inconsistent program controls and the low priority placed on data quality assurance, which prevented EPA and the states from effectively ensuring that (1) dischargers reported accurate compliance monitoring data and (2) the states identified all facilities subject to regulation.
(GAO/RCED-93-21)

Air Pollution: Difficulties in Implementing a National Air Permit Program (GAO/RCED-93-59)
Air Pollution: Impact of White House Entities on Two Clean Air Rules (GAO/RCED-93-24)
Air Pollution: State Planning Requirements Will Continue to Challenge EPA and the States (GAO/RCED-93-113)
Air Pollution: Unresolved Issues May Hamper Success of EPA’s Proposed Emissions Program (GAO/RCED-92-288)
Asbestos in Federal Buildings: Federal Efforts to Protect Employees From Potential Exposure (GAO/RCED-93-9)
Asbestos Removal and Disposal: EPA Needs to Improve Compliance With Its Regulations (GAO/RCED-92-83)
Biological Warfare: Role of Salk Institute in Army’s Research Program (GAO/NSIAD-92-33)
Cleanup Technology: Better Management for DOE’s Technology Development Program (GAO/RCED-92-145)
Coast Guard: Coordination and Planning for National Oil Spill Response (GAO/RCED-91-212)
Coast Guard: Inspection Program Improvements Are Under Way to Help Detect Unsafe Tankers (GAO/RCED-92-23)
Coast Guard: Oil Spills Continue Despite Waterfront Facility Inspection Program (GAO/RCED-91-161)
Department of Energy: Cleaning Up Inactive Facilities Will Be Difficult (GAO/RCED-93-149)
Disinfectants: EPA Lacks Assurance They Work (GAO/RCED-90-139)
DOE Management: Impediments to Environmental Restoration Management Contracting (GAO/RCED-92-244)
Drinking Water: Consumers Often Not Well-informed of Potentially Serious Violations (GAO/RCED-92-135)
Drinking Water: Inadequate Regulation of Home Treatment Units Leaves Consumers at Risk (GAO/RCED-92-34)
Drinking Water: Key Quality Assurance Program Is Flawed and Underfunded (GAO/RCED-93-97)
Drinking Water Program: States Face Increased Difficulties in Meeting Basic Requirements (GAO/RCED-93-144)
Drinking Water: Projects That May Damage Sole Source Aquifers Are Not Always Identified (GAO/RCED-93-4)
Drinking Water: Safeguards Are Not Preventing Contamination From Injected Oil and Gas Wastes (GAO/RCED-89-97)
Drinking Water: Stronger Efforts Needed to Protect Areas Around Public Wells From Contamination (GAO/RCED-93-96)
Drinking Water: Widening Gap Between Needs and Available Resources Threatens Vital EPA Program (GAO/RCED-92-184)
Environmental Cleanup: Observations on Consistency of Reimbursements to DOD Contractors (GAO/NSIAD-93-77)
Environmental Enforcement: EPA Cannot Ensure the Accuracy of Self-Reported Compliance Monitoring Data (GAO/RCED-93-21)
Environmental Enforcement: EPA Needs a Better Strategy to Manage Its Cross-Media Information (GAO/IMTEC-92-14)
Environmental Enforcement: Penalties May Not Recover Economic Benefits Gained by Violators (GAO/RCED-91-166)
Environmental Liability: Property and Casualty Insurer Disclosure of Environmental Liabilities (GAO/RCED-93-108)
Environmental Protection Agency: Plans in Limbo for Consolidated Headquarters Space (GAO/GGD-93-84)
Environmental Protection Agency: Protecting Human Health and the Environment Through Improved Management (GAO/RCED-88-101)
Environmental Protection: EPA’s Plans to Improve Longstanding Information Resources Management Problems (GAO/AIMD-93-8)
Environmental Protection: Meeting Public Expectations With Limited Resources (GAO/RCED-91-97)
Environmental Protection: Solving NASA’s Current Problems Requires Agencywide Emphasis (GAO/NSIAD-91-146)
Environment, Safety, and Health: Environment and Workers Could Be Better Protected at Ohio Defense Plants (GAO/RCED-86-61)
EPA’s Superfund TAG Program: Grants Benefit Citizens But Administrative Barriers Remain (GAO/T-RCED-93-1)
Financial Audit: EPA’s Financial Statements for Fiscal Years 1988 and 1987 (GAO/AFMD-90-20)
Fossil Fuels: Improvements Needed in DOE’s Clean Coal Technology Program (GAO/RCED-92-17)
Fossil Fuels: Ways to Strengthen Controls Over Clean Coal Technology Project Costs (GAO/RCED-93-104)
General Services Administration: Efforts to Communicate About Asbestos Abatement Not Always Effective (GAO/GGD-92-28)
Guidelines Needed for EPA’s Tolerance Assessments of Pesticide Residues in Food (GAO/T-RCED-89-35)
Hazardous Materials: Upgrading of Underground Storage Tanks Can Be Improved to Avoid Costly Cleanups (GAO/NSIAD-92-117)
Hazardous Waste: Data Management Problems Delay EPA’s Assessment of Minimization Efforts (GAO/RCED-91-131)
Hazardous Waste Exports: Data Quality and Collection Problems Weaken EPA Enforcement Activities (GAO/PEMD-93-24)
Hazardous Waste: Impediments Delay Timely Closing and Cleanup of Facilities (GAO/RCED-92-84)
Hazardous Waste: Limited Progress in Closing and Cleaning Up Contaminated Facilities (GAO/RCED-91-79)
Hazardous Waste: Management Problems Continue at Overseas Military Bases (GAO/NSIAD-91-231)
Hazardous Waste: Much Work Remains to Accelerate Facility Cleanups (GAO/RCED-93-15)
Hazardous Waste: New Approach Needed to Manage the Resource Conservation and Recovery Act (GAO/RCED-88-115)
Hazardous Waste: U.S. and Mexican Management of Hazardous Waste From Maquiladoras Hampered by Lack of Information (GAO/T-RCED-92-22)
Improvements Needed in the Environmental Protection Agency’s Testing Programs for Radon Measurement Companies (GAO/T-RCED-90-54)
Indoor Air Pollution: Federal Efforts Are Not Effectively Addressing a Growing Problem (GAO/RCED-92-8)
International Environment: Strengthening the Implementation of Environmental Agreements (GAO/RCED-92-188)
Lawn Care Pesticides: Reregistration Falls Further Behind and Exposure Effects Are Uncertain (GAO/RCED-93-80)
Medical Waste Regulation: Health and Environmental Risks Need to Be Fully Assessed (GAO/RCED-90-86)
Natural Resources Restoration: Use of Exxon Valdez Oil Spill Settlement Funds (GAO/RCED-93-206BR)
Nonagricultural Pesticides: Risks and Regulation (GAO/RCED-86-97)
Nonhazardous Waste: Environmental Safeguards for Industrial Facilities Need to Be Developed (GAO/RCED-90-92)
Nuclear Energy: Environmental Issues at DOE’s Nuclear Defense Facilities (GAO/RCED-86-192)
Nuclear Health and Safety: More Can Be Done to Better Control Environmental Restoration Costs (GAO/RCED-92-71)
Nuclear Waste: Changes Needed in DOE User-Fee Assessments to Avoid Funding Shortfall (GAO/RCED-90-65)
Nuclear Waste: Development of Casks for Transporting Spent Fuel Needs Modification (GAO/RCED-92-56)
Nuclear Waste: DOE’s Repository Site Investigations, a Long and Difficult Task (GAO/RCED-92-73)
Nuclear Waste: Improvements Needed in Monitoring Contaminants in Hanford Soils (GAO/RCED-92-149)
Nuclear Waste: Operation of Monitored Retrievable Storage Facility Is Unlikely by 1998 (GAO/RCED-91-194)
Nuclear Waste: Status of Actions to Improve DOE User-Fee Assessments (GAO/RCED-92-165)
Occupational Safety and Health: Penalties for Violations Are Well Below Maximum Allowable Penalties (GAO/HRD-92-48)
Ozone-Depleting Chemicals: Increased Priority Needed If DOD Is to Eliminate Their Use (GAO/NSIAD-92-21)
Pesticide Monitoring: FDA’s Automated Import Information System Is Incomplete (GAO/RCED-92-42)
Pesticides: A Comparative Study of Industrialized Nations’ Regulatory Systems (GAO/PEMD-93-17)
Pesticides: Adulterated Imported Foods Are Reaching U.S. Grocery Shelves (GAO/RCED-92-205)
Pesticides: Better Data Can Improve the Usefulness of EPA’s Benefit Assessments (GAO/RCED-92-32)
Pesticides: Comparison of U.S. and Mexican Pesticide Standards and Enforcement (GAO/RCED-92-140)
Pesticides: EPA Could Do More to Minimize Groundwater Contamination (GAO/RCED-91-75)
Pesticides: EPA’s Formidable Task To Assess and Regulate Their Risks (GAO/RCED-86-125)
Pesticides: Export of Unregistered Pesticides Is Not Adequately Monitored by EPA (GAO/RCED-89-128)
Pesticides: Food Consumption Data of Little Value to Estimate Some Exposures (GAO/RCED-91-125)
Pesticides: Information Systems Improvements Essential for EPA’s Reregistration Efforts (GAO/IMTEC-93-5)
Pesticides: Issues Concerning Pesticides Used in the Great Lakes Watershed (GAO/RCED-93-128)
Pesticides: Need To Enhance FDA’s Ability To Protect the Public From Illegal Residues (GAO/RCED-87-7)
Pesticides: Pesticide Reregistration May Not Be Completed Until 2006 (GAO/RCED-93-94)
Pollution From Pipelines: DOT Lacks Prevention Program and Information for Timely Response (GAO/RCED-91-60)
Radioactive Waste: EPA Standards Delayed by Low Priority and Coordination Problems (GAO/RCED-93-126)
Radon Testing in Federal Buildings Needs Improvement and HUD’s Radon Policy Needs Strengthening (GAO/T-RCED-91-48)
Solid Waste: Federal Program to Buy Products With Recovered Materials Proceeds Slowly (GAO/RCED-93-58)
Superfund: Backlog of Unevaluated Federal Facilities Slows Cleanup Efforts (GAO/RCED-93-119)
Superfund: Cleanups Nearing Completion Indicate Future Challenges (GAO/RCED-93-188)
Superfund: EPA Action Could Have Minimized Program Management Costs (GAO/RCED-93-136)
Superfund: EPA Cost Estimates Are Not Reliable or Timely (GAO/AFMD-92-40)
Superfund: EPA Needs to Better Focus Cleanup Technology Development (GAO/T-RCED-92-92)
Superfund: More Settlement Authority and EPA Controls Could Increase Cost Recovery (GAO/RCED-91-144)
Superfund: Problems With the Completeness and Consistency of Site Cleanup Plans (GAO/RCED-92-138)
Sustainable Agriculture: Program Management Accomplishments and Opportunities (GAO/RCED-92-233)
Toxic Chemicals: EPA’s Toxic Release Inventory Is Useful but Can Be Improved (GAO/RCED-91-121)
Toxic Substances: EPA’s Chemical Testing Program Has Not Resolved Safety Concerns (GAO/RCED-91-136)
Water Pollution: Pollutant Trading Could Reduce Compliance Costs If Uncertainties Are Resolved (GAO/RCED-92-153)
Water Pollution: Serious Problems Confront Emerging Municipal Sludge Management Program (GAO/RCED-90-57)
Water Pollution: State Revolving Funds Insufficient to Meet Wastewater Treatment Needs (GAO/RCED-92-35)
Water Pollution: Stronger Efforts Needed by EPA to Control Toxic Water Pollution (GAO/RCED-91-154)
Workplace Accommodation: EPA’s Alternative Workspace Process Requires Greater Managerial Oversight (GAO/GGD-92-53)

With the third largest civilian agency budget in the federal government, the U.S. Department of Agriculture (USDA) affects the lives of all Americans and millions of people around the world. USDA oversees a food and agriculture sector of major importance to the nation’s economy, accounting for 17 percent of the gross national product, 20 million jobs, and 10 percent of export dollars. To carry out its missions in 1992, USDA spent about $60 billion. USDA controlled assets of about $140 billion and employed or paid the salaries of about 124,000 full-time staff in about 15,000 locations worldwide.

In 1993, congressional committees made extensive use of our work and, on the basis of our recommendations, USDA has made a number of program changes resulting in more effective use of federal food and agriculture funds. One of our major efforts in 1993 was providing information and support for ongoing congressional and administration efforts to reinvent government. As a result of our management reviews of USDA, and subsequent followup, Senate and House committees held hearings on streamlining USDA and its field structure.
Our work contributed to helping the Clinton administration frame the debate for reforming, streamlining, and reinventing USDA and, more specifically, to the Secretary’s proposals to create a single farm service agency and significantly reduce the field office structure. Similarly, it formed the basis of several congressional initiatives toward this same end. Our analyses of farm program flexibility provisions adopted in the 1990 farm bill contributed to a $1 billion annual savings. Analysis of the Farmers Home Administration’s (FmHA) farm loan programs warned the Congress that billions of federal dollars were at risk if current lending practices continued. Analyses of the crop insurance and disaster assistance programs contributed to congressional debate and proposed reform legislation. Analyses of milk-pricing mechanisms, livestock markets, and safety and quality of agriculture products and food resulted in hearings, proposed legislation, and agency actions to improve the overall fairness of the markets and attention to the need to reform the fragmented food inspection system. Our analyses of milk, imported cheese, bottled water, imported Canadian meat, animal drugs, and salmonella in eggs resulted in program improvements. The Food and Drug Administration began a training program for reviewers of data used to support regulatory decisions. USDA revised its inspection of imported Canadian meat.

Our comprehensive evaluation of the federal food safety inspection system found that it was inconsistent, inefficient, and ineffective. We advised the Congress that after nearly a century of piecemeal fixes, a major restructuring of the fragmented inspection system—to a uniform, risk-based system—was needed.

The USDA of 1993 is largely the product of 1930s programs, structures, and management systems. It is an organizational behemoth of more than 110,000 employees that is ill suited to confronting today’s issues. A fundamental reevaluation and wholesale revamping is in order.
The starting point in this reinvention process is consensus agreement on USDA’s primary mission. In a series of reports on the management of USDA, we noted structural problems that, if addressed, could lead to greater efficiency, effectiveness, and cost savings. A key issue is the independence of the major component agencies of USDA, each established in response to a separate legislative mandate. Because these agencies have historically established their own information, financial, and human resources management systems to address legislative mandates, efficiencies have not been achieved departmentwide. With these systems, the Department is data rich, but information poor, making it difficult for the Secretary to make informed decisions. Furthermore, weaknesses in financial management systems and internal and accounting controls substantially increase the risk of mismanagement, fraud, waste, and abuse in Department programs.

Because each component agency has its own systems, USDA has difficulty in dealing with issues that cut across its traditional, production-based organizational structure. For example, nine USDA agencies and offices have responsibilities for biotechnology issues. Numerous conflicts among these agencies have blocked development of a single strategy in this important area.

Nowhere is the struggle to get a handle on the structural and management systems problems more apparent than in the operations of USDA’s farm service agencies’ field offices. Multiple agencies operate independent field offices all over the country, often right next door to each other. Weaknesses in information systems—different and incompatible hardware, software, telecommunications, and data bases—are important obstacles to any reform of the farm service agencies. We made a number of recommendations specific to departmental structures and strategies that would result in needed improvement.
We also recommended that farm agencies’ field structure be given a major overhaul; management of cross-cutting agricultural issues be improved; management systems—financial, informational, and human resource—be strengthened; and USDA be revitalized to meet new challenges and increased responsibilities in nutrition, international trade, and resource conservation issues.

Recent progress toward streamlining USDA’s field structure is very encouraging, and cost savings should be significant. In September 1993, Agriculture Secretary Mike Espy announced a plan to close some farm agency offices and consolidate farm agencies into a single farm agency. He also announced a plan to streamline headquarters. Further details of the Department’s reorganization plan are expected. The Under Secretary of Agriculture has said that the reorganization will take a few years.

But the fundamental problem remains: how to revitalize USDA so that it is efficient and effective into the 21st century. To achieve this goal, a thorough review of USDA’s mission and a corresponding restructuring are now necessary. The Congress and the administration need to develop a consensus on USDA’s mission. Once developed, the mission statement must be continually reassessed and updated to address changing conditions, and the Secretary will need to develop measurable departmental goals. (GAO/IMTEC-93-20, GAO/RCED-91-49, GAO/RCED-91-41, and GAO/RCED-91-9)

We believe that USDA commodity programs need to move toward a global market orientation. Our work focused on revamping the agricultural farm, export, and market development programs to help make them more competitive in the global marketplace. Our analyses of farm program flexibility provisions adopted in the 1990 farm bill contributed to $900 million in savings in 1992 and $1.25 billion a year in fiscal years 1993-95.
The Congress may wish to re-evaluate USDA’s support programs in several areas because current subsidies provide incentives to serve relatively rigid government objectives rather than to encourage the flexible development of new products, services, and markets. We have reviewed the sugar, peanut, wool and mohair, honey, and dairy programs and recommended that they be reviewed for possible policy changes.

Regarding the sugar program, we recommended that the Congress consider legislation that would move the sugar industry toward a more open market. As part of this transition, the market price for sugar should be lowered. The Congress should gradually lower the loan rate for sugar and direct USDA to adjust import quotas accordingly.

On the peanut program, economic studies and our analysis show that the peanut program adds $314 million to $513 million each year to consumers’ costs of buying peanuts. At the same time, USDA spends tens of millions of dollars each year to run the peanut program, make mandatory payments to producers, and cover the high cost of peanut products it buys under various food assistance programs. Finally, the program, by boosting the volume of U.S. peanuts available for export, may be lowering prices paid for peanuts abroad. (GAO/RCED-93-84 and GAO/RCED-93-18)

FmHA is not adequately protecting the multibillion-dollar federal investment in farmer loan programs. In April 1992, we reported that, as of September 1990, more than two-thirds of the outstanding $19.5 billion direct loan portfolio was at risk because it was held by delinquent borrowers or borrowers whose debts had been restructured in response to past repayment difficulties. This high level of risk existed even though FmHA had forgiven $4.5 billion in direct loan debt during 1989 and 1990. By June 1992, FmHA had forgiven an additional $3.1 billion in direct loan obligations.
As these large loan losses have accumulated, FmHA has evolved into a continuous source of subsidized credit for nearly half of its borrowers. Ironically, the financial condition of some of these borrowers has actually deteriorated because of repeated loan servicing, which has increased their debt and reduced their equity.

A number of factors have contributed to FmHA’s problems. Although some of these factors—such as the decline of the agricultural economy—are beyond the immediate control of either the Congress or FmHA, two are not. First, lending officials in FmHA’s field offices often fail to follow the agency’s own standards for making loans, servicing loans, and managing property. For example, FmHA requires that a borrower’s existing debts be verified during the loan approval process. But, during the first three quarters of fiscal year 1992, FmHA internal reviews showed that debts had not been verified for 20 percent of the loans sampled. Second, certain FmHA or congressionally authorized loan-making, loan-servicing, and property management policies themselves increase the agency’s vulnerability to loss. More specifically:

- Borrowers who have defaulted on past loans are free to obtain new loans. Under a congressionally directed policy, borrowers may obtain new FmHA direct loans for operating expenses without demonstrating the ability to repay their existing debt.
- Private lenders may use guaranteed loans to refinance existing customers’ debts, thereby shifting to the federal government most of the risk of their loans to financially stressed borrowers.
- The servicing policies of the Agricultural Credit Act of 1987 run counter to principles of sound financial management. The act allows debt write-down for borrowers whose loans are restructured and debt write-off for those whose loans do not qualify for restructuring. These policies have cost taxpayers billions of dollars over the last several years and provide incentives for farmers to default intentionally on their loans in order to qualify for debt reduction.
- Legislation requiring FmHA to sell acquired properties at fixed prices, rather than to the highest bidder, and to a selected group of potential purchasers, often the previous owners, has limited FmHA’s return on these properties and increased its holding costs.

We have made numerous recommendations to the Congress and the Secretary of Agriculture that are aimed at (1) improving compliance with loan standards and (2) strengthening policies and program design for direct loans, guaranteed loans, and acquired farm property. More broadly, we have called for the Congress to clarify FmHA’s role and mission, noting that FmHA’s attempts to operate simultaneously as a fiscally prudent lender and as a temporary assistance agency have not worked. (GAO/RCED-93-28 and GAO/RCED-92-86)

The federal system to ensure the safety and the quality of the nation’s food—at a cost of $1 billion a year—is inefficient and outdated and does not adequately protect the consumer against food-borne illness. The food safety inspection system suffers from overlapping and duplicative inspections, poor coordination, inefficient allocation of resources, and outdated inspection procedures. As many as 12 different agencies administering over 35 different laws oversee food safety inspection. More specifically:

- Food establishments may be inspected by more than one federal agency because they process foods that are regulated under different federal laws or because they participate in voluntary inspection or grading service programs.
- Agencies do not always notify one another when they identify problems or, once a referral is made, do not always promptly investigate identified problems. As a result, unsanitary plants continue operations and firms market contaminated foods.
- Federal agencies responsible for food safety and quality inspections could use their resources more efficiently by basing inspection frequencies on risk.
- The federal meat and poultry inspection system follows procedures no longer appropriate for today’s food safety risks. The system relies on inspectors’ sense of sight, smell, and touch to ensure a wholesome product. But inspectors cannot see, smell, or feel microbial pathogens, which are widely regarded as the principal risk associated with meat and poultry.

We have not estimated the cost savings that would result from a revamped food safety system and the elimination of inefficient inspection practices. Instead, we have emphasized the need to modernize inspection procedures and tie resource allocation to health risks. We believe that improved effectiveness, efficiency, and uniformity could be realized by creating a single food safety agency to administer a uniform set of food safety laws based on the principle that the objective of an inspection system is to protect the public from the most serious health risks associated with food-borne hazards. (GAO/T-RCED-93-22, GAO/RCED-92-209, and GAO/RCED-92-152)

Billions of federal dollars are spent every year for rural America, but these funds are not addressing the problems of rural areas in a coherent, responsive manner. In a report on rural development, we noted several basic problems in the federal approach to rural America. First, many of the rural federal assistance programs target agriculture, which is no longer the principal economic base of most rural communities. Today, rural America depends on a broad mix of manufacturing and service industries, as well as agriculture. In 1990, only about 22 percent of the nation’s approximately 2,400 rural counties relied on agriculture as an economic base and only about 6 percent of the rural population lived on farms.
Therefore, by focusing on agriculture, the federal government may be missing opportunities for rural development in areas with potentially much greater payoff.

Second, many federal programs that could benefit rural communities do not because they require coordination of expertise and resources that are often not available in rural communities. A given problem—for example, in obtaining water and sewer funds for an industrial park—may involve multiple agencies, such as the Economic Development Administration, the Department of Housing and Urban Development, and FmHA. Each agency has its own forms and regulations. While urban areas may respond to this complexity of federal programs by dedicating human resources to each agency, rural communities do not have that luxury.

Third, federal programs do not adequately distinguish among communities of different population densities. For example, many federal programs define small communities as those with fewer than 50,000 people—that is, communities of 49,000 are considered identical to those of 1,000 for program benefits and/or mandates. The needs and the administrative capabilities of a community of 49,000, however, may be vastly different from those of a community of 1,000.

And, finally, federal programs focus on process rather than effectiveness—they tend to measure effectiveness by numbers served or dollars spent rather than by achievement of program goals. This hinders rural areas in using resources efficiently. (GAO/RCED-92-197)

Agriculture Payments: Effectiveness of Efforts to Reduce Farm Payments Has Been Limited (GAO/RCED-92-2)
Biotechnology: Managing the Risks of Field Testing Genetically Engineered Organisms (GAO/RCED-88-27)
Commodity Programs: Should Farmers Grow Income-Supported Crops on Federal Land? (GAO/RCED-92-54)
Crop Insurance Program: Nationwide Computer Acquisition Is Inappropriate at This Time (GAO/IMTEC-93-20)
Dairy Cooperatives: Role and Effects of the Capper-Volstead Antitrust Exemption (GAO/RCED-90-186)
Data Collection: Opportunities to Improve USDA’s Farm Costs and Returns Survey (GAO/RCED-92-175)
Early Intervention: Federal Investments Like WIC Can Produce Savings (GAO/HRD-92-18)
Farmers Home Administration: Billions of Dollars in Farm Loans Are at Risk (GAO/RCED-92-86)
Farmers Home Administration: Final Resolution of Farm Loan or Servicing Appeals (GAO/RCED-93-28)
FDA Regulations: Sustained Management Attention Needed to Improve Timely Issuance (GAO/HRD-92-35)
Federal Dairy Programs: Insights Into Their Past Provide Perspectives on Their Future (GAO/RCED-90-88)
Financial Audit: Forest Service’s Financial Statements for Fiscal Year 1988 (GAO/AFMD-91-18)
Food Safety and Quality: FDA Can Improve Monitoring of Imported Cheese (GAO/RCED-92-210)
Food Safety and Quality: FDA Needs Stronger Controls Over the Approval Process for New Animal Drugs (GAO/RCED-92-63)
Food Safety and Quality: FDA Strategy Needed to Address Animal Drug Residues in Milk (GAO/RCED-92-209)
Food Safety and Quality: Innovative Strategies May Be Needed to Regulate New Food Technologies (GAO/RCED-93-142)
Food Safety and Quality: Limitations of FDA’s Bottled Water Survey and Options for Better Oversight (GAO/RCED-92-87)
Food Safety and Quality: Salmonella Control Efforts Show Need for More Coordination (GAO/RCED-92-69)
Food Safety and Quality: Stronger FDA Standards and Oversight Needed for Bottled Water (GAO/RCED-91-67)
Food Safety and Quality: Uniform, Risk-based Inspection System Needed to Ensure Safe Food Supply (GAO/RCED-92-152)
Food Safety and Quality: USDA Improves Inspection Program for Canadian Meat, but Some Concerns Remain (GAO/RCED-92-250)
Food Safety: Building a Scientific, Risk-Based Meat and Poultry Inspection System (GAO/T-RCED-93-22)
Foreign Farm Workers in U.S.: Department of Labor Action Needed to Protect Florida Sugar Cane Workers (GAO/HRD-92-95)
Forest Service Needs to Improve Efforts to Protect the Government’s Financial Interests and Reduce Below-Cost Timber Sales (GAO/T-RCED-91-42)
Freedom of Information: FDA’s Program and Regulations Need Improvement (GAO/HRD-92-2)
Guidelines Needed for EPA’s Tolerance Assessments of Pesticide Residues in Food (GAO/T-RCED-89-35)
International Trade: Agricultural Trade Offices’ Role in Promoting U.S. Exports Is Unclear (GAO/NSIAD-92-65)
Milk Pricing: New Method for Setting Farm Milk Prices Needs to Be Developed (GAO/RCED-90-8)
Peanut Program: Changes Are Needed to Make the Program Responsive to Market Forces (GAO/RCED-93-18)
Pesticides: Adulterated Imported Foods Are Reaching U.S. Grocery Shelves (GAO/RCED-92-205)
Pesticides: Comparison of U.S. and Mexican Pesticide Standards and Enforcement (GAO/RCED-92-140)
Pesticides: Food Consumption Data of Little Value to Estimate Some Exposures (GAO/RCED-91-125)
Rangeland Management: BLM Efforts to Prevent Unauthorized Livestock Grazing Need Strengthening (GAO/RCED-91-17)
Rural Development Administration: Patterns of Use in the Business and Industry Loan Guarantee Program (GAO/RCED-92-197)
Social Security: Need for Better Coordination of Food Stamp Services for Social Security Clients (GAO/HRD-92-92)
Sugar Program: Changing Domestic and International Conditions Require Program Changes (GAO/RCED-93-84)
Sustainable Agriculture: Program Management Accomplishments and Opportunities (GAO/RCED-92-233)
Truck Transport: Little Is Known About Hauling Garbage and Food in the Same Vehicles (GAO/RCED-90-161)
U.S. Department of Agriculture: Farm Agencies’ Field Structure Needs Major Overhaul (GAO/RCED-91-9)
U.S. Department of Agriculture: Improving Management of Cross-Cutting Agricultural Issues (GAO/RCED-91-41)
U.S. Department of Agriculture: Interim Report on Ways to Enhance Management (GAO/RCED-90-19)
U.S. Department of Agriculture: Need for Improved Workforce Planning (GAO/RCED-90-97)
U.S. Department of Agriculture: Strengthening Management Systems to Support Secretarial Goals (GAO/RCED-91-49)

Federal housing and community development efforts focus on two related goals: (1) providing safe, affordable, and decent housing to all Americans and (2) supporting and revitalizing economically depressed communities. While our nation is generally considered to have the best-housed people in the world, a host of economic and social problems have thus far denied full attainment of our national housing goals. These problems are reflected in the widening gap between the demand for and the supply of affordable low-income housing; declining rates of homeownership, particularly among younger families; and continued problems with homelessness. Contributing to these problems is the spread of economically distressed communities and their attendant high unemployment rates and low family incomes.

To deal with these conditions, the federal government has established a broad array of programs in several federal agencies, primarily the Department of Housing and Urban Development (HUD), the Department of Veterans Affairs (VA) (through its housing and homelessness programs), the Department of Agriculture (through the Farmers Home Administration [FmHA]), the Department of Commerce (through the Economic Development Administration and the Minority Business Development Agency), the Small Business Administration (SBA), and the Federal Emergency Management Agency (FEMA) (through its disaster assistance and homelessness programs). Together, these agencies had budget authority for over $30 billion in fiscal year 1993.

We reviewed the effectiveness of federal efforts to help low- and moderate-income individuals and families purchase homes through direct, insured, and guaranteed loans.
Key agencies are the Federal Housing Administration (FHA) and the Government National Mortgage Association (GNMA), both part of HUD; VA; and FmHA. We testified on FHA’s single-family mortgage insurance fund. We found that the fund was not actuarially sound as of September 1991 and probably did not have adequate capital reserves as of October 1992, although both were required by law. We also found, however, that the fund’s actuarial position appeared to be improving. In the multifamily area, we testified on impediments to disposition of properties owned by HUD. HUD owned about 27,000 units in 1992 and had initiated foreclosure on another 42,000 units. We discussed the reasons for this growing inventory and disclosed that the federal government’s costs were the same for holding the inventory or selling it. We concluded that the current situation resulted in HUD’s being a landlord for a huge inventory of properties, a role it was never intended to play and was not adequately staffed to fulfill. Also, we reviewed VA’s Home Loan Guaranty Program, under which it partially guarantees home loans made to veterans by private sector mortgage lenders. Specifically, we estimated the costs, under different economic scenarios, to the federal government of guaranteeing VA’s home mortgage loans and compared our estimates with the administration’s estimates. We concluded that the costs to the federal government for fiscal years 1992 and 1993 would probably be about $300 million lower than the administration estimated. Consequently, program costs are overstated; VA received more in appropriations than it needs to cover these costs; and the federal budget deficit for those years has been increased unnecessarily, although federal borrowing has not been affected.
In addition, we reviewed certain activities of GNMA, which is a secondary mortgage market organization that guarantees securities issued by its approved mortgage originators and backed by pools of mortgage loans insured by HUD’s FHA or guaranteed by VA. We provided information on how GNMA has evolved to accomplish its mission, identified recent management problems experienced by GNMA in overseeing its issuers, examined GNMA’s response to these problems, and recommended that GNMA be granted greater staffing flexibility to manage its growing workload and respond to changing markets. Finally, we reported on weaknesses, and recommended improvements, in efforts by HUD, VA, and FmHA to protect children from the potential hazards of lead-based paint poisoning when they sell single-family homes to the public. (The agencies typically acquire such homes when borrowers are unable to repay home mortgages that an agency insured, guaranteed, or provided.) In response to our earlier report on ways VA could reduce foreclosure costs in its guaranteed home loan program, VA developed and tested procedures to determine the most cost-effective way of selling properties. These procedures were expected to be implemented nationwide in late 1993. Also, as a result of our earlier report on fraud in the federal multifamily housing mortgage program in New York, the U.S. Attorney has secured four indictments. One real estate developer charged in the case agreed to plead guilty to the charges and pay at least $500,000 in restitution. Our work on low-income housing covered such issues as the demographic characteristics of certain housing assistance recipients, funding problems facing one program, efforts to integrate housing assistance with supportive services, and alternative methods of developing public housing. It focused primarily on HUD programs but also dealt with a FmHA program and certain other agencies’ efforts. 
We continued our work on HUD’s section 8 rental assistance program, which provides housing subsidies that allow about 2.8 million low-income households to obtain decent and affordable housing from private owners. HUD provides these subsidies through over 40,000 contracts with state and local agencies and private owners. We provided information on the demographic characteristics of elderly and nonelderly assistance recipients, the quality of the housing units rented by elderly recipients, and the proportion of income that elderly and nonelderly recipients pay for rent. We also analyzed the estimated budget authority that will be needed to renew expiring section 8 contracts for the next 5 years and ways to even out the growth in budget authority to renew contracts. We also continued our work on self-sufficiency by reporting on HUD’s Family Self-Sufficiency Program; this program was established to promote the development of strategies to coordinate housing assistance with public and private supportive services in order to enable lower-income families to achieve economic independence and self-sufficiency. We discussed the program’s status and actions by HUD to coordinate its efforts with other federal agencies. Moreover, we analyzed and testified on two programs, controlled by public housing agencies, that are designed to develop housing for low-income households—the Low-Income Housing Tax Credit Program, which is supported by federal tax expenditures, and the Public Housing Development Program, which is supported by federal grants. We provided information on the characteristics of the tenants served and the projects developed, analyzed costs to the federal government, and described public housing authorities’ experiences with both programs. In addition, we reviewed the rural single-family housing program administered by FmHA, through which home loans are provided to rural residents who cannot afford to become homeowners through private financing. 
We found that rural counties in and around metropolitan areas received a disproportionately high share of program funds and, conversely, that remote rural areas received a disproportionately low share of program funds. Finally, we reported on the status of the 19 programs authorized by the McKinney Act to help the homeless. These programs are implemented by HUD, VA, FEMA, and other federal agencies. As a result of our earlier work on the difficulties of housing persons with mental disabilities with the elderly, the Congress—in the Housing and Community Development Act of 1992—authorized public housing agencies to provide public housing designated for only the elderly. In the same act, the Congress authorized FHA to develop and conduct risk-sharing credit enhancement demonstration programs and authorized the creation of a task force to begin developing a national data base on the performance of multifamily housing loans. Both actions are consistent with, and largely attributable to, our testimony on these subjects. In response to our work on the need for better guidance on FmHA rural rental housing, FmHA issued regulations in July 1993 and instructions in August 1993 to correct the problems we had identified. Moreover, in response to our testimony on excessive profits and program abuses in FmHA’s rural rental housing program, FmHA increased, from 3 percent to 5 percent, the amount of equity a developer must commit to financing a project. This will effectively increase, by at least $10 million, the funding available to finance such rural housing units. Finally, in response to our earlier report on single-room occupancy projects for the homeless, HUD issued regulations in 1993 that should help ensure that such projects are financially feasible. These regulations also eliminate the required use of public housing agency waiting lists for selecting residents.
Our work on community development considered three types of federal help to rebuild and strengthen communities: (1) financial and other assistance to small business; (2) economic development; and (3) disaster assistance, to aid communities and residents devastated by natural disasters. In the small business area, we reviewed three SBA efforts. We found that its minority business development program continued to be troubled. (We had previously reported in January 1992 on this program, which is designed to assist small businesses that are owned and controlled by socially and economically disadvantaged persons to develop into viable competitors in the commercial marketplace.) Also, we reported on the financial health of small business investment companies, which help small businesses by providing financing to start, maintain, and expand operations. We focused primarily on why these investment companies experienced substantial losses and had been liquidated by SBA. Finally, we examined a provision that authorized nonprofit agencies employing persons with disabilities to compete for small business set-aside contracts awarded by federal agencies. We found that the nonprofit agencies seldom sought such contracts, receiving less than 1 percent of all federal contracts set aside for small businesses during the period reviewed, and recommended that the Congress change the law to remove impediments to program participation. In the economic development area, we examined the benefits provided by small-issue industrial development bonds. These bonds, issued by state and local government authorities, are intended to help finance the creation and the expansion of manufacturing facilities. The federal government forgoes tax revenue of about $2 billion a year because the interest on such bonds is tax exempt.
We concluded that, while these bonds were being used (as required) to finance manufacturing projects, it was questionable whether they achieved other public benefits attributed to them, such as creating jobs, assisting startup companies, and providing aid to economically distressed areas. In the aftermath of Hurricane Andrew, which leveled much of south Florida in August 1992, we reported and testified on improvements needed in the nation’s response to catastrophic disasters. The response to Hurricane Andrew marked the first use of the Federal Response Plan, a cooperative agreement signed by FEMA, 25 other federal agencies, and the American Red Cross. But, the response to Hurricane Andrew revealed inadequacies in the plan. Separately, we reported on actions needed to prevent recurring funding shortfalls in the disaster relief fund, which finances disaster responses. In response to our 1991 and 1993 reports on disaster assistance, FEMA has taken a number of actions. In response to the former report, FEMA and HUD developed procedures for providing rental housing assistance vouchers to disaster victims. In response to the latter report, FEMA established a Damage Assessment Task Force to identify the capabilities needed to perform assessments and to develop a process for generating assistance requirements. FEMA also issued interim guidelines for assessing damage caused by disaster, directing that assessments be made promptly and that the level and type of assistance needed be specified. Moreover, FEMA enhanced states’ flexibility in using FEMA grant funds and established a training and exercise work group to develop new training strategies and exercise programs.
Because of the questions that surround whether industrial development bonds are achieving the public benefits attributed to them and in view of the tax revenue foregone, we suggested that the Congress either not reauthorize the provision or, as part of a reauthorization, specify requirements to better direct these bonds toward achieving public benefits that would not otherwise result. (GAO/RCED-93-106) The federal response to Hurricane Andrew revealed weaknesses in the multiagency Federal Response Plan. Improvements are needed regarding the explicit authority provided to federal agencies before areas are declared disaster areas; FEMA’s efforts to prepare state and local governments for disasters; and the reliance placed on the Department of Defense to provide food, shelter, and other items on a massive scale. (GAO/T-RCED-93-4, GAO/T-RCED-93-13, GAO/T-RCED-93-20, and GAO/RCED-93-186) Lead poisoning is one of the most common health problems for our nation’s children, with potentially significant effects on intelligence and behavior. Lead-based paint is the most widespread source of exposure to lead for children. Although lead-based paint was not permitted to be used in residential housing after 1978, such paint is still found in many units built earlier. Improvements are needed in federal efforts to protect children in two different circumstances. First, when HUD, VA, and FmHA sell single-family homes to the public (totaling about 100,000 a year), they need to do a better job of identifying and treating lead-based paint hazards. (GAO/RCED-93-38) Second, about 400,000 children live in federally assisted public housing, and about 60 percent of all public housing units were built before 1978 and may be occupied by families with children. HUD needs to provide greater protection for these children. Also, we suggested that the Congress establish a deadline for HUD and public housing agencies to abate certain lead-paint hazards.
(GAO/RCED-93-138) For its Home Loan Guaranty Program, VA (like other agencies that operate federal credit programs) estimates the subsidy cost associated with the portfolio of new loans it guarantees each year. To do so, it uses an economic model developed by the Office of Management and Budget (OMB). We found that the costs to the federal government will probably be about $300 million lower than the administration estimated and that VA, therefore, had unnecessarily increased the budget deficit. We recommended that VA and OMB work together to improve their economic model and submit revised subsidy cost estimates. (GAO/RCED-93-173)

Agency for International Development: The Minority Shipping Program Is Constrained by Program Requirements (GAO/NSIAD-92-304)
Disaster Assistance: Federal, State, and Local Responses to Natural Disasters Need Improvement (GAO/RCED-91-43)
Disaster Management: Recent Disasters Demonstrate the Need to Improve the Nation’s Response Strategy (GAO/T-RCED-93-4)
Disaster Management: Recent Disasters Demonstrate the Need to Improve the Nation’s Response Strategy (GAO/T-RCED-93-20)
Disaster Relief Fund: Actions Still Needed to Prevent Recurrence of Funding Shortfall (GAO/RCED-93-60)
Export Promotion: Problems in the Small Business Administration’s Programs (GAO/GGD-92-77)
Farmers Home Administration: Billions of Dollars in Farm Loans Are at Risk (GAO/RCED-92-86)
Government National Mortgage Association: Greater Staffing Flexibility Needed to Improve Management (GAO/RCED-93-100)
Homelessness: Access to McKinney Act Programs Improved but Better Oversight Needed (GAO/RCED-91-29)
Homelessness: Action Needed to Make Federal Surplus Property Program More Effective (GAO/RCED-91-33)
Homelessness: Federal Personal Property Donations Provide Limited Benefit to the Homeless (GAO/RCED-91-108)
Homelessness: Single Room Occupancy Program Achieves Goals, but HUD Can Increase Impact (GAO/RCED-92-215)
Homeownership: Appropriations Made to Finance VA’s Housing Program May Be Overestimated (GAO/RCED-93-173)
Housing Programs: VA Can Reduce Its Guaranteed Home Loan Foreclosure Costs (GAO/RCED-89-58)
Industrial Development Bonds: Achievement of Public Benefits Is Unclear (GAO/RCED-93-106)
Lead-Based Paint Poisoning: Children in Public Housing Are Not Adequately Protected (GAO/RCED-93-138)
Lead-Based Paint Poisoning: Children Not Fully Protected When Federal Agencies Sell Homes to Public (GAO/RCED-93-38)
Public and Assisted Housing: Linking Housing and Supportive Services to Promote Self-Sufficiency (GAO/RCED-92-142BR)
Public Housing: Housing Persons With Mental Disabilities With the Elderly (GAO/RCED-92-81)
Radon Testing in Federal Buildings Needs Improvement and HUD’s Radon Policy Needs Strengthening (GAO/T-RCED-91-48)
Rental Housing: Housing Vouchers Cost More Than Certificates but Offer Added Benefits (GAO/RCED-89-20)
Rural Development Administration: Patterns of Use in the Business and Industry Loan Guarantee Program (GAO/RCED-92-197)
Small Business: Problems Continue With SBA’s Minority Business Development Program (GAO/RCED-93-145)
Technology Transfer: Federal Efforts to Enhance the Competitiveness of Small Manufacturers (GAO/RCED-92-30)
Urban Poor: Tenant Income Misreporting Deprives Other Families of HUD-Subsidized Housing (GAO/HRD-92-60)

Natural resources on federal lands are second only to tax receipts in generating revenues for the federal government, totaling almost $7 billion in fiscal year 1992. But, fiscal year 1993 budget authorities for the three agencies primarily responsible for managing and protecting these resources—the Department of the Interior, the Department of Agriculture’s Forest Service, and the U.S. Army Corps of Engineers—were more than double the revenues generated the year before—about $16.6 billion.
Each year, the federal government acquires additional lands to conserve natural resources and expands the infrastructure of facilities constructed to provide access to or make use of the natural resources on federal lands. Yet, our work over the last several years has shown that the condition of the federal lands continues to deteriorate and that the existing infrastructure on these lands—approaching $200 billion in value—is in a growing state of disrepair. At the same time, agency staff are being asked to assume increasing responsibilities and to perform more duties. As a result, existing maintenance and reconstruction standards are being compromised and tradeoffs are being made among important yet competing work priorities. The Congress and the administration now face a difficult choice. They must find new sources of funding for the agencies responsible for managing natural resources or find ways for these agencies to operate more efficiently, or they must make further cutbacks in the agencies’ services or standards for maintaining facilities and lands. In March 1989, we recommended that the Congress eliminate the Mining Law of 1872’s patenting provision allowing valuable federal lands to pass into private ownership or, should the Congress decide not to eliminate this provision, amend the law to require that the federal government obtain fair market value for the land patented. Both the authorizing committees and the administration are considering comprehensive mining law revision. (GAO/RCED-89-72) In our October 1989 report and our September 1991 testimony on abuses of federal water subsidies, we recommended that the Congress amend reclamation law to limit federally subsidized water to leased or owned land being operated as one farm. This provision was deleted at the last moment from the 1992 omnibus water bill and has not been reintroduced in this Congress.
(GAO/RCED-90-6 and GAO/T-RCED-91-90) In April 1991 testimony, we stated that the federal government was not recovering timber sale preparation and administration expenses, resulting in below-cost timber sales, and recommended that the Forest Service do so. We also made three additional recommendations. The Forest Service is considering our recommendations in developing a below-cost policy scheduled for implementation in 1994. (GAO/T-RCED-91-42)

Abandoned Mine Reclamation: Interior May Have Approved State Shifts to Noncoal Projects Prematurely (GAO/RCED-91-162)
Bureau of Reclamation: Central Valley Project Cost Allocation Overdue and New Method Needed (GAO/RCED-92-74)
Bureau of Reclamation: Unauthorized Recreation Facilities at Two Reclamation Projects (GAO/RCED-93-115)
Coastal Barriers: Development Occurring Despite Prohibition Against Federal Assistance (GAO/RCED-92-115)
Drinking Water: Widening Gap Between Needs and Available Resources Threatens Vital EPA Program (GAO/RCED-92-184)
Endangered Species: Factors Associated With Delayed Listing Decisions (GAO/RCED-93-152)
Federal Land Management: The Mining Law of 1872 Needs Revision (GAO/RCED-89-72)
Federal Land Management: Unauthorized Activities Occurring on Hardrock Mining Claims (GAO/RCED-90-111)
Federal Lands: Improvements Needed in Managing Short-Term Concessioners (GAO/RCED-93-177)
Federal Timber Sales: Process for Appraising Timber Offered for Sale Needs to Be Improved (GAO/RCED-90-135)
Financial Management: BIA Has Made Limited Progress in Reconciling Trust Accounts and Developing a Strategic Plan (GAO/AFMD-92-38)
Forest Service: Little Assurance That Fair Market Value Fees Are Collected From Ski Areas (GAO/RCED-93-107)
Forest Service Needs to Improve Efforts to Protect the Government’s Financial Interests and Reduce Below-Cost Timber Sales (GAO/T-RCED-91-42)
Forest Service Timber Sales Program: Questionable Need for Contract Term Extensions and Status of Efforts to Reduce Costs (GAO/T-RCED-92-58)
Hydroelectric Dams: Issues Surrounding Columbia River Basin Juvenile Fish Bypasses (GAO/RCED-90-180)
Indian Programs: BIA and Indian Tribes Are Taking Action to Address Dam Safety Concerns (GAO/RCED-92-50)
Mineral Resources: Federal Helium Purity Should Be Maintained (GAO/RCED-92-44)
Mineral Resources: Interior’s Use of Oil and Gas Development Contracts (GAO/RCED-91-1)
Mineral Resources: Meeting Federal Needs for Helium (GAO/RCED-93-1)
Mineral Revenues: Progress Has Been Slow in Verifying Offshore Oil and Gas Production (GAO/RCED-90-193)
National Park Service: Scope and Cost of America’s Industrial Heritage Project Need to Be Defined (GAO/RCED-93-134)
Natural Gas Pipelines: Greater Use of Instrumented Inspection Technology Can Improve Safety (GAO/RCED-92-237)
Natural Resources Restoration: Use of Exxon Valdez Oil Spill Settlement Funds (GAO/RCED-93-206BR)
Rangeland Management: BLM Efforts to Prevent Unauthorized Livestock Grazing Need Strengthening (GAO/RCED-91-17)
Rangeland Management: BLM’s Hot Desert Grazing Program Merits Reconsideration (GAO/RCED-92-12)
Rangeland Management: BLM’s Range Improvement Project Data Base Is Incomplete and Inaccurate (GAO/RCED-93-92)
Rangeland Management: Improvements Needed in Federal Wild Horse Program (GAO/RCED-90-110)
Rangeland Management: Interior’s Monitoring Has Fallen Short of Agency Requirements (GAO/RCED-92-51)
Reclamation Law: Changes to Excess Land Sales Will Generate Millions in Federal Revenues (GAO/RCED-90-100)
Trans-Alaska Pipeline: Regulators Have Not Ensured That Government Requirements Are Being Met (GAO/RCED-91-89)
Water Resources: Corps Lacks Authority for Water Supply Contracts (GAO/RCED-91-151)
Water Resources: Federal Efforts to Monitor and Coordinate Responses to Drought (GAO/RCED-93-117)
Water Subsidies: Basic Changes Needed to Avoid Abuse of the 960-Acre Limit (GAO/RCED-90-6)
Water Subsidies: Views on Proposed Reclamation Reform Legislation (GAO/T-RCED-91-90)
Wetlands: The Corps of Engineers’ Administration of the Section 404 Program (GAO/RCED-88-110)
Wilderness Preservation: Problems in Some National Forests Should Be Addressed (GAO/RCED-89-202)
Wildlife Management: Problems Being Experienced With Current Monitoring Approach (GAO/RCED-91-123)
Wildlife Protection: Enforcement of Federal Laws Could Be Strengthened (GAO/RCED-91-44)

The U.S. transportation sector is being increasingly looked to as a key component in efforts to improve the economy; maintain and enhance U.S. competitiveness in the global marketplace; and serve the growing needs of businesses, industries, and the American public. Comprising diverse elements ranging from air, land, water, and mass transit to pipeline and marine safety; employing about one-tenth of America’s work force; and involving, in one way or another, about $1 in every $6 of the nation’s Gross Domestic Product, the transportation sector provides facilities and services and carries out activities that touch everyone’s life. Although the U.S. transportation system is the world’s finest, the transportation sector faces many challenges. Among these are reducing the number of transportation-related fatalities and injuries; restoring the obsolete and deteriorated portions of the transportation infrastructure; and relieving the increasingly congested aviation, highway, and waterway systems. In addition, transportation-related environmental effects and the need to strengthen U.S. transportation to remain competitive globally are of widespread concern. And demands are increasing for more and better public transit and rail service. At the same time, despite increased federal funding over the past several years for transportation activities, severe fiscal constraints require increased reliance on private resources and more efficient use of public resources to meet transportation needs.
As detailed below, this issue area’s work, which has included an increasing emphasis on the transportation sector’s international and intermodal aspects, has influenced the Congress and the Department of Transportation (DOT) and its agencies to take many actions to improve transportation safety and the efficiency and the effectiveness of transportation policies and programs. Our work over the past several years relating to highway, vehicle, and driver safety has contributed significantly to the DOT and congressional actions that helped the nation achieve in 1992 the lowest highway death toll in 31 years. Fewer people—39,235—died in highway crashes in 1992 than in any year since 1961. This decline is especially striking because, compared with 1961, the U.S. population was 38 percent higher in 1992, more than twice as many vehicles were on the road, and Americans drove almost three times as many miles annually. Among the issues addressed in our past work that contributed to the decline in highway fatalities were minimum drinking age laws for drivers under 21; the need for more stringent laws on seat belt and motorcycle helmet use and improved enforcement of these laws; the need for passive restraints in light trucks and vans; and improvements in monitoring and regulating commercial vehicles, commercial drivers, and motor vehicles. Our work also influenced congressional decisions and agency actions related to railroad and pipeline safety. For example, regarding railroad hours of service, our work showed that the Congress did not need to change the law setting 12 hours as the maximum number of hours that engineers may legally work. We did, however, point out that Amtrak needed to improve its training for employees who inspect and maintain rail equipment.
Regarding pipeline safety, our conclusion that the potential for pipeline incidents (that is, ruptures and leakages) in the nation’s aging natural gas pipelines could be reduced by using instrumented internal inspection technology (smart pigs) was instrumental in the enactment of legislation requiring the Secretary of Transportation to issue regulations prescribing the circumstances under which safety inspections of pipelines must be conducted with instrumented devices. In other safety-related actions based on our work, DOT expanded its regulation of hazardous materials to include truck, rail, and air shipments of about 520 marine pollutants when packaged in bulk containers, and the Federal Highway Administration (FHWA) developed an action plan to improve the timeliness of its compliance reviews for motor carriers that were previously rated less than satisfactory during safety reviews. The Federal Aviation Administration (FAA) implemented several of our recommendations for improving aviation safety. It (1) completed all required air taxi inspections during fiscal year 1992 and issued guidance to inspectors on the surveillance required for financially distressed airlines; (2) established a tracking system for individuals within or returning to the airline industry who contributed materially to emergency revocations of carrier operating certificates; (3) issued written guidance to improve administration of FAA’s airline self-audit and safety-violation-reporting programs and provided additional training to keep inspectors abreast of current policies related to the two programs; and (4) to help ensure successful deployment of the Traffic Alert/Collision Avoidance System, agreed to fully verify and validate all future significant modifications, effectively involve users and other interested parties in testing modifications, and address all users’ concerns. 
Regarding international aviation safety issues, we reported on FAA actions needed to harmonize domestic and international aircraft design standards. The export of transport aircraft is the largest positive influence on the U.S. balance of trade. We found that both domestic and foreign aircraft manufacturers could save millions if FAA and European authorities eliminated duplicative tests and analyses as well as differences in their requirements. Boeing and McDonnell Douglas officials confirmed our findings on the eventual cost savings that could result from standardizing aircraft design standards. In addition, we examined FAA’s initiative to ensure that foreign governments comply with international safety standards, and we reviewed FAA’s oversight of U.S.-registered aircraft operated overseas and the actions needed when such aircraft are returned to domestic operation through lease or sales agreements. In prior years, our air traffic control modernization reviews focused on systemic problems in FAA’s process of budgeting for and acquiring new systems. Over the past year, our focus shifted to reviewing mission needs analyses that FAA conducts as the starting point for modernization projects and FAA’s effectiveness in considering alternative systems to allow aircraft to conduct precision approaches to airports. In line with our recommendations, FAA’s new acquisition policies emphasize the need for a thorough mission analysis based on quantitative data as the first step in acquiring new air traffic control systems. This emphasis will reduce the risk of FAA’s acquiring new systems that are not the most appropriate and cost-effective solutions to its problems. FAA also included goals in a draft of its air traffic control modernization plan in accordance with our recommendation. As a result, the Congress, the executive branch, and users of the air traffic control system will be better able to gauge FAA’s true progress in the modernization program. 
To improve financial, budgetary, and management activities, DOT and its agencies took a number of actions in line with our recommendations. The Coast Guard established formal training requirements for managers of its major systems acquisition projects. It also prescribed a process for approval of units’ morale, welfare, and recreation budgets; established a deadline for budget approval; and required that a morale, welfare, and recreation user survey be administered every 3 years, with the results being used in developing and operating morale, welfare, and recreation activities. FAA restructured its Facilities and Equipment budget, which funds air traffic control modernization, to more closely link budgets for modernization projects with their actual progress, thus giving decisionmakers in the executive branch and the Congress better information for their budget decisions. In addition, for its safety indicator program, FAA convened a task force of users to help develop the indicators, established a detailed implementation plan, and developed a data validation and standardization process. In other actions, the Maritime Administration improved its management controls over repossessed vessels; the Federal Transit Administration (FTA) issued clear guidance on intercity bus activities that are eligible for rural transit grants; and DOT improved the timing and content of its written statements on GAO’s recommendations. Furthermore, DOT completed departmentwide implementation of its Departmental Accounting and Financial Information System (DAFIS), eliminating all systems that duplicated DAFIS’s fund control features. It also completed its 5-year Chief Financial Officer and Information Resource Management Plans, which document DOT’s strategy for integrating DAFIS with other systems and its strategy and objectives for addressing shortcomings with DAFIS. 
Also in line with our work on financial and budgetary matters, the Congress extended a 2.5-cent portion of the gasoline tax due to expire September 30, 1995, and redirected the revenues from that portion of gasoline tax from the General Fund to the Highway Trust Fund. This action will provide sufficient revenues, according to current projections, to cover the solvency problem that we had projected would occur in the Highway Trust Fund. In addition, to enable a more accurate assessment of the Highway Trust Fund’s financial status and to provide an early warning of potential shortfalls in the Fund because of lower fuel tax revenues, the Department of the Treasury commenced quarterly, rather than annual, reporting of pertinent data on the Fund. On the basis of our analyses of highway demonstration projects, the House Transportation Appropriations Subcommittee proposed rescinding about $64 million for projects that (1) were complete but for which authorizations were unspent, (2) were authorized in 1987 but for which no funds had been obligated, and (3) were a low state priority and for which only a small percentage of authorized funding had been obligated. Our analyses were also cited by the National Performance Review in discussing its recommendation that funding for all highway demonstration projects be rescinded. Our work resulted in over $200 million in budgetary and other savings in the last year or so. In direct response to our analyses of FAA’s fiscal year 1993 budget request, the House and Senate Appropriations Committees reduced funding for 19 projects under FAA’s Facilities and Equipment request by a total of $183.7 million. Many of the reductions were based on project delays that removed the need for funding in fiscal year 1993. 
Following our recommendations that community housing shortages be documented, that all alternatives to meeting the shortages be analyzed, and that the need for planned housing projects be reevaluated every 2 years until funding becomes available, the Coast Guard improved its procedures for acquiring housing and terminated one planned project estimated to cost $4.3 million. In addition, after we reported delays in the Coast Guard’s meeting Marine Safety Network System project milestones and remaining uncertainties that could substantially affect the system’s cost and implementation schedule, the Senate Appropriations Committee decided not to fund the agency’s 1993 budget request of $12.8 million for Marine Safety Network-related activities. After DOT stepped up its investigations as a result of our finding that airlines had not always complied with consumer protection rules on disclosure of contractual terms, nine major airlines were assessed financial penalties totaling about $389,000 for violations of consumer notice regulations. DOT expected that this action would provide a strong incentive for future compliance. On the basis of our finding that federal law did not prohibit owners from abandoning vessels in the nation’s waterways, the Congress included such a prohibition in the Abandoned Barge Act of 1992. As a result, both the risk of pollutants’ being spilled from abandoned vessels and the amount of federal funds spent for cleaning up the spills and barges should be reduced. Our review of the use of traditional transportation control measures, such as mass transit, ridesharing, and synchronization of traffic lights, to improve mobility and reduce congestion showed that, while such measures have had a modest or an incremental impact on reducing hydrocarbon and carbon monoxide emissions, further research on their effectiveness could enhance the prospects for implementing them. 
We also found a strong consensus that market-based measures, such as imposition of regional gasoline taxes and motor vehicle emissions fees, might be more effective in reducing automobile use but that they are economically and politically less acceptable because such measures would directly increase driving costs. Our review of developments in intermodal freight transportation and its potential for relieving the nation’s highways of some of the freight burden contributed to initiatives by the Secretary of Transportation for promoting efficient intermodal freight transportation. In addition, our work on highway-financing strategies, transit needs projections, involvement of disadvantaged business enterprises in highway contracting, and FHWA oversight of rental rates for highway construction equipment contributed to improving transportation investment decisionmaking and better oversight to ensure that federal highway dollars are spent effectively. In line with our suggestions in testimony, the House Transportation Appropriations Subcommittee developed economically based investment criteria that it used to screen proposed highway and transit demonstration projects to help ensure that the projects were good investments of federal transportation dollars. Through a series of testimonies, we also contributed to congressional deliberations on federal involvement in and support for high-speed ground transportation projects. Our work on air fares at concentrated airports, trucking undercharges, the intercity bus industry, and charter bus regulations contributed to understanding the impacts of deregulation on the nation’s transportation industries. Our testimonies on options for addressing the airline industry’s financial and competition problems contributed to the deliberations of the Congress and the National Commission to Ensure a Strong Competitive Airline Industry and to development of the Commission’s recommendations. 
Our work on relaxing foreign investment restrictions gave both the Congress and the Commission information on potential benefits and costs and was reflected in the Commission’s recommendations. Despite the many actions and initiatives taken by the Congress, DOT, and its agencies in response to our recommendations, some important recommendations remain open and warrant priority attention. In a report on foreign governments’ capabilities to provide adequate oversight of foreign carriers that fly into the United States, we recommended that FAA (1) require its field offices to perform comprehensive inspections of foreign carriers when FAA finds that the home governments do not comply with international standards and/or becomes aware of serious safety problems, (2) give priority to assessing the oversight capabilities of those countries that FAA determines have one or more carriers with serious safety problems, and (3) promptly notify all relevant field offices of serious safety concerns about foreign carriers. (GAO/RCED-93-42) In reporting on the extent to which the U.S. aircraft fleet meets FAA’s flammability standards for cabin interiors, we recommended that FAA determine whether to issue a regulation mandating a specific date for all aircraft in the domestic fleet to comply with the latest flammability standards. (GAO/RCED-93-37) In a report on FAA’s oversight of U.S.-registered aircraft, we recommended that FAA (1) require owners of such aircraft to notify FAA when they change from a foreign to a U.S. lessee and inspect the aircraft when they enter the United States, particularly if they are from countries that do not meet international safety standards; (2) develop a system to ensure that foreign corporations accumulate at least 60 percent of the flight hours for their U.S.-registered aircraft in the United States; and (3) accelerate implementation of the proposed regulations for increasing aircraft registration fees. 
(GAO/RCED-93-135) In a series of five reports, we recommended several improvements in FTA’s oversight of mass transit grants. The reports had documented inadequacies in FTA’s oversight; serious deficiencies in grantees’ financial, technical, procurement, inventory, and other management controls; noncompliance with federal requirements; and improper expenditures of grant funds. FTA plans to implement most of our recommendations, which, if properly executed, should help better safeguard transit funds from risk of fraud, waste, abuse, and mismanagement. FTA, however, has not completed this effort and will have to be persistent to ensure that it does not lose momentum. (GAO/RCED-91-107, GAO/RCED-92-7, GAO/RCED-92-38, GAO/RCED-92-53, and GAO/RCED-93-8) We reported that many of the 25 FAA mission need statements we had examined, which established the basic justification for acquisition projects that could cost $5 billion, contained assertions that were not supported by analysis and facts. We recommended that FAA approve only those statements and projects that were well supported with analytical evidence of current and projected needs. FAA needs to ensure that future mission need statements are adequately supported to justify costly capital investments. (GAO/RCED-93-55) In our report on precision landing systems, we examined the costs, the benefits, and the capabilities of various alternative technologies. We recommended that FAA prepare a mission need statement for precision landing systems in general based on a runway-by-runway determination of which system, or mix of systems, provides the most benefits at the lowest costs to both FAA and system users. This is especially important considering that airlines may be required to make substantial investments in some of these technologies at a time when they are experiencing severe financial difficulties. 
(GAO/RCED-93-33) We recommended that, to help ensure that transit needs projections better reflect future costs, FTA include (1) operating needs for the nation’s transit systems, (2) vehicle replacement needs for the entire human service operator fleet, and (3) operators’ cost estimates for Americans With Disabilities Act compliance. Further, we recommended that FTA develop new projection methods that were more reflective of potential costs; include standard data requirements for transit needs projections in planning and management system regulations; and, when determining Bureau of Transportation Statistics activities, consider transit needs data requirements. (GAO/RCED-93-61) Our report on urban transportation planning recommended that the Secretary of Transportation develop criteria and related measures for comparing highway and mass transit projects that (1) consider such factors as mobility, environmental quality, safety, cost-effectiveness, and social and economic objectives and (2) identify how these criteria and measures may be applied by transportation planners and decisionmakers. These criteria will help states develop effective solutions, regardless of mode, to deal with congestion and air quality problems. (GAO/RCED-92-112) Our report on transportation control measures recommended that the Secretary of Transportation and the Administrator of the Environmental Protection Agency (1) require local areas to assess the impact of implemented transportation control measures on reducing emissions and (2) cooperate in gathering and disseminating this information to states and localities in ozone and carbon monoxide nonattainment areas. Such information would help localities better plan for the use of transportation control measures and perhaps justify stronger disincentives to automobile use if transportation control measures prove ineffective. 
(GAO/RCED-93-169) In our report on Amtrak’s training programs for employees who inspect and maintain rail equipment, we recommended that Amtrak (1) identify the basic skills its employees need and develop training by which they might acquire them and (2) determine the costs associated with providing improved training. Amtrak is assessing its training needs, but it has not determined the level of funding it needs to provide the improved training. (GAO/RCED-93-68) The Intermodal Surface Transportation Efficiency Act (ISTEA) of 1991 emphasized the link between traffic congestion and urban air pollution and the need to address both problems through local planning efforts. Our 1992 report identified several obstacles to achieving ISTEA’s goals in these areas and recommended that the Department of Transportation report to the Congress midway through the reauthorization cycle (FY 1995) on its activities to overcome these obstacles. We noted in particular the need to perform and widely disseminate evaluations of the effectiveness of transportation demand management measures in reducing both congestion and pollution. 
(GAO/PEMD-93-2)

Aging Aircraft: FAA Needs Comprehensive Plan to Coordinate Government and Industry Actions (GAO/RCED-90-75)
Air Traffic Control: FAA Can Better Forecast and Prevent Equipment Failures (GAO/RCED-91-179)
Air Traffic Control: FAA Needs to Justify Further Investment in Its Oceanic Display System (GAO/IMTEC-92-80)
Air Traffic Control: FAA's Implementation of Modernization Projects in the Field (GAO/RCED-89-92)
Air Traffic Control: Justifications for Capital Investments Need Strengthening (GAO/RCED-93-55)
Air Travel: Passengers Could Be Better Informed of Their Rights (GAO/RCED-91-156)
Aircraft Certification: Limited Progress on Developing International Design Standards (GAO/RCED-92-179)
Aircraft Certification: New FAA Approach Needed to Meet Challenges of Advanced Technology (GAO/RCED-93-155)
Aircraft Maintenance: FAA Needs to Follow Through on Plans to Ensure the Safety of Aging Aircraft (GAO/RCED-93-91)
Airline Competition: Impact of Changing Foreign Investment and Control Limits on U.S. Airlines (GAO/RCED-93-7)
Airport Safety: New Radar That Will Help Prevent Accidents Is 4 Years Behind Schedule (GAO/T-RCED-91-78)
Airspace System: Emerging Technologies May Offer Alternatives to the Instrument Landing System (GAO/RCED-93-33)
Airspace Use: FAA Needs to Improve Its Management of Special Use Airspace (GAO/RCED-88-147)
Amtrak Safety: Amtrak Should Implement Minimum Safety Standards for Passenger Cars (GAO/RCED-93-196)
Amtrak Training: Improvements Needed for Employees Who Inspect and Maintain Rail Equipment (GAO/RCED-93-68)
Aviation Research: FAA Could Enhance Its Program to Meet Current and Future Challenges (GAO/RCED-92-180)
Aviation Safety: Increased Oversight of Foreign Carriers Needed (GAO/RCED-93-42)
Aviation Safety: Limited Success Rebuilding Staff and Finalizing Aging Aircraft Plan (GAO/RCED-91-119)
Aviation Safety: New Regulations for Deicing Aircraft Could Be Strengthened (GAO/RCED-93-52)
Aviation Safety: Problems Persist in FAA's Inspection Program (GAO/RCED-92-14)
Aviation Safety: Progress Limited With Self-Audit and Safety Violation Reporting Programs (GAO/RCED-92-85)
Aviation Safety: Slow Progress in Making Aircraft Cabin Interiors Fireproof (GAO/RCED-93-37)
Aviation Safety: Unresolved Issues Involving U.S.-Registered Aircraft (GAO/RCED-93-135)
Charter Bus Service: Local Factors Determine Effectiveness of Federal Regulation (GAO/RCED-93-162)
Coast Guard: Abandoned Vessels Pollute Waterways and Cost Millions to Clean Up and Remove (GAO/RCED-92-235)
Coast Guard: Acquisition Program Staff Were Funded Improperly (GAO/RCED-93-123)
Coast Guard: Additional Actions Needed to Improve Cruise Ship Safety (GAO/RCED-93-103)
Coast Guard: Better Process Needed to Justify Closing Search and Rescue Stations (GAO/RCED-90-98)
Coast Guard: Coordination and Planning for National Oil Spill Response (GAO/RCED-91-212)
Coast Guard: Housing Acquisition Needs Have Not Been Adequately Justified (GAO/RCED-92-159)
Coast Guard: Inspection Program Improvements Are Under Way to Help Detect Unsafe Tankers (GAO/RCED-92-23)
Coast Guard: Magnitude of Alcohol Problems and Related Maritime Accidents Unknown (GAO/RCED-90-150)
Coast Guard: Management of the Research, Development, Test and Evaluation Program Needs Strengthening (GAO/RCED-93-157)
Coast Guard: Oil Spills Continue Despite Waterfront Facility Inspection Program (GAO/RCED-91-161)
Coast Guard: Reorganization Unlikely to Increase Resources or Overall Effectiveness (GAO/RCED-90-132)
Computer Reservation Systems: Action Needed to Better Monitor the CRS Industry and Eliminate CRS Biases (GAO/RCED-92-130)
Contract Award Practices: Metropolitan Washington Airports Authority Generally Observes Competitive Principles (GAO/RCED-93-63)
Defense Transportation: Ineffective Oversight Contributes to Freight Losses (GAO/NSIAD-92-96)
DOD Commercial Transportation: Savings Possible Through Better Audit and Negotiation of Rates (GAO/NSIAD-92-61)
FAA Budget: Key Issues Need to Be Addressed (GAO/T-RCED-92-51)
FAA Information Resources: Agency Needs to Correct Widespread Deficiencies (GAO/IMTEC-91-43)
FAA Staffing: Improvements Needed in Estimating Air Traffic Controller Requirements (GAO/RCED-88-106)
Highway Contracting: Disadvantaged Business Eligibility Guidance and Oversight Are Ineffective (GAO/RCED-92-148)
Highway Safety: Safety Belt Use Laws Save Lives and Reduce Costs to Society (GAO/RCED-92-106)
Highway Trust Fund: Strategies for Safeguarding Highway Financing (GAO/RCED-92-245)
International Aviation: Implications of Ratifying Montreal Aviation Protocol No. 3 (GAO/RCED-91-45)
International Trade: Easing Foreign Visitors' Arrivals at U.S. Airports (GAO/NSIAD-91-6)
Mass Transit: Federal Participation in Transit Benefit Programs (GAO/RCED-93-163)
Mass Transit Grants: If Properly Implemented, FTA Initiatives Should Improve Oversight (GAO/RCED-93-8)
Mass Transit Grants: Improved Management Could Reduce Misuse of Funds in UMTA's Region IX (GAO/RCED-92-7)
Mass Transit Grants: Noncompliance and Misspent Funds by Two Grantees in UMTA's New York Region (GAO/RCED-92-38)
Mass Transit Grants: Risk of Misspent and Ineffectively Used Funds in FTA's Chicago Region (GAO/RCED-92-53)
Mass Transit Grants: Scarce Federal Funds Misused in UMTA's Philadelphia Region (GAO/RCED-91-107)
Mass Transit Grants: UMTA Needs to Improve Procurement Monitoring at Local Transit Authority (GAO/RCED-89-94)
Mass Transit: Needs Projections Could Better Reflect Future Costs (GAO/RCED-93-61)
Motor Vehicle Regulations: Regulatory Cost Estimates Could Be Improved (GAO/RCED-92-110)
Motor Vehicle Safety: Key Issues Confronting the National Advanced Driving Simulator (GAO/RCED-92-195)
Motor Vehicle Safety: NHTSA Should Resume Its Support of State Periodic Inspection Programs (GAO/RCED-90-175)
Natural Gas Pipelines: Greater Use of Instrumented Inspection Technology Can Improve Safety (GAO/RCED-92-237)
Oil Spill Prevention: Progress Made in Developing Alaska Demonstration Programs (GAO/RCED-93-178)
Pollution From Pipelines: DOT Lacks Prevention Program and Information for Timely Response (GAO/RCED-91-60)
Railroad Safety: DOD Can Improve the Safety of On-Base Track and Equipment (GAO/RCED-91-135)
Railroad Safety: FRA's Staffing Model Cannot Estimate Inspectors Needed for Safety Mission (GAO/RCED-91-32)
Railroad Safety: More FRA Oversight Needed to Ensure Rail Safety in Region 2 (GAO/RCED-90-140)
Railroad Safety: New Approach Needed for Effective FRA Safety Inspection Program (GAO/RCED-90-194)
Telecommunications: Concerns About Competition in the Cellular Telephone Service Industry (GAO/RCED-92-220)
Telecommunications: FCC's Handling of Formal Complaints Filed Against Common Carriers (GAO/RCED-93-83)
Telecommunications: FCC's Oversight Efforts to Control Cross-Subsidization (GAO/RCED-93-34)
Traffic Congestion: Activities to Reduce Travel Demand and Air Pollution Are Not Widely Implemented (GAO/PEMD-93-2)
Transportation Infrastructure: Oversight of Rental Rates for Highway Construction Equipment Is Inadequate (GAO/RCED-93-86)
Transportation Infrastructure: The Nation's Highway Bridges Remain at Risk From Earthquakes (GAO/RCED-92-59)
Transportation Infrastructure: Urban Transportation Planning Can Better Address Modal Trade-offs (GAO/RCED-92-112)
Truck Safety: The Safety of Longer Combination Vehicles Is Unknown (GAO/RCED-92-66)
Truck Transport: Little Is Known About Hauling Garbage and Food in the Same Vehicles (GAO/RCED-90-161)
Urban Transportation: Reducing Vehicle Emissions With Transportation Control Measures (GAO/RCED-93-169)

In the post-cold war period, a nation's security will be increasingly tied to its ability to achieve overall levels of productivity that can sustain a rising standard of living for its people in a complex world economy. We have responsibility for the two cabinet departments—Education and Labor—charged with providing America a future and current work force that maintains its status as "a preeminent economic superpower." With an increasingly competitive global marketplace serving as the backdrop, the Clinton administration faces great challenges in the education and employment training arenas. Our educational system has not kept pace with the demands of a changing economy. International competition, rapid technological innovations, and workplace restructuring are creating worker dislocation and major shifts in the skill demands for workers. There are longstanding Education Department managerial problems and a myriad of uncoordinated Labor programs.
To produce high-quality products and services that are competitive in a global economy, the nation must have a highly skilled work force. Our work, therefore, has focused on the quality and the financing of the education and the training of the nation’s population, beginning with preschool through the secondary grades and continuing through college, including basic and remedial education, vocational and occupational skills training, and education for the handicapped. It also focuses on employment-related programs and policies affecting the nation’s work force, such as improving transitions to employment by labor force entrants and workers dislocated from their previous jobs; enforcing regulations intended to provide safe and healthful workplaces, fair compensation for work performed, and protection against employment discrimination; and providing leadership in encouraging productive labor-management relations. During the last few years, our work has contributed significantly to legislation and congressional debate. It has also resulted in significant monetary benefits and improvements of operations and programs in key Education and Labor Department areas, such as education reform, higher education, work force competitiveness, and program management. Examples follow. Our work in elementary and secondary education has played an integral part in the debate on key federal efforts focused on improving the nation’s schools. Our report and testimony on systemwide school reform have been used by congressional members and staff as the Congress debates how to redirect federal education efforts from seeking only to ensure access and remediation for at-risk students to improving the nation’s education system for all students. Our report on the funds allocation formula for the Chapter 1 Program—the single largest federal elementary and secondary education program—provided a reasoned, objective look at one of the most contentious issues facing the 103rd Congress. 
Likewise, our analysis of Census data showing changes in the demographics of school-age children over the last decade provided objective information relevant not only to federal fund allocation but also to the challenges schools face today, such as increased poverty. Numerous members of the Congress and their staffs requested and received briefings on these studies. On the basis of our work, the Congress made major revisions in the Carl D. Perkins Vocational Education Act, such as revisions improving allocation of program funds and increasing access to program improvement activities. The Congress also required the Department of the Interior's Bureau of Indian Affairs (BIA) to develop a plan to overcome deficiencies in identifying and providing services to handicapped Indian preschool students. After reviewing the plan, the Congress reassigned BIA's responsibilities to the states and tribes and provided the tribes with the funding BIA had been receiving to provide these services. Our work on the Rehabilitation Services Administration's (RSA) guidance to states regarding the Order of Selection provision of the Rehabilitation Act of 1973 resulted in several key agency changes. That provision requires states to give priority to serving individuals with the most severe disabilities when states do not have enough resources to serve all eligible applicants. In response to our recommendations, RSA developed and issued a new Order of Selection policy and guidance to state rehabilitation agencies, plans to monitor state Order of Selection implementation decisions, and is collecting and disseminating information about various states' successful implementation of Order of Selection procedures. Our work contributed significantly to many important changes made through the recently completed 1993 Budget Reconciliation Act (Public Law 103-66) and to the Higher Education Act of 1965—the key legislation responsible for providing financial assistance to postsecondary students.
For example, during congressional deliberations about potential expansion of the Direct Student Loan Demonstration Program, we focused attention on the Department of Education’s problems in administering the Guaranteed Student Loan Program and questioned the Department’s ability to concurrently operate two major student loan programs. Our concerns contributed to the debate and a compromise proposal to phase in a limited direct lending program over 5 years in lieu of full implementation. In addition, a number of cost reduction changes were made to the Guaranteed Student Loan Program, such as requiring that the proceeds for parents’ loans be disbursed in more than one installment during the school year, implementing risk-sharing for lenders and guaranty agencies, and eliminating the minimum interest rate yield lenders can make when financing their loan portfolio with tax exempt securities. The Congressional Budget Office estimated that these types of changes could save more than $1 billion over 4 years. Information from our work on the current “nonsystem” of federally funded employment and training programs—150 programs in 14 departments and independent agencies with $24 billion in fiscal year 1993 funding—is being used to address the need to streamline our nation’s systems to assist the unemployed. This comprehensive overview data had not been compiled prior to our work and continues to be cited extensively by the Congress and the administration. We, among others, have stated that a national employment training strategy is needed, and the administration appears to be moving in this direction. An administration proposal to combine several programs to assist dislocated workers and develop one-stop career centers draws heavily on our work. 
Our work on the Employment Service and the Unemployment Insurance System has likewise raised the issue of whether some of the nation's principal programs were designed for a different era and whether their role needs to be re-evaluated. A series of our studies addressing the needs of the nation's youth to guide and facilitate their movement from school into the work force has helped the Congress and the administration focus on this issue. We found that, even though American high schools direct most of their resources toward preparing students for college, few incoming high school freshmen—about 15 percent—go on to graduate and then obtain a 4-year college degree within 6 years of leaving high school. A substantial number of the remaining 85 percent wander between different education and employment experiences, many seemingly ill prepared for the workplace. Our latest report on comprehensive school-to-work transition strategies has had a significant impact on the administration's proposal to foster state and local school-to-work opportunities. Our work on the Job Training Partnership Act provided the impetus for the Congress to undertake major revisions to the legislation, which were enacted in 1992. These revisions should begin to have such effects as better targeting of services; eliminating abuses in on-the-job training contracts; improving program evaluation, oversight, and data collection; increasing services for older workers; improving federal monitoring of racial and gender bias in services provided to participants; and saving an estimated $150 million. Prompted in part by our reports and testimony, the Congress raised the maximum penalties for violations of workplace safety and health regulations and child labor laws, which we believe will provide a more effective deterrent to potential violators. In addition, these changes will result in $198 million in increased government revenues in fiscal year 1993.
Our report on legislative and administrative options for improving workers' safety and health led to a comprehensive reexamination of the Occupational Safety and Health Administration's (OSHA) authorizing legislation. Senate and House legislators drew heavily on the options we had identified, incorporating most of them in bills that were introduced in the last two sessions of the Congress. In our report on education issues for the new administration to consider, we recommended a series of administrative or legislative corrective actions, or both, to improve the Department of Education's information and financial management systems. Such systems provide needed data and protect the federal government's financial interests from waste, fraud, and mismanagement. The Department is redesigning its core financial management systems and requested funds for this effort in its 1994 budget. It has also taken steps to improve cash management and has established an information management committee to address data collection and dissemination problems. (GAO/OCG-93-18TR) In our report on the Department of Education's longstanding management problems, we recommended a multiphased approach to addressing those problems. For example, we recommended that the Secretary articulate a strategic management vision for the Department and adopt a strategic management process for setting goals and priorities such as the National Education Goals, measuring progress toward those goals, and ensuring accountability for attaining them.
We also recommended that the Department continue to build on its initial steps taken over the last 2 years to enhance management by implementing a departmentwide strategic management process; identifying good management practices and supporting their adoption in other appropriate parts of the Department; rewarding managers for good leadership; filling technical and policymaking leadership positions with people with appropriate skills; and creating strategic information, financial, and human resources management plans that are integrated with the Department's overall strategic management process. The Secretary's Reinvention Coordinating Council has been meeting weekly to establish a framework to implement such initiatives as the National Goals legislation. The Department has also begun implementing a strategic planning process, refining its financial management strategic plan, and redesigning its core financial management systems. (GAO/HRD-93-47) In a report on vocational education, we recommended that the Secretary of Education provide leadership to complete development of a national vocational education data system in cooperation with affected organizations, such as the Council of Chief State School Officers, and with the assistance of the National Center for Education Statistics. The reauthorized Perkins Act required Education to establish a national system, and the Department has contracted for a national data needs study that will make recommendations to develop and implement the system. (GAO/HRD-89-55) In our report on within-school discrimination under title VI of the Civil Rights Act of 1964, we recommended that the Secretary of Education issue title VI regulations that identify procedures schools should follow for assigning students to classes on the basis of academic ability or achievement level. The Department believes that expansion of title VI regulations is unnecessary.
We disagree with the Department because current regulations do not provide state and local education agencies with needed standards on ability-based student assignments. (GAO/HRD-91-85) In our report on OSHA’s policies and procedures for confirming abatement of hazards, we recommended that OSHA make changes to improve its ability to detect employers who fail to correct safety and health hazards found during inspections. The Secretary of Labor and the Office of Management and Budget (OMB) have approved OSHA’s drafting a regulation to make this change. Labor plans to submit a draft regulation to OMB for review by March 1994. (GAO/HRD-91-35) In our report on the accuracy of employer injury and illness records, we recommended that OSHA improve its procedures for detecting recordkeeping violations, such as failing to report injuries, through its enforcement activities. OSHA revised its operating procedures accordingly. It has since postponed implementation of these revisions, however. OSHA issued a draft directive in March 1993 to replace its 1987 directive and now says it does not expect to implement final revised procedures until March 1995. (GAO/HRD-89-23) In our report on unemployment insurance trust fund reserves, we recommended that if the Congress wished to restore the self-financing feature of the program and to minimize the potential for significant state borrowing in recessions, it should require states to build adequate reserves during periods of low unemployment. No action has been taken on this recommendation. (GAO/HRD-88-55) After reviewing standards set to interpret students’ performance on the National Assessment of Educational Progress (NAEP), we found many technical flaws that made the results of doubtful validity. 
We recommended that the new standards be withdrawn by the NAEP governing board, that they not be used in reporting NAEP results, and that the governing board also take a number of specific steps to ensure that it does not adopt technically unsound policies or approve technically flawed results. (GAO/PEMD-93-12) Our 8-year followup evaluation, using unique computer-matched wage and service data, showed only modest long-term outcomes for the state-federal program that provides services to help persons with disabilities become employed, more independent, and integrated into the community. We also found unexplained disparities in the extent of services purchased for clients of different races. We recommended that the Secretary of Education find out why these disparities exist, strengthen evaluation in a number of ways, and take steps to establish the National Commission on Rehabilitation Services, authorized in 1992 to review the program in depth, before the next reauthorization.
(GAO/PEMD-93-19) Adolescent Drug Use Prevention: Common Features of Promising Community Programs (GAO/PEMD-92-2) Apprenticeship Training: Administration, Use, and Equal Opportunity (GAO/HRD-92-43) Chapter 1 Accountability: Greater Focus on Program Goals Needed (GAO/HRD-93-69) Department of Education: Longstanding Management Problems Hamper Reforms (GAO/HRD-93-47) Dislocated Workers: Improvements Needed in Trade Adjustment Assistance Certification Process (GAO/HRD-93-36) Dislocated Workers: Worker Adjustment and Retraining Notification Act Not Meeting Its Goals (GAO/HRD-93-18) Educational Achievement Standards: NAGB’s Approach Yields Misleading Interpretations (GAO/PEMD-93-12) Education Issues (GAO/OCG-93-18TR) Employment Service: Improved Leadership Needed for Better Performance (GAO/HRD-91-88) Environment, Safety, and Health: Environment and Workers Could Be Better Protected at Ohio Defense Plants (GAO/RCED-86-61) Federal Employment: Displaced Federal Workers Can Be Helped by Expanding Existing Programs (GAO/GGD-92-86) Federal Prisons: Inmate and Staff Views on Education and Work Training Programs (GAO/GGD-93-33) Financial Audit: Guaranteed Student Loan Program’s Internal Controls and Structure Need Improvement (GAO/AFMD-93-20) Financial Management: Education’s Student Loan Program Controls Over Lenders Need Improvement (GAO/AIMD-93-33) Foreign Farm Workers in U.S.: Department of Labor Action Needed to Protect Florida Sugar Cane Workers (GAO/HRD-92-95) Impact Aid: Most School Construction Requests Are Unfunded and Outdated (GAO/HRD-90-90) Minimum Wages and Overtime Pay: Change in Statute of Limitations Would Better Protect Employees (GAO/HRD-92-144) National Labor Relations Board: Action Needed to Improve Case-Processing Time at Headquarters (GAO/HRD-91-29) Occupational Safety and Health: Assuring Accuracy in Employer Injury and Illness Records (GAO/HRD-89-23) Occupational Safety and Health: OSHA Action Needed to Improve Compliance With Hazard Communication 
Standard (GAO/HRD-92-8) Occupational Safety and Health: OSHA Policy Changes Needed to Confirm That Employers Abate Serious Hazards (GAO/HRD-91-35) Occupational Safety and Health: Penalties for Violations Are Well Below Maximum Allowable Penalties (GAO/HRD-92-48) Occupational Safety and Health: Worksite Safety and Health Programs Show Promise (GAO/HRD-92-68) Remedial Education: Modifying Chapter 1 Formula Would Target More Funds to Those Most in Need (GAO/HRD-92-16) Stafford Student Loans: Prompt Payment of Origination Fees Could Reduce Costs (GAO/HRD-92-61) Student Testing: Current Extent and Expenditures, With Cost Estimates for a National Examination (GAO/PEMD-93-8) Systemwide Education Reform: Federal Leadership Could Facilitate District-Level Efforts (GAO/HRD-93-97) Targeted Jobs Tax Credit: Employer Actions to Recruit, Hire, and Retain Eligible Workers Vary (GAO/HRD-91-33) The Changing Work Force: Comparison of Federal and Nonfederal Work/Family Programs and Approaches (GAO/GGD-92-84) Transition From School to Work: Linking Education and Worksite Training (GAO/HRD-91-105) Transition From School to Work: States Are Developing New Strategies to Prepare Students for Jobs (GAO/HRD-93-139) Unemployment Insurance: Trust Fund Reserves Inadequate (GAO/HRD-88-55) Vocational Education: Opportunity to Prepare for the Future (GAO/HRD-89-55) Vocational Rehabilitation: Better VA Management Needed to Help Disabled Veterans Find Jobs (GAO/HRD-92-100) Vocational Rehabilitation: Evidence for Federal Program’s Effectiveness Is Mixed (GAO/PEMD-93-19) Vocational Rehabilitation: VA Needs to Emphasize Serving Veterans With Serious Employment Handicaps (GAO/HRD-92-133) Welfare to Work: Implementation and Evaluation of Transitional Benefits Need HHS Action (GAO/HRD-92-118) Welfare to Work: JOBS Participation Rate Data Unreliable for Assessing States’ Performance (GAO/HRD-93-73) Welfare to Work: States Move Unevenly to Serve Teen Parents in JOBS (GAO/HRD-93-74) Within-School 
Discrimination: Inadequate Title VI Enforcement by the Office for Civil Rights (GAO/HRD-91-85) The Department of Defense (DOD) and the Department of Veterans Affairs (VA) operate two of the largest centrally managed health care systems in the world, spending more than $25 billion annually through 500 facilities and a network of private providers. In addition, the Health Care Financing Administration (HCFA) is administering the multibillion dollar Medicare and Medicaid programs that finance health care provided to the nation’s elderly, disabled, and economically disadvantaged. Rising health care costs and substantial budget deficits have prompted increased congressional concerns about whether these agencies are delivering quality health care to their beneficiaries as efficiently and cost effectively as possible. The downsizing of military forces and the potential transfer of beneficiaries from DOD systems to VA systems have also prompted concern about the structure of DOD and VA health delivery systems. Our objectives in this issue area are to (1) identify ways that VA and DOD health care systems can operate more effectively and efficiently; (2) identify and assess opportunities for restructuring VA and DOD health care delivery systems to enhance universal access to health care; and (3) improve the quality of health care processes in VA, DOD, Medicare/Medicaid, and Public Health Service programs. During fiscal year 1993, we continued making progress toward achieving these objectives. For example, our work on DOD’s efforts to reform the military health services system has identified lessons learned that can provide useful information and insight into how DOD should best proceed in implementing managed health care throughout its system. The recommendations we made, which DOD is implementing, will increase and improve accountability, budgeting, resource allocation, training, information systems, and contracting.
In addition, DOD is working on establishing a uniform health care benefits package that will be more equitable for all beneficiaries. Our continued evaluation of the management of mental health care under the Civilian Health and Medical Program of the Uniformed Services (CHAMPUS) showed that DOD was making progress but still needed to adopt stronger controls to prevent duplicate and erroneous payments to providers of psychiatric care. We identified problems in the way CHAMPUS was controlling access to home health care under two demonstration projects and potential duplication between the home care program and DOD’s managed care program. Although the Congress authorized a permanent CHAMPUS home care benefit, DOD plans to address our concerns when implementing the program, including coordinating the home care program with the managed care program. We are continuing a comprehensive effort to address the restructuring of federal health care delivery systems and beneficiary eligibility reforms. This effort will provide a better understanding of, and federal options for, a system of universal access to quality health care. As part of this effort, GAO testified three times on the problems VA will likely face in competing under national health reforms. We suggested that construction of most new VA capacity be suspended until health reforms take shape. This was to avoid spending funds constructing facilities that may be obsolete before they are completed. GAO identified a series of problems in the management of VA construction projects that result in projects that are too big, take too long to construct, and cost too much. In addition, we suggested that VA consider setting up demonstration projects in several communities that currently lack VA hospitals. These demonstrations would center on VA outpatient clinics, with contracts for inpatient care.
Further, we demonstrated that VA medical centers’ outpatient care eligibility and rationing decisions could be improved to ensure that eligible veterans receive more consistent treatment at VA facilities. As a result of our work, VA should be in a position to improve its policies and practices for ensuring consistent access to treatment by all eligible veterans. Our reviews of VA site selections in east central Florida and northern California resulted in reevaluations of VA decisions to build new medical centers in Orlando, Florida, and Davis, California. Based on our work, VA decided to build the facilities in Brevard County, Florida, and Fairfield, California. Both facilities will be constructed as joint ventures with the Air Force, resulting in significant cost savings. Although VA ultimately decided to proceed with construction of a new hospital in Hawaii, the revised plans do provide for greater sharing of such services as laundry and dietetics. The recommendation should remain open, however, because the facility has not received construction funding and there continues to be little justification for construction of the facility. Our work on locality-based pay for VA nurses identified significant internal control weaknesses that provide little assurance that the innovative pay system will improve recruitment and retention of nurses or that the pay rates are reasonable. Although VA has taken some action on our recommendations, more needs to be done. VA issued several directives to improve health care services to women veterans. The Congress held several hearings on the subject to help ensure that changes are made. During fiscal year 1993, we issued a capping report on our quality assurance work at VA and reported that VA Medical Centers are not correcting quality assurance problems identified by either GAO or the VA Inspector General.
Our work at the Salem, Virginia, Medical Center showed that improvements are needed in the psychiatric care provided by the facility. VA concurred with our recommendations to correct these conditions. But medical centers must take the necessary action to resolve these problems. In the past, this has not been consistently done. Apart from our work in VA, we completed several reviews in areas that affect the quality of care outside the federal sector. Specifically, we reported that work hours of resident physicians can be legislatively mandated by amending the Medicare conditions of participation, but that several factors must be taken into consideration before such action is taken. We also (1) issued a fact sheet on how utilization review organizations perform their work; (2) evaluated the effectiveness of HealthPASS, a managed care program for certain Philadelphia Medicaid recipients; and (3) reviewed HCFA’s evaluation of the Joint Commission on Accreditation of Healthcare Organizations’ application for home health care deemed status. Our review of HealthPASS resulted in recommendations to the Secretary of Health and Human Services that, if implemented, will improve the quality of care provided to recipients and give HealthPASS members better access to other Medicaid programs. As we recommended in our report on VA’s verification of veterans’ reported income, the Congress extended VA’s authority to use tax records in determining veterans’ copayment liability. We also recommended that VA implement an income-verification system as soon as possible after such authority was extended. (GAO/HRD-92-159) In another report, we recommended that VA use private health care only when the needed services are not available at VA facilities or a veteran’s geographic inaccessibility makes it more economical to use private care.
(GAO/HRD-92-109) We recommended that VA’s plans to consolidate and automate mail-service pharmacies (1) assume maximum use of 90-day supplies when dispensing maintenance drugs prescribed at a stabilized dose, (2) select the most cost-efficient locations for the mail-service pharmacies, and (3) ensure compatibility of prescription handling and automatic data processing equipment throughout VA facilities to maximize efficiency. (GAO/HRD-92-30) In our report on federal ethics requirements at VA medical centers, we recommended that VA revise policies governing types of employment activities that medical center managers may engage in and establish stronger procedures for enforcing federal ethics requirements. (GAO/HRD-93-39) In our report on variabilities in VA’s outpatient care eligibility and rationing decisions, we recommended that VA develop better guidance to medical centers so that clinicians may achieve more consistent application of statutory eligibility requirements or propose to the Congress alternative eligibility criteria that produce greater consistency of eligibility determinations. (GAO/HRD-93-106) In our report on the establishment of the Hawaii Medical Center, we recommended that VA reconsider its decision to build additional acute care beds at Tripler Army Medical Center’s E-Wing and that VA and DOD develop a joint venture agreement to give VA autonomy over care provided to veterans at Tripler. We also recommended that VA use Tripler’s E-Wing to accommodate its planned nursing home. (GAO/HRD-92-41) In our report on the quality-of-care provided by some VA psychiatric hospitals, we recommended that each hospital director be held responsible for making certain that quality-of-care problems are identified and resolved. 
(GAO/HRD-92-17) We recommended that VA regional directors be required to have inspection teams ensure that every medical center in their region is complying with quality assurance requirements and that problems GAO and the Inspector General identified have been corrected. (GAO/HRD-93-20) In our report on the Salem VA Medical Center, we recommended that psychiatric care provided at the medical center be reviewed and necessary actions be taken to ensure that care meets medical center bylaws. (GAO/HRD-93-108) In our report on HealthPASS, we recommended that the Secretary of Health and Human Services (1) require the Pennsylvania Department of Public Welfare to make necessary arrangements to share the names of HealthPASS members with the Women, Infants, and Children Program and (2) direct the State of Pennsylvania to include in its contract with Healthcare Management Alternatives, Inc., a requirement to query nationwide information banks to improve the identification of potentially problematic physicians in the HealthPASS program. (GAO/HRD-93-67) In our report on the management of CHAMPUS mental health benefits, we recommended that the Secretary of Defense (1) establish a cost-based reimbursement system similar to Medicare and (2) adopt the hospital annual index used in the Medicare and CHAMPUS prospective payment system to reimburse psychiatric residential treatment centers. (GAO/HRD-93-34) In our report on fraud and abuse in psychiatric hospitals, we recommended that DOD adopt procedures for visiting and inspecting psychiatric hospitals to determine whether problems involving unnecessary hospital stays and quality of care have been corrected and to ensure that DOD contractors improve their claims payment systems to minimize payments for unauthorized hospital stays and to avoid duplicate payments.
(GAO/HRD-93-92) In our testimony on DOD’s managed health care initiatives, we recommended that, as DOD continues to contract for health care services, it (1) carefully determine when contracting will and will not be appropriate; (2) carefully determine the size of its procurements to ensure sufficient competition; and (3) take appropriate safeguards to ensure high quality and accessible care that protects beneficiaries and the government against poor contractor performance. (GAO/T-HRD-93-21) Composite Health Care System: Outpatient Capability Is Nearly Ready for Worldwide Deployment (GAO/IMTEC-93-11) Defense Health Care: Additional Improvements Needed in CHAMPUS’s Mental Health Program (GAO/HRD-93-34) Defense Health Care: CHAMPUS Mental Health Demonstration Project in Virginia (GAO/HRD-93-53) Defense Health Care: Implementing Coordinated Care—A Status Report (GAO/HRD-92-10) Defense Health Care: Lessons Learned From DOD’s Managed Health Care Initiatives (GAO/T-HRD-93-21) Defense Health Care: Obstacles in Implementing Coordinated Care (GAO/T-HRD-92-24) Defense Health Care: Physical Exams and Dental Care Following the Persian Gulf War (GAO/HRD-93-5) DOD Health Care: Further Testing and Evaluation of Case-Managed Home Health Care Is Needed (GAO/HRD-93-59) DOD Medical Inventory: Reductions Can Be Made Through the Use of Commercial Practices (GAO/NSIAD-92-58) Federal Health Benefits Program: Stronger Controls Needed to Reduce Administrative Costs (GAO/GGD-92-37) Maternal and Child Health: Block Grant Funds Should Be Distributed More Equitably (GAO/HRD-92-5) Medicaid: HealthPASS: An Evaluation of a Managed Care Program for Certain Philadelphia Recipients (GAO/HRD-93-67) Medical ADP Systems: Automated Medical Records Hold Promise to Improve Patient Care (GAO/IMTEC-91-5) Methadone Maintenance: Some Treatment Programs Are Not Effective; Greater Federal Oversight Needed (GAO/HRD-90-104) Military Health Care: Recovery of Medical Costs From Liable Third Parties Can Be Improved 
(GAO/NSIAD-90-49) Pesticides: Need To Enhance FDA’s Ability To Protect the Public From Illegal Residues (GAO/RCED-87-7) Psychiatric Fraud and Abuse: Increased Scrutiny of Hospital Stays Is Needed for Federal Health Programs (GAO/HRD-93-92) Trauma Care Reimbursement: Poor Understanding of Losses and Coverage for Undocumented Aliens (GAO/PEMD-93-1) VA Health Care: Actions Needed to Control Major Construction Costs (GAO/HRD-93-75) VA Health Care: Copayment Exemption Procedures Should Be Improved (GAO/HRD-92-77) VA Health Care: Inadequate Controls Over Scarce Medical Specialist Contracts (GAO/HRD-92-114) VA Health Care: Inadequate Enforcement of Federal Ethics Requirements at VA Medical Centers (GAO/HRD-93-39) VA Health Care: Labor Management and Quality-of-Care Issues at the Salem VA Medical Center (GAO/HRD-93-108) VA Health Care: Medical Centers Are Not Correcting Identified Quality Assurance Problems (GAO/HRD-93-20) VA Health Care: Modernizing VA’s Mail-Service Pharmacies Should Save Millions of Dollars (GAO/HRD-92-30) VA Health Care: Offsetting Long-Term Care Costs by Adopting State Copayment Practices (GAO/HRD-92-96) VA Health Care: Potential for Offsetting Long-Term Care Costs Through Estate Recovery (GAO/HRD-93-68) VA Health Care: Problems in Implementing Locality Pay for Nurses Not Fully Addressed (GAO/HRD-93-54) VA Health Care: Role of the Chief of Nursing Service Should Be Elevated (GAO/HRD-92-74) VA Health Care: Telephone Service Should Be More Accessible to Patients (GAO/HRD-91-110) VA Health Care: The Quality of Care Provided by Some VA Psychiatric Hospitals Is Inadequate (GAO/HRD-92-17) VA Health Care: Use of Private Providers Should Be Better Controlled (GAO/HRD-92-109) VA Health Care: VA Plans Will Delay Establishment of Hawaii Medical Center (GAO/HRD-92-41) VA Health Care: Variabilities in Outpatient Care Eligibility and Rationing Decisions (GAO/HRD-93-106) VA Health Care: Verifying Veterans’ Reported Income Could Generate Millions in Copayment 
Revenues (GAO/HRD-92-159) Veterans Affairs Issues (GAO/OCG-93-21TR) Veterans Benefits: Acquisition of Information Resources for Modernization Is Premature (GAO/IMTEC-93-6) Income security programs affect all Americans at some time. Their purpose, in part, is to help people become self-sufficient and to support those unable to support themselves. The programs provide cash aid to the elderly, the disabled, the poor, and veterans; a conduit for funding in-kind assistance for such needy populations as the homeless, refugees, runaway youth, and abused children; and oversight for the private pension system. Income security expenditures make up about 35 percent of all federal spending. Our work provided information and recommendations directed at (1) improving the planning and the management of retirement programs; (2) ensuring the protection of worker benefits; (3) helping the government meet the needs of the poor by getting them on the path toward self-sufficiency; (4) seeing that vulnerable groups, including the disabled, were well served and protected by income security programs; (5) improving the quality of services provided to the public; and (6) ensuring the efficient administration of income security programs. For example, because of GAO’s review, the Pension Benefit Guaranty Corporation developed and implemented new procedures for collecting insurance premiums, penalties, and interest, resulting in the collection of an additional $20 million. Also, on the basis of our recommendations, the Department of Veterans Affairs (VA) established procedures for verifying the accuracy of medical expenses claimed by pension beneficiaries and used by VA in computing benefit amounts. These procedures should save VA an estimated $91 million annually. On the basis of our recommendations, legislation was enacted authorizing the Social Security Administration (SSA) to recover debts owed by former SSA beneficiaries by requesting Treasury to withhold any tax refunds. 
SSA estimated that it would collect $213 million during the first 3 years of implementation of this legislation. Further, on the basis of our recommendations, legislation was enacted allowing VA access to tax data to verify income information provided by VA pension program beneficiaries and persons receiving VA unemployability benefits. When enacted, this legislation had a September 1992 expiration date. Our work in this area, which indicated that millions of dollars could be saved by matching records, led to an extension of VA’s authority until September 1997. Finally, on the basis of our recommendations related to ensuring that subsidized housing units were occupied by needy families, the Congress enacted legislation allowing the Department of Housing and Urban Development access to federal tax data to verify program eligibility. As a result of our continuing work in the use of SSA’s death information to reduce payments to deceased persons, legislation was enacted that will cause states to share previously restricted information. This will result in yearly savings of $5.5 million with first-year savings of $14 million. In addition, pursuant to our recommendations, VA and the Department of Health and Human Services (HHS) entered into an agreement to match VA compensation and pension files and SSA death files to identify and end erroneous payments. Finally, savings of $127 million annually resulted from legislation enacted upon our recommendation to reduce pension payments to veterans’ surviving spouses without dependents who receive Medicaid-supported nursing home care. In June 1992, we reported that states had done little to help defray the costs of providing child support enforcement services to clients who did not receive Aid to Families With Dependent Children (AFDC) benefits. With the broad discretion available to them, most states have implemented minimal fee policies. 
In 1990, about 3.5 percent of the $644 million in administrative costs for non-AFDC clients was recovered by the states through fees. We recommended that the Congress amend title IV-D of the Social Security Act to require states to recover more of these costs in the future. Congressional action has not been initiated because the Congress is awaiting the administration’s proposals for welfare and child support reforms. (GAO/HRD-92-91) In August 1992, we reported that child abuse prevention programs had been shown to be effective. Although few in number, evaluations of these programs indicate that they reduced the incidence of abuse in high-risk families and the cost of long-range problems associated with abuse. While the federal government provides billions of dollars annually to states for foster care and other assistance for children who have already been abused, it provides relatively little funding for prevention. We recommended that, to give states incentives to implement and sustain child abuse prevention programs, the Congress amend title IV of the Social Security Act to authorize the Secretary of Health and Human Services to reimburse states, at foster care matching rates, for the cost of implementing prevention programs. The reimbursements would be provided to states that demonstrated that the programs, by reducing child abuse and related foster care placements, were paying for themselves. The Congress has taken some action to make limited funds available for possible use in child abuse prevention but has not specifically addressed our recommendation. (GAO/HRD-92-99) In September 1991, we pointed out the need for nationwide foster care data for use in federal policy deliberations. We found that the lack of common definitions or methodologies nationwide; the absence of data from states over the years; and the collection of aggregate, rather than case-level, data all served to impede the development of a national foster care information system.
We recommended that the Congress (1) reemphasize the need for prompt issuance of regulations for improved state data bases, (2) amend the timetable for states to implement automated data systems, and (3) establish specific federal policy on funding these systems. Action is being taken on these recommendations. The Congress enacted legislation providing a 75-percent match for administrative costs associated with developing a foster care data base. HHS has drafted regulations that states must meet to be eligible for the match. (GAO/HRD-91-64) In November 1992, we reported that states were struggling to enforce their child care standards and promote quality in various child care settings. While legislation authorizing the Child Care and Development Block Grant (CCDBG) had been recently enacted and provided some money to states for quality improvement activities, including enforcement, HHS regulations further restricted the amount to be used for these activities. Given this, state officials were not optimistic about CCDBG’s impact on their quality improvement and enforcement efforts, especially if state budget constraints continued and heavy caseloads worsened as new providers, paid with CCDBG funds, entered the market. We recommended to HHS that it assess whether the quantity of child care services under CCDBG would exceed the states’ capacities to ensure that those services meet an acceptable level of care and, if so, modify its regulations restricting the use of CCDBG’s quality improvement money. Action has not yet been initiated on this recommendation. (GAO/HRD-93-13) In July 1989, we reported that an estimated 19 percent of veterans receiving compensation benefits had disabilities resulting from diseases that had probably been neither caused nor aggravated by military service. Many of these diseases that are related to heredity or lifestyle resulted in benefits estimated at about $1.7 billion in 1986. 
We recommended that the Congress reconsider whether these diseases should be compensated as service-connected disabilities. The Congress has not yet taken action. (GAO/HRD-89-60) In March 1992, we reported that three VA-administered life insurance programs had sufficient excess funds to pay their own administrative costs. This would save an estimated $27 million annually in appropriated funds. We recommended that the Congress amend 38 U.S.C. 1982 to require that these administrative costs be paid from excess interest income. The Congress has not yet initiated action. (GAO/HRD-92-42) In July 1992, we reported that the operating reserves for VA’s Servicemen’s Group Life Insurance Program (SGLI) needed to be increased by $85 million by 1998. At the same time, the contingency reserves contained about $51 million in excess funds in relation to program needs. Throughout the 1980s, VA overcharged military personnel for their insurance, causing continued growth of excess reserves. We recommended that VA (1) reduce the contingency reserve to $25 million and use the excess funds to provide a portion of the additional operating reserves and (2) compute each year the true premiums to be paid by SGLI participants and adjust premiums as appropriate. VA has not initiated action on our recommendations. (GAO/HRD-92-71) In September 1992, we reported that VA’s vocational rehabilitation program did not emphasize finding jobs for veterans, that VA did not know why most veterans had dropped out of the program, and that standards for measuring service to veterans needed to be improved. 
We recommended that VA (1) meet legislative requirements related to finding and maintaining suitable employment for disabled veterans, (2) work with the Department of Labor to effectively provide job placement services, (3) determine the reasons why veterans were dropping out and take action to increase the number of veterans completing the program, and (4) establish a realistic performance measurement system. VA agreed with the recommendations and has initiated some action. (GAO/HRD-92-100) We evaluated the readability of forms used by retirees who had chosen not to select survivor benefits for their spouses. In December 1989 and in December 1992, we recommended that the Internal Revenue Service (IRS) develop model language to be used by pension plans to clarify the implications of options available to retirees and their spouses. Once implemented, this recommendation could lead to an increase in the number of elderly widowed spouses receiving income from the private pension system. IRS has taken some action. (GAO/HRD-90-20) In March 1987, we reported on management problems that SSA must address to ensure high-quality services. Our report contained numerous recommendations. While some are closed, those still open address (1) improving the long-term operational plan, (2) reexamining resources and priorities of existing automated data processing systems, (3) improving various aspects of the management information system, and (4) establishing performance standards and measurements. (GAO/HRD-87-39) In July 1991, we provided information to the Congress on debt management practices at SSA, the Railroad Retirement Board, the Office of Personnel Management, and VA. We recommended that SSA (1) assign central responsibility for debt management to the Deputy Commissioner for Finance, Assessment, and Management and (2) accelerate completion of the management information system needed to support effective debt management. 
Also, we recommended that the Director, Office of Management and Budget, direct the Secretary of Veterans Affairs to assess interest and administrative costs on overpayments, as required by the Veterans Rehabilitation and Education Amendments of 1980. (GAO/HRD-91-46)

Adequacy of the Administration on Aging's Provision of Technical Assistance for Targeting Services Under the Older Americans Act (GAO/T-PEMD-91-3)
Administration on Aging: More Federal Action Needed to Promote Service Coordination for the Elderly (GAO/HRD-91-45)
Board and Care Homes: Elderly at Risk From Mishandled Medications (GAO/HRD-92-45)
Child Abuse: Prevention Programs Need Greater Emphasis (GAO/HRD-92-99)
Child Care: States Face Difficulties Enforcing Standards and Promoting Quality (GAO/HRD-93-13)
Child Support Enforcement: Opportunity To Defray Burgeoning Federal and State Non-AFDC Costs (GAO/HRD-92-91)
Child Support Enforcement: States Proceed With Immediate Wage Withholding; More HHS Action Needed (GAO/HRD-93-99)
Debt Management: More Aggressive Actions Needed to Reduce Billions in Overpayments (GAO/HRD-91-46)
Early Intervention: Federal Investments Like WIC Can Produce Savings (GAO/HRD-92-18)
Employee Benefits: Improved Plan Reporting and CPA Audits Can Increase Protection Under ERISA (GAO/AFMD-92-14)
Employee Benefits: States Need Labor's Help Regulating Multiple Employer Welfare Arrangements (GAO/HRD-92-40)
Federal Employees' Compensation Act: Need to Increase Rehabilitation and Reemployment of Injured Workers (GAO/GGD-92-30)
Financial Audit: Department of Veterans Affairs Financial Statements for Fiscal Years 1989 and 1988 (GAO/AFMD-91-6)
Financial Audit: System and Control Problems Further Weaken the Pension Benefit Guaranty Fund (GAO/AFMD-92-1)
Financial Audit: Veterans Administration's Financial Statements for Fiscal Year 1986 (GAO/AFMD-87-38)
Financial Audit: Veterans Administration's Financial Statements for Fiscal Years 1987 and 1986 (GAO/AFMD-89-23)
Financial Audit: Veterans Administration's Financial Statements for Fiscal Years 1988 and 1987 (GAO/AFMD-89-69)
Financial Management: Opportunities for Improving VA's Internal Accounting Controls and Procedures (GAO/AFMD-89-35)
Foreign Farm Workers in U.S.: Department of Labor Action Needed to Protect Florida Sugar Cane Workers (GAO/HRD-92-95)
Foster Care: Children's Experiences Linked to Various Factors; Better Data Needed (GAO/HRD-91-64)
Foster Care: Services to Prevent Out-of-Home Placements Are Limited by Funding Barriers (GAO/HRD-93-76)
Homelessness: Access to McKinney Act Programs Improved but Better Oversight Needed (GAO/RCED-91-29)
Homelessness: Action Needed to Make Federal Surplus Property Program More Effective (GAO/RCED-91-33)
Homelessness: Federal Personal Property Donations Provide Limited Benefit to the Homeless (GAO/RCED-91-108)
Housing Programs: VA Can Reduce Its Guaranteed Home Loan Foreclosure Costs (GAO/RCED-89-58)
Immigration Reform: Verifying the Status of Aliens Applying for Federal Benefits (GAO/HRD-88-7)
Older Americans Act: More Federal Action Needed on Public/Private Elder Care Partnerships (GAO/HRD-92-94)
Pension Plans: Pension Benefit Guaranty Corporation Needs to Improve Premium Collections (GAO/HRD-92-103)
Premium Accounting System: Pension Benefit Guaranty Corporation System Must Be an Ongoing Priority (GAO/IMTEC-92-74)
Private Pensions: Protections for Retirees' Insurance Annuities Can Be Strengthened (GAO/HRD-93-29)
Private Pensions: Spousal Consent Forms Hard to Read and Lack Important Information (GAO/HRD-90-20)
Rental Housing: Housing Vouchers Cost More Than Certificates but Offer Added Benefits (GAO/RCED-89-20)
Social Security Administration: Stable Leadership and Better Management Needed To Improve Effectiveness (GAO/HRD-87-39)
Social Security: Beneficiary Payment for Representative Payee Services (GAO/HRD-92-112)
Social Security Disability: SSA Needs to Improve Continuing Disability Review Program (GAO/HRD-93-109)
Social Security: IRS Tax Identity Data Can Help Improve SSA Earnings Records (GAO/HRD-93-42)
Social Security: Many Administrative Law Judges Oppose Productivity Initiatives (GAO/HRD-90-15)
Social Security: Measure of Telephone Service Accuracy Can Be Improved (GAO/HRD-91-69)
Social Security: Need for Better Coordination of Food Stamp Services for Social Security Clients (GAO/HRD-92-92)
Social Security: Need to Improve Postentitlement Service to the Public (GAO/HRD-93-21)
Social Security: Racial Difference in Disability Decisions Warrants Further Investigation (GAO/HRD-92-56)
Social Security: Reconciliation Improved SSA Earnings Records, but Efforts Were Incomplete (GAO/HRD-92-81)
Social Security: Reporting and Processing of Death Information Should Be Improved (GAO/HRD-92-88)
Social Security: Selective Face-to-Face Interviews With Disability Claimants Could Reduce Appeals (GAO/HRD-89-22)
Social Security: Status and Evaluation of Agency Management Improvement Initiatives (GAO/HRD-89-42)
SSA Computers: Long-Range Vision Needed to Guide Future Systems Modernization Efforts (GAO/IMTEC-91-44)
The New Earned Income Credit Form Is Complex and May Not Be Needed (GAO/T-GGD-91-68)
Urban Poor: Tenant Income Misreporting Deprives Other Families of HUD-Subsidized Housing (GAO/HRD-92-60)
VA Benefits: Law Allows Compensation for Disabilities Unrelated to Military Service (GAO/HRD-89-60)
VA Life Insurance: Administrative Costs for Three Programs Should Be Paid From Excess Funds (GAO/HRD-92-42)
VA Life Insurance: Premiums and Program Reserves Need More Timely Adjustments (GAO/HRD-92-71)
Veterans Affairs IRM: Stronger Role Needed for Chief Information Resources Officer (GAO/IMTEC-91-51BR)
Veterans' Benefits: Improved Management Needed to Reduce Waiting Time for Appeal Decisions (GAO/HRD-90-62)
Veterans' Compensation: Premature Closing of VA Office in the Philippines Could Be Costly (GAO/HRD-93-96)
Vocational Rehabilitation: Better VA Management Needed to Help Disabled Veterans Find Jobs (GAO/HRD-92-100)
Vocational Rehabilitation: VA Needs to Emphasize Serving Veterans With Serious Employment Handicaps (GAO/HRD-92-133)
Welfare Benefits: States Need Social Security's Death Data to Avoid Payment Error or Fraud (GAO/HRD-91-73)
Welfare Eligibility: Programs Treat Indian Tribal Trust Fund Payments Inconsistently (GAO/HRD-88-38)
Welfare Programs: Ineffective Federal Oversight Permits Costly Automated System Problems (GAO/IMTEC-92-29)
Welfare to Work: Implementation and Evaluation of Transitional Benefits Need HHS Action (GAO/HRD-92-118)
Welfare to Work: JOBS Participation Rate Data Unreliable for Assessing States' Performance (GAO/HRD-93-73)
Welfare to Work: States Move Unevenly to Serve Teen Parents in JOBS (GAO/HRD-93-74)

As health care financier and insurer, the federal government serves over 35 million elderly and disabled persons under Medicare, an estimated 33 million poor under Medicaid, and 9 million active and retired federal employees and their families under the Federal Employees Health Benefits Program. The government's primary programs for financing health care, Medicare and Medicaid, account for estimated federal spending of over $260 billion in fiscal year 1994; an additional $70 billion in state and local funds is expected to be spent on Medicaid. Our primary objective in reviewing these programs is to find ways to reduce costs without adversely affecting beneficiary access to quality care. Other important objectives are to (1) assess the processes used to control and identify fraud, abuse, and mismanagement in the programs; (2) evaluate quality-of-care assurance systems; and (3) review issues related to beneficiary access to care. Throughout the 1980s, the Congress looked to Medicare for deficit reduction opportunities, and billions of dollars in savings were achieved. Medicaid became a means of expanding health care services for those too poor to obtain them, particularly pregnant women and children.
But the 1990s are presenting new challenges to these programs and to health care in general. Health care costs have skyrocketed, and the nation's uninsured and underinsured population continues to grow. New approaches for delivering health care services to millions of Americans are being tried. Our work continues to support many of the Medicare and Medicaid program initiatives and legislative changes undertaken by the Congress. In August 1993, the Omnibus Budget Reconciliation Act (OBRA) of 1993 became law. This act was the first major legislation affecting the health-financing programs since 1990 and contains a number of provisions related to our recommendations. For example, OBRA:

- Reduces Medicare's clinical diagnostic laboratory service fee schedule payment rates. The act phases in the reduction, which will reach the level we recommended in 1996. This provision will result in 5-year savings to Medicare of $3.3 billion.

- Implements our recommendation to equalize Medicare payment rates for anesthesia services whether anesthesiologists directly furnish the service or certified registered nurse anesthetists furnish it under the medical direction of an anesthesiologist. This action is being phased in over 5 years, and Medicare savings over that period will total $429 million.

- Shifts certain supply items, used by Medicare patients in their homes, to a different fee schedule category, as we had recommended. This will result in lower payment rates for the shifted items and achieve 5-year savings of $97 million.

- Eliminates the Medicare requirement to set higher payment limits for hospital-based home health agencies than for freestanding agencies. We reported that the higher limits were not necessary to ensure beneficiary access to services. OBRA eliminated the differential in payment limits, which will reduce Medicare costs over 2 years by $220 million.

- Prohibits physicians from referring Medicare and Medicaid beneficiaries to a variety of providers and suppliers if the physicians have ownership interests in them. We reported that physicians who had ownership interests in laboratories and diagnostic imaging facilities ordered more, and more expensive, services than physicians who did not have such ownership interests. The new provision will result in an estimated 5-year savings of $387 million.

- Makes a number of changes to Medicare and Medicaid rules governing when employer-sponsored group health insurance pays for beneficiaries' services. These changes include standardizing the definition of employers covered by Medicare's secondary payer provisions, extending their effective dates, and establishing a registry of Medicare and Medicaid beneficiaries with coverage under employer-sponsored plans. We have reported and testified many times on problems with, and opportunities to improve, the Medicare and Medicaid secondary payer programs. Our work contributed to enactment of these OBRA provisions. In total, the congressional actions in this area will result in a 5-year savings of $5.6 billion.

- Clarifies, as we recommended, when anticancer drugs that are used in situations not covered by their Food and Drug Administration-approved labeling will be paid for by Medicare. This will increase the uniformity of payment determinations across the country and ease administrative and financial burdens for beneficiaries and providers.

- Puts new limitations on how states distribute payments to hospitals with a disproportionate share of Medicaid and indigent patients. A hospital may not be designated as a disproportionate share hospital unless it has a Medicaid inpatient utilization rate of at least 1 percent. In addition, payments to disproportionate share hospitals may not exceed the cost of providing care less amounts received from Medicaid and the uninsured. Two recent reports on the disproportionate share program highlighted issues associated with the distribution of funds to hospitals participating in the program, and our work contributed to these legislative changes.

- Places additional restrictions on the transfer of assets by persons applying for Medicaid. The "look back" period for asset transfers is extended to 36 months, and individuals transferring assets within this period will have to wait an extended period before they are eligible to participate in Medicaid. Our report contributed to these changes. Savings over 5 years will total $650 million.

- Requires states to recover the costs of nursing facility and other long-term care services furnished to Medicaid beneficiaries from the estates of such beneficiaries. The law further requires states to establish hardship procedures for waiver of recovery in cases where undue hardship would result. We recommended similar actions. Savings over 5 years will amount to $310 million.

During the year, we continued to monitor efforts by Medicare contractors to recover duplicate payments from providers and private insurers. Our previous work identified ways that the Health Care Financing Administration (HCFA), through its Medicare contractors, could recover several hundred million dollars in mistaken payments. So far, hospitals and other providers have refunded $462 million in outstanding credit balances. Also, Medicare contractors continue to recover payments for services that were subsequently determined to be the responsibility of private insurers. Backlogs of mistaken payments have been reduced from over $1 billion to about $135 million. Our most recent testimony highlighted the difficulties in identifying beneficiaries with other health insurance and suggested approaches that HCFA should pursue in its efforts to obtain insurance information before payments are made.
Our review of Medicare audits of dialysis facility cost reports found that the audits were incomplete and poorly performed. If the audits had been adequately performed, additional costs would probably have been disallowed, which would have resulted in a reduction of the median cost per treatment. Our work relating to Medicare payments for braces and artificial limbs showed that there was no need to establish separate fees for professional services because Medicare's payment amounts for these items already included a component for the practitioner's professional services. Other fiscal year 1993 reports dealt with (1) HCFA management weaknesses relating to the lack of information on program safeguard activities, (2) changes in drug prices paid by health maintenance organizations and hospitals since the enactment of the OBRA 1990 Medicaid drug rebate provisions, (3) Medicare payment rates for mammography, and (4) Medicaid drug fraud. Virtually all states have already established, or will soon establish, managed care programs for their low-income populations. Such programs can provide an opportunity to improve access while delivering quality comparable to that of more-traditional fee-for-service programs. Our report and testimony showed that strong monitoring and oversight were needed to help ensure patient access. Our report on Oregon's managed care program demonstrated the need for additional program safeguards. As a result, the Secretary of Health and Human Services required the state to provide assurances that the program has enough providers to serve the intended population and to report on participating plans' quality assurance programs, financial viability, and disclosure of ownership.
We recommended that HCFA (1) survey the technical component costs incurred by facilities providing radiology services and revise the fee schedule to more accurately reflect the costs incurred and (2) periodically adjust technical component payments to reflect changing costs, with annual payment reviews for procedures using high-cost technologies. This would save Medicare a significant amount of money and, even though payments per scan would decrease, providers would still realize profits because there would be fewer machines and utilization per machine would rise. (GAO/HRD-92-59) We recommended that HCFA develop and issue specific coverage criteria for durable medical equipment that it identifies as subject to unnecessary payments. We also recommended that HCFA require physicians to provide narrative justifications for this equipment on certificates of medical necessity. These actions could substantially reduce Medicare expenditures. (GAO/HRD-92-64) We reported that the extra payments to Medicare teaching hospitals were too high and that the Congress should reduce the percentage add-on payments that teaching hospitals received. About $1 billion could be saved annually. (GAO/HRD-89-33) We recommended that the Congress amend the law to provide a fixed upper limit on the size of monetary penalties in lieu of the current cost-based limit. This would provide a more substantial penalty, and penalty amounts would be determined in the same manner as under other provisions administered by the Inspector General of Health and Human Services. (GAO/HRD-89-18) Funding of Medicare's safeguard activities has not kept pace with program growth. As a result, opportunities to save hundreds of millions of dollars annually have been lost. The basic problem is that, under deficit control legislation, increasing safeguard funding requires reducing federal expenditures in benefit programs or raising taxes.
We recommended that, because safeguard activities were cost-effective, returning over $10 in savings for every dollar spent on them, the Congress establish a process whereby increased funding of safeguard activities would not necessitate budget cuts in other areas. (GAO/HRD-91-67) We recommended that the Mayor of the District of Columbia establish a demonstration or pilot project focusing on the enrollment of Medicaid-eligible persons at hospitals. The project could (1) identify and describe the eligible patients having the most difficulty getting enrolled, (2) identify the assistance needs of these groups, and (3) test methods of providing these patients with needed assistance through outstationing of eligibility workers and other means. (GAO/HRD-93-28) We recommended that the Secretary of Health and Human Services direct the HCFA Administrator to develop an overall strategy to address prescription diversion as part of the larger problem of Medicaid fraud. This would highlight the importance of lessons learned from state initiatives and their applicability to health care in general. One key element of such a strategy might be the designation of a unit within HCFA responsible for (1) conducting continuing evaluations of state initiatives targeting prescription diversion and other Medicaid fraud and (2) providing guidance and technical assistance tailored to individual state problems.
(GAO/HRD-93-118)

Alleged Lobbying Activities: Office for Substance Abuse Prevention (GAO/HRD-93-100)
District of Columbia: Barriers to Medicaid Enrollment Contribute to Hospital Uncompensated Care (GAO/HRD-93-28)
Durable Medical Equipment: Specific HCFA Criteria and Standard Forms Could Reduce Medicare Payments (GAO/HRD-92-64)
Health Insurance: Vulnerable Payers Lose Billions to Fraud and Abuse (GAO/HRD-92-69)
Long-Term Care Case Management: State Experiences and Implications for Federal Policy (GAO/HRD-93-52)
Medicaid Drug Fraud: Federal Leadership Needed to Reduce Program Vulnerabilities (GAO/HRD-93-118)
Medicaid: Ensuring that Noncustodial Parents Provide Health Insurance Can Save Costs (GAO/HRD-92-80)
Medicaid: HealthPASS: An Evaluation of a Managed Care Program for Certain Philadelphia Recipients (GAO/HRD-93-67)
Medicaid: Oversight of Health Maintenance Organizations in the Chicago Area (GAO/HRD-90-81)
Medicare: Excessive Payments Support the Proliferation of Costly Technology (GAO/HRD-92-59)
Medicare: Experience Shows Ways to Improve Oversight of Health Maintenance Organizations (GAO/HRD-88-73)
Medicare: Further Changes Needed to Reduce Program and Beneficiary Costs (GAO/HRD-91-67)
Medicare: HCFA Needs to Take Stronger Actions Against HMOs Violating Federal Standards (GAO/HRD-92-11)
Medicare: HCFA Should Improve Internal Controls Over Part B Advance Payments (GAO/HRD-91-81)
Medicare: Indirect Medicare Education Payments Are Too High (GAO/HRD-89-33)
Medicare: Millions of Dollars in Mistaken Payments Not Recovered (GAO/HRD-92-26)
Medicare: One Scheme Illustrates Vulnerabilities to Fraud (GAO/HRD-92-76)
Medicare: Over $1 Billion Should Be Recovered From Primary Health Insurers (GAO/HRD-92-52)
Medicare: Payments for Clinical Laboratory Test Services Are Too High (GAO/HRD-91-59)
Medicare Physician Payment: Geographic Adjusters Appropriate But Could Be Improved With New Data (GAO/HRD-93-93)
Medicare: PRO Review Does Not Assure Quality of Care Provided by Risk HMOs (GAO/HRD-91-48)
Medicare: Reasonableness of Health Maintenance Organization Payments Not Assured (GAO/HRD-89-41)
Medicare: Renal Facility Cost Reports Probably Overstate Costs of Patient Care (GAO/HRD-93-70)
Medicare: Separate Payment for Fitting Braces and Artificial Limbs Is Not Needed (GAO/HRD-93-98)
Medicare: Statutory Modifications Needed for the Peer Review Program Monetary Penalty (GAO/HRD-89-18)
Medicare: Variations in Payments to Anesthesiologists Linked to Anesthesia Time (GAO/HRD-91-43)
Screening Mammography: Higher Medicare Payments Could Increase Costs Without Increasing Use (GAO/HRD-93-50)

The federal government is the guardian of the public health. Among its functions in this role are providing research funds, support for educating and training health professionals, and surveillance of contagious diseases; overseeing food and drugs; providing block grants to states for mental health services, drug and alcohol programs, and maternal and child health services; and providing health care services to underserved areas and population groups. The Public Health Service, through its numerous administrations and agencies, carries out most of these tasks. Our work has made a significant contribution to the debate on health insurance reform as it relates to the affordability and availability of health care. We issued reports that discussed approaches to addressing rising health care costs and declining availability of health insurance. We have continued to examine foreign, state, and local models of reform, building on our earlier work. Our reviews of German health care reform and of superior access and cost containment in Rochester, New York, suggest that universal access to health insurance is an achievable goal entailing changes in the role of government, the structure of the health finance system, and the financial responsibilities of individuals and employers. We have also supported congressional oversight of the Public Health Service.
Our work has pointed out the need for more sustained and systematic management attention by the Department of Health and Human Services (HHS) in the issuance of Food and Drug Administration (FDA) regulations, in the regulation of hospital sterilants, and in monitoring the national organ transplant program. During fiscal year 1993, the results of much of our work were presented to the Speaker of the House of Representatives and the Majority Leader of the Senate in a transition report on health care reform. The report discussed major policy, management, and program issues facing the Congress and the new administration. In our April 1993 report, we identified weaknesses in the national system, established by federal legislation, to increase the supply of transplant donor organs (such as kidneys and hearts) and ensure an equitable distribution of organ donations. We found that some organ procurement organizations did not follow the policy for ranking potential recipients of donor organs, did not use areawide lists when selecting patients to receive organs, and did not keep documentation of their selection decisions. As a result of these practices, some higher-ranked patients at transplant centers left out of the selection process could miss their chance to receive an organ transplant. We recommended that HHS (1) require procurement organizations to use established criteria for allocating donor organs and selecting organ recipients and (2) establish criteria for measuring the effectiveness of organ procurement organizations. (GAO/HRD-93-56) In June 1993, we reported on FDA's regulation of hospital sterilants and disinfectants used to clean medical instruments. These products are supposed to protect patients from the serious infections that can result when unsterile instruments are used in treating them. FDA requires manufacturers to submit evidence that their products are safe and effective in killing harmful microorganisms before they are marketed.
We found that only a few sterilant and disinfectant manufacturers had registered their products with FDA and that there were hundreds of products on the market that had not been authorized by FDA, as required by law. We recommended that the FDA Commissioner ensure that all current and future manufacturers of sterilants and disinfectants comply with the requirements for marketing their products. (GAO/HRD-93-79) Our recent review of the three major sources of information on use of illegal drugs showed that the nation lacked good evidence on which to gauge progress in drug control. Surveys of households and high school students do not cover the populations at highest risk and, for those who are surveyed, self-reports of drug use are questionable. We recommended that the Secretary of Health and Human Services make new efforts to validate the commonly used self-report surveys, that the Congress change current laws to require less frequent collection of data on the general population, and that the Secretary of Health and Human Services expand special studies of high-risk groups to fill the gaps in current surveys. 
(GAO/PEMD-93-18)

Access to Health Care: States Respond to Growing Crisis (GAO/HRD-92-70)
ADMS Block Grant: Drug Treatment Services Could Be Improved by New Accountability Program (GAO/HRD-92-27)
Adolescent Drug Use Prevention: Common Features of Promising Community Programs (GAO/PEMD-92-2)
Biotechnology: Managing the Risks of Field Testing Genetically Engineered Organisms (GAO/RCED-88-27)
Board and Care Homes: Elderly at Risk From Mishandled Medications (GAO/HRD-92-45)
Child Abuse: Prevention Programs Need Greater Emphasis (GAO/HRD-92-99)
Childhood Immunization: Opportunities to Improve Immunization Rates at Lower Cost (GAO/HRD-93-41)
Community Health Centers: Administration of Grant Awards Needs Strengthening (GAO/HRD-92-51)
Drug Abuse Research: Federal Funding and Future Needs (GAO/PEMD-92-5)
Drug Abuse: Research on Treatment May Not Address Current Needs (GAO/HRD-90-114)
Drug Treatment: Despite New Strategy, Few Federal Inmates Receive Treatment (GAO/HRD-91-116)
Early Intervention: Federal Investments Like WIC Can Produce Savings (GAO/HRD-92-18)
FDA Regulations: Sustained Management Attention Needed to Improve Timely Issuance (GAO/HRD-92-35)
Food Safety and Quality: FDA Strategy Needed to Address Animal Drug Residues in Milk (GAO/RCED-92-209)
Food Safety and Quality: Uniform, Risk-based Inspection System Needed to Ensure Safe Food Supply (GAO/RCED-92-152)
Food Safety: Building a Scientific, Risk-Based Meat and Poultry Inspection System (GAO/T-RCED-93-22)
Freedom of Information: FDA's Program and Regulations Need Improvement (GAO/HRD-92-2)
Health Information Systems: National Practitioner Data Bank Continues to Experience Problems (GAO/IMTEC-93-1)
Health Insurance: Vulnerable Payers Lose Billions to Fraud and Abuse (GAO/HRD-92-69)
Hospital Sterilants: Insufficient FDA Regulation May Pose a Public Health Risk (GAO/HRD-93-79)
Long-Term Care Case Management: State Experiences and Implications for Federal Policy (GAO/HRD-93-52)
Long-Term Care Insurance: Risks to Consumers Should Be Reduced (GAO/HRD-92-14)
Management of HHS: Using the Office of the Secretary to Enhance Departmental Effectiveness (GAO/HRD-90-54)
Maternal and Child Health: Block Grant Funds Should Be Distributed More Equitably (GAO/HRD-92-5)
Medical Technology: For Some Cardiac Pacemaker Leads, the Public Health Risks Are Still High (GAO/PEMD-92-20)
Medical Technology: Quality Assurance Needs Stronger Management Emphasis and Higher Priority (GAO/PEMD-92-10)
Medical Technology: Quality Assurance Systems and Global Markets (GAO/PEMD-93-15)
Medical Waste Regulation: Health and Environmental Risks Need to Be Fully Assessed (GAO/RCED-90-86)
Methadone Maintenance: Some Treatment Programs Are Not Effective; Greater Federal Oversight Needed (GAO/HRD-90-104)
Nuclear Health and Safety: More Attention to Health and Safety Needed at Pantex (GAO/RCED-91-103)
Occupational Safety & Health: OSHA Action Needed to Improve Compliance With Hazard Communication Standard (GAO/HRD-92-8)
Organ Transplants: Increased Effort Needed to Boost Supply and Ensure Equitable Distribution of Organs (GAO/HRD-93-56)
Pesticides: Need To Enhance FDA's Ability To Protect the Public From Illegal Residues (GAO/RCED-87-7)
Public Health Service: Evaluation Set-Aside Has Not Realized Its Potential to Inform the Congress (GAO/PEMD-93-13)
Public Housing: Housing Persons With Mental Disabilities With the Elderly (GAO/RCED-92-81)
State Health Care Reform: Federal Requirements Influence State Reforms (GAO/T-HRD-92-55)
Truck Transport: Little Is Known About Hauling Garbage and Food in the Same Vehicles (GAO/RCED-90-161)

The administration of justice issue area encompasses a wide range of federal activities, including (1) civil and criminal law enforcement, such as antitrust, firearms licensing, and drug abuse control; (2) litigative and judicial activities, such as sentencing reform; (3) correctional activities; and (4) immigration control and criminal justice assistance.
As part of the Congress’ effort to identify successful drug abuse control programs, we examined the Treatment Alternatives to Street Crime (TASC) Program. We found that TASC appeared promising as a way to help reduce offender drug use. Several barriers to TASC implementation exist, however, including disagreement on how TASC should be funded and lack of impact because TASC programs are not located in many areas that have major drug problems. In its 1992 National Drug Control Strategy, the Office of National Drug Control Policy (ONDCP) also concluded that TASC had promise and recommended that the program be expanded. ONDCP has not, however, taken any specific actions to do so. We recommended that ONDCP take the lead on expanding TASC. We also recommended that the ONDCP Director, in concert with relevant federal and state officials, identify additional cities that might benefit from TASC and reach agreement on TASC funding. ONDCP plans, in its 1993 interim drug strategy, to emphasize the need for additional TASC programs. In the area of inspecting firearms licensees, we urged the Bureau of Alcohol, Tobacco, and Firearms (ATF) to randomly sample and inspect dealer licensees in order to obtain information that could help ATF better manage its compliance inspection efforts. In that regard, ATF initiated an effort to obtain statistically valid information with which it could make projections and provide more accurate information to the Congress about the dealer licensee universe.
Regarding processing applicants for new firearms licenses, we urged ATF to expedite issuance of licenses to applicants who passed required background checks but for whom ATF field offices determined a field inspection was not necessary. Subsequently, ATF revised its license-processing procedures to ensure expedited license issuances when such circumstances were met. Our work on white-collar crime continued to focus on the federal government’s response to the bank and thrift fraud crisis. We pointed out that fraud and wrongdoing played a significant role in the financial institution crisis and called for enhanced, coordinated efforts by Justice, Treasury, the Resolution Trust Corporation (RTC), and the Federal Deposit Insurance Corporation (FDIC). The collection of fines and restitution ordered in criminal cases continues to be a problem. The great majority of fines and restitution remain unpaid. Criminal debt collection is plagued by multiple agency involvement, unclear delegations of authority, and the lack of a centralized collection and tracking system. The U.S. Courts National Fine Center is designed, in part, to address such problems. We reported, however, that implementation of the Fine Center has been delayed, in part because of the poor state of records on criminal debts. We have carried out a significant body of work on money laundering, focusing primarily on improvements in enforcing the provisions of the Bank Secrecy Act and section 6050I of the Tax Code and on ways to better use the reports required by these laws. We called for increasing the involvement of the states in money-laundering enforcement and recommended improvements that would result in increased state use of federal money-laundering reports. The Judicial Conference has adopted our recommendation that it provide the Congress data explaining the policies, formal and informal, it uses to assess the need for additional judgeships.
It has also begun to develop a more accurate, useful measure of appellate judge workload, as recommended. The Bureau of Prisons’ (BOP) total inmate population is growing at the rate of about 700 inmates per month. To keep pace without increasing overcrowding, BOP would need to open the equivalent of one new low-security facility every month. In response to our recommendations, BOP established a double-bunking policy in 1991, which saved about $210 million for existing institutions and is expected to save about $260 million in new construction through fiscal year 1994. BOP revised the policy further to fully double-bunk minimum-security and low-security facilities and to include some double-bunking at administrative-level and high-security facilities, thus saving additional funds because new capacity requirements are further reduced. By taking full advantage of less costly halfway houses for inmates with short sentences or inmates nearing the end of their prison terms, BOP can decrease prison crowding without building new prisons. As a result of our recommendations, BOP implemented new guidance on the use of halfway houses and has increased its use of total available contract beds from 73 percent in 1991 to 87 percent as of November 1992. In our report on drug use measurement, we recommended to the Congress that part A of title V of the Public Health Service Act be amended to provide that the Secretary of Health and Human Services collect survey data only biennially, rather than each year, on the national prevalence of the various forms of substance abuse among high school students and the general population. But if local or regional indicators portend an increase in drug use, then the Secretary should have the authority to initiate new or augment current studies to determine the nature and the degree of the problem.
We also recommended that the Secretary of Health and Human Services (1) develop or improve supplementary data sources to more appropriately determine heroin and cocaine prevalence patterns and trends; (2) design and conduct a systematic program for the study of drug use prevalence rates among underrepresented high-risk groups; and (3) give high priority to validating self-reports of the use of illicit drugs, focusing particularly on objective techniques, such as hair testing. In addition, we recommended that the Director, National Institute of Justice, (1) review the practicality of improving the Drug Use Forecasting design and (2) give priority to creating a drug use forecasting arrestee data base that could be generalized to booked arrestees in the geographic areas surveyed. (GAO/PEMD-93-18) In our testimony on misuse of criminal justice information contained in the National Crime Information Center (NCIC), we identified enough examples of misuse to recommend that the Congress enact legislation with strong criminal sanctions directed specifically at the misuse of NCIC. Further, we recommended that the Federal Bureau of Investigation Director and the NCIC Advisory Policy Board re-evaluate the security specifications in the NCIC Security Policy and, as a minimum, amend the policy to endorse and encourage state and local users to enhance their security features. (GAO/T-GGD-93-41) The federal government’s response to the bank and thrift fraud crisis is not as coordinated as it should be. Justice’s Special Counsel for Financial Institution Fraud has not effectively managed this response from a governmentwide perspective. We recommended that the Special Counsel determine the adequacy of Justice and non-Justice resources devoted to financial institution fraud and develop measures for gauging the overall effectiveness of the government’s response.
(GAO/GGD-93-48) As a part of efforts to address wrongdoing in connection with failed thrifts, RTC is responsible for pursuing professional liability claims against those whose alleged professional misconduct caused losses to failed thrifts. But certain RTC management actions have disrupted the professional liability program. RTC needs to take steps to stabilize this program, including working with FDIC to ensure an orderly transfer of functions to FDIC. (GAO/T-GGD-92-42) The National Fine Center is designed to address a number of problems with the collection of criminal fines and restitution orders. Full implementation of the National Fine Center has been delayed, and the design has security flaws. We recommended that the Administrative Office of the Courts take a number of steps to improve the security of the system to better protect the sensitive data against unauthorized access. (GAO/GGD-93-95) Greater involvement by state law enforcement in addressing money laundering would help reduce the profitability of crime. The federal government could do more to help the states by providing data from reports required by the Bank Secrecy Act and section 6050I of the Internal Revenue Code. We recommended that the Congress amend the disclosure provisions of the Internal Revenue Code to give the Secretary of the Treasury permanent authority to disclose information reported on Internal Revenue Service Forms 8300 and to allow states access to the data on the same basis as federal law enforcement agencies. (GAO/GGD-93-1) Our report on sentencing guidelines identified major shortcomings in the data available to determine the impact of the guidelines. We recommended that the Congress direct the U.S. Sentencing Commission to continue its efforts to analyze sentencing disparity under the sentencing guidelines, particularly unwarranted disparity. Action has not yet been taken.
(GAO/GGD-92-93) In our general management report on the Immigration and Naturalization Service (INS), we recommended that the Commissioner of INS set priorities within the framework of the overall INS mission and reorganize the agency’s field structure. We also made recommendations to reduce the overlap and the duplication in the enforcement program, improve allocation of resources in the examination and inspection programs, and strengthen the financial and information management programs. INS has taken steps to address some of these recommendations, particularly in the financial management area. (GAO/GGD-91-28) Unless the programs designed to prevent aliens from illegally entering the country and to remove those who have no legal basis to remain here are made more effective, INS has little hope of detaining any more than a small fraction of the criminal and other aliens meeting its detention criteria. Inevitably, proposals to tighten the nation’s borders and to expedite the expulsion of deportable aliens have to consider aliens’ rights to constitutionally based protections and must deal with complex and sensitive issues, such as potential strains in relationships with Mexico and other nations, humanitarian concerns relating to equitable treatment of aliens, and difficult budgetary tradeoffs. Nonetheless, until the Congress comes to grips with these problems and tradeoffs, little progress in resolving detention issues can be expected. The Congress may therefore wish to address border security and deportation issues in the course of future deliberations on immigration policy, specifically: How tight do we want our borders to be? How aggressively should we expel deportable aliens? How much additional funding are we willing to invest in these efforts?
(GAO/GGD-92-85) Our general management review of the Customs Service found that Customs cannot adequately ensure that it is meeting its responsibilities to combat unfair foreign trade practices or protect the public from unsafe goods because of interrelated problems in the management culture, including weak mission planning and outdated organizational structures. In September 1992, we recommended that Customs institute a strategic management process to set priorities for its trade enforcement strategy, establish measurable performance objectives, and monitor progress toward achieving them. Customs has formed task forces to address these problems, but it has made little progress to date. We also recommended that Customs evaluate the adequacy of its current headquarters organizational structure and its relationship to the new trade enforcement strategy that it will develop. In addition, we recommended that the Congress remove existing legislative provisions prohibiting Customs from planning changes to its field structure. Customs has a number of actions in process responsive to our recommendations. It has developed a draft 5-year strategic plan and plans to develop measures to assess performance against the plan’s goals. It has formed a task force to establish a statistically valid approach to assessing compliance with the trade laws. The preliminary results of the task force’s work confirm our findings that Customs has a greater noncompliance problem than it realized. Customs sought and obtained congressional repeal of the legislative provisions prohibiting it from planning changes to its field structure. It now has formed a task force to develop a proposal for a new organizational structure.
(GAO/GGD-92-123)

Asset Forfeiture: Improved Guidance Needed for Use of Shared Assets (GAO/GGD-92-115)
Asset Forfeiture: Noncash Property Should Be Consolidated Under the Marshals Service (GAO/GGD-91-97)
Bank and Thrift Criminal Fraud: The Federal Commitment Could Be Broadened (GAO/GGD-93-48)
Bank and Thrift Failures: FDIC and RTC Could Do More to Pursue Professional Liability Claims (GAO/T-GGD-92-42)
Bankruptcy Administration: Justification Lacking for Continuing Two Parallel Programs (GAO/GGD-92-133)
Child Abuse: Prevention Programs Need Greater Emphasis (GAO/HRD-92-99)
Customs Service and INS: Dual Management Structure for Border Inspections Should Be Ended (GAO/GGD-93-111)
Customs Service: Comments on the Customs Modernization and Informed Compliance Act (GAO/T-GGD-92-22)
Customs Service: 1911 Act Governing Overtime Is Outdated (GAO/GGD-91-96)
Customs Service: Trade Enforcement Activities Impaired by Management Problems (GAO/GGD-92-123)
Defense Procurement Fraud: Justice’s Overall Management Can Be Enhanced (GAO/GGD-88-96)
Document Security: Justice Can Improve Its Controls Over Classified and Sensitive Documents (GAO/GGD-93-134)
Drug Abuse: Research on Treatment May Not Address Current Needs (GAO/HRD-90-114)
Drug Control: Communications Network Funding and Requirements Uncertain (GAO/NSIAD-92-29)
Drug Control: Inadequate Guidance Results in Duplicate Intelligence Production Efforts (GAO/NSIAD-92-153)
Drug Control: Treatment Alternatives Program for Drug Offenders Needs Stronger Emphasis (GAO/GGD-93-61)
Drug Treatment: Despite New Strategy, Few Federal Inmates Receive Treatment (GAO/HRD-91-116)
Drug Use Measurement: Strengths, Limitations, and Recommendations for Improvement (GAO/PEMD-93-18)
Drug War: Drug Enforcement Administration Staffing and Reporting in Southeast Asia (GAO/NSIAD-93-82)
EEO at Justice: Progress Made but Underrepresentation Remains Widespread (GAO/GGD-91-8)
Federal Jail Bedspace: Cost Savings and Greater Accuracy Possible in the Capacity Expansion Plan (GAO/GGD-92-141)
Federal Judiciary: How the Judicial Conference Assesses the Need for More Judges (GAO/GGD-93-31)
Federal Prisons: Inmate and Staff Views on Education and Work Training Programs (GAO/GGD-93-33)
Federal Tax Deposit System: IRS Can Improve the Federal Tax Deposit System (GAO/AFMD-93-40)
Financial Audit: IRS Significantly Overstated Its Accounts Receivable Balance (GAO/AFMD-93-42)
Illegal Aliens: Despite Data Limitations, Current Methods Provide Better Population Estimates (GAO/PEMD-93-25)
Immigration Control: Immigration Policies Affect INS Detention Efforts (GAO/GGD-92-85)
Immigration Management: Strong Leadership and Management Reforms Needed to Address Serious Problems (GAO/GGD-91-28)
Immigration Reform: Verifying the Status of Aliens Applying for Federal Benefits (GAO/HRD-88-7)
Information Management: Immigration and Naturalization Service Lacks Ready Access to Essential Data (GAO/IMTEC-90-75)
Justice Automation: Tighter Computer Security Needed (GAO/IMTEC-90-69)
Money Laundering: State Efforts To Fight It Are Increasing But More Federal Help Is Needed (GAO/GGD-93-1)
National Crime Information Center: Legislation Needed to Deter Misuse of Criminal Justice Information (GAO/T-GGD-93-41)
National Fine Center: Expectations High, but Development Behind Schedule (GAO/GGD-93-95)
Office of Justice Programs: Discretionary Grants Reauthorization (GAO/GGD-93-23)
Prison Inmates: Better Plans Needed Before Felons Are Released (GAO/GGD-93-92)
Resolution Trust Corporation: Additional Monitoring of Basic Ordering Agreements Needed (GAO/GGD-93-107)
Resolution Trust Corporation: Affordable Multifamily Housing Program Has Improved but More Can Be Done (GAO/GGD-92-137)
Resolution Trust Corporation: A More Flexible Contracting-Out Policy Is Needed (GAO/GGD-91-136)
Resolution Trust Corporation: Assessing Portfolio Sales Using Participating Cash Flow Mortgages (GAO/GGD-92-33BR)
Resolution Trust Corporation: Asset Pooling and Marketing Practices Add Millions to Contract Costs (GAO/GGD-93-2)
Resolution Trust Corporation: Better Assurance Needed That Contractors Meet Fitness and Integrity Standards (GAO/GGD-93-127)
Resolution Trust Corporation: Controls Over Asset Valuations Do Not Ensure Reasonable Estimates (GAO/GGD-93-80)
Resolution Trust Corporation: Effectiveness of Auction Sales Should Be Demonstrated (GAO/GGD-92-7)
Resolution Trust Corporation: Loan Portfolio Pricing and Sales Process Could Be Improved (GAO/GGD-93-116)
Resolution Trust Corporation: 1992 Washington/Baltimore Auctions Planned and Managed Poorly (GAO/GGD-93-115)
Resolution Trust Corporation: Performance Assessment for 1991 (GAO/T-GGD-92-14)
Resolution Trust Corporation: Progress Under Way in Minority and Women Outreach Program for Outside Counsel (GAO/GGD-91-121)
Resolution Trust Corporation: Subcontractor Cash Management Practices Violate Policy and Reduce Income (GAO/GGD-93-7)
Resolution Trust Corporation: Survey Results on RTC’s Communication and Real Estate Marketing (GAO/GGD-92-134BR)
Resolution Trust Corporation: Timelier Action Needed to Locate Missing Asset Files (GAO/GGD-93-76)
Sentencing Guidelines: Central Questions Remain Unanswered (GAO/GGD-92-93)
Tax Administration: IRS’ Management of Seized Assets (GAO/T-GGD-92-65)
Thrift Failures: Actions Needed to Stabilize RTC’s Professional Liability Program (GAO/GGD-93-105)
U.S. Attorneys: Better Models Can Reduce Resource Disparities Among Offices (GAO/GGD-91-39)
U.S. Customs Service: Limitations in Collecting Harbor Maintenance Fees (GAO/GGD-92-25)
War on Drugs: Federal Assistance to State and Local Drug Enforcement (GAO/GGD-93-86)

In 1993, the federal government spent about $150 billion in pay and benefits for over 3 million civilian employees. The effectiveness of federal agencies in achieving their missions depends largely on the quality, the motivation, and the performance of these employees.
Recruiting, hiring, training, and managing a quality work force is the foundation of effective governance. In recent years, several significant actions have been taken on the basis of our recommendations. In our February 1992 report, we noted that the Office of Personnel Management (OPM) had established a review process to ensure that conversions of appointments, from political to career, adhered to merit system principles but that not all of OPM’s examining offices had implemented the process. Consequently, conversions whose propriety was questionable had not been reviewed by OPM. We recommended that the OPM Director ensure that (1) procedures were established in all of its examining offices to identify and review conversions within their jurisdictions and (2) the review process be revised to include the pre-appointment review of conversions at agencies to which OPM had delegated examining authority. On February 21, 1992, OPM implemented our recommendations, providing greater assurance that career appointments granted political appointees were based on merit principles. In July 1991 and February 1992, we reported that stronger controls were needed to reduce the risk of fraud and abuse and to lower administrative costs in the Federal Employees Health Benefits Program (FEHBP). Program funds paid to fee-for-service health insurance plans are highly vulnerable to fraud and abuse and, in 1988, FEHBP’s operational expense ratio was 51 percent higher than the average ratio for other large insured health benefits programs we reviewed and 89 percent higher than the average ratio for programs that were self-insured. We have made numerous recommendations to help OPM strengthen controls over FEHBP. To strengthen controls against fraud and abuse, OPM developed minimum internal control and quality assurance standards for financial claims and processing controls, which will become formal parts of the 1994 service charge negotiations.
Moreover, it is working with the Office of Management and Budget on procedures to implement cost accounting standards in the program and intends to make them effective with the 1995 contracts. OPM also now requires carriers to submit semiannual reports on the number and the status of fraud and abuse cases pursued and is continuing to work with carriers and its Office of the Inspector General to implement a sanctions program. Finally, in February 1993, OPM started an activity to prevent payments on contracts between debarred providers and carriers. Also, as a result of our recommendations, OPM negotiated administrative expense cuts with the fee-for-service carriers for 1993 and the next 2 contract years. The administrative expense reductions represent permanent decreases in the administrative expense bases such that, over the 3 years, FEHBP will save $43.3 million. In a report assessing the services job seekers were receiving at OPM’s Federal Job Information Centers, we recommended several steps OPM should take to give the Centers greater customer focus. Consistent with our recommendations, on September 14, 1992, OPM reported to the Chairs of the cognizant congressional subcommittees that several steps had been taken at some Centers in response to our recommendations. These included (1) expanding Centers’ hours to coincide with the hours of the buildings in which they were located, (2) ensuring there were enough chairs and tables to accommodate job seekers, (3) improving telephone access, and (4) creating an Employment Information Task Force to examine the staffing situation along with other issues addressed in our report. In our November 1992 report discussing opportunities to lower drug testing program costs, we recommended that the Secretary of Health and Human Services reduce the required rate of blind proficiency testing performed by agencies. 
The Department of Health and Human Services (HHS) agreed with this recommendation and, in January 1993, published proposed revisions to the mandatory guidelines that would reduce the minimum blind-sample rate agencies must maintain from 10 percent to 3 percent. According to HHS officials, this change could significantly reduce the costs associated with maintaining a blind sample program without affecting the ability to monitor a laboratory’s performance. Final issuance of the revised guidelines is expected in late 1993. In a governmentwide review of expert and consultant appointments, we found that 35 percent of the appointments were inappropriate. (GAO/GGD-91-99) In 1991, we recommended that, to improve compliance with federal requirements governing the expert and consultant appointing authority, OPM revise Federal Personnel Manual (FPM) guidance to (1) define the meaning of operating duties, (2) give examples of nonoperating duties that experts and consultants may perform, and (3) specify that experts and consultants may not do routine and continuous duties that are the responsibility of regular employees. OPM agreed with our recommendations and, on January 4, 1993, revised its guidance through FPM Letter 304-4. If the American people are to receive the high-quality government services they deserve, continuing attention needs to be given to the manner in which federal employees are managed. Improvements in the management of federal human resources can yield substantial improvements to government programs. We have made numerous recommendations to OPM and other agencies to improve the quality of the federal work force. The following are areas in which we believe further action or monitoring is needed to adequately respond to our recommendations. Although federal agencies are diverse and have different missions, they are required to use the same general performance management system. A general framework for federal performance management systems seems appropriate.
Agencies believe that, within this framework, however, they should be able to tailor specific elements to reflect such factors as their missions, organizational structure, and the way their work is done. The lack of sufficient flexibility for agencies to design their own performance management systems has created problems in managing and improving employee performance. As suggested in our February 1993 report, we believe that when the Congress considers legislation concerning the Performance Management and Recognition System, the extension of pay-for-performance to General Schedule employees, and other performance management matters, it should consider giving agencies the flexibility needed to tailor performance management systems to their own work environments. (GAO/GGD-93-57) Data on the gender, race, and national origin of applicants for federal employment—known as applicant flow data—are not adequately collected. During the early 1980s, OPM and the Equal Employment Opportunity Commission (EEOC) required agencies to collect the data using an OPM form. However, authority to use the form expired, and OPM and EEOC no longer require agencies to collect applicant flow data. In 1989, EEOC proposed a directive that would have required agencies to collect applicant flow data but, at OPM’s request, did not issue the proposed directive. In our October 1991 testimony, we recommended that OPM, in cooperation with EEOC, examine options for collecting and analyzing applicant flow data and take prompt appropriate action. (GAO/T-GGD-92-2) Under court order, OPM is collecting and analyzing applicant flow data from persons who take the Administrative Careers With America examination. OPM data show, however, that this examination produces a small percentage of all federal new hires. Additional stages toward automating the hiring process, which would cover other examinations and government hiring authorities, are planned.
The extent to which applicant flow data will be collected during these additional stages is under consideration. As discussed in our December 1992 report, OPM is not providing sufficient oversight of the federal personnel system to ensure compliance with laws, rules, and regulations. Passage of the Civil Service Reform Act of 1978 allowed the government’s personnel system to become more decentralized and increased the ability of personnel officers to respond quickly and efficiently to line managers. It also increased the risk that federal personnel requirements will be misinterpreted, unknown, or ignored by those responsible for carrying them out. This can result in legal or merit system violations, inadequate agency mission support, and miscalculated payments. Although OPM is responsible for administering and protecting the federal personnel system, reduced staff and resources have forced it to depend on agencies to shoulder much of the responsibility. This would be reasonable if appropriate personnel management evaluation standards existed and were followed and if all agencies did personnel management evaluations regularly. In our December 1992 report, however, we stated that varying degrees of personnel management evaluation activity existed among 35 of the largest federal agencies and that OPM had not issued standards by which to adequately judge quality. (GAO/GGD-93-24) We believe that, to improve oversight of the federal personnel system, OPM needs to assess standards for evaluation systems, make changes where needed, develop qualifications for evaluators, and assess the training available to them.
Acquisition Management: Implementation of the Defense Acquisition Workforce Improvement Act (GAO/NSIAD-93-129)
AID Management: EEO Issues and Protected Group Underrepresentation Require Management Attention (GAO/NSIAD-93-13)
AID Management: Strategic Management Can Help AID Face Current and Future Challenges (GAO/NSIAD-92-100)
Alleged Lobbying Activities: Office for Substance Abuse Prevention (GAO/HRD-93-100)
Apprenticeship Training: Administration, Use, and Equal Opportunity (GAO/HRD-92-43)
Aviation Safety: Limited Success Rebuilding Staff and Finalizing Aging Aircraft Plan (GAO/RCED-91-119)
Aviation Safety: Problems Persist in FAA’s Inspection Program (GAO/RCED-92-14)
Customs Service and INS: Dual Management Structure for Border Inspections Should Be Ended (GAO/GGD-93-111)
Customs Service: 1911 Act Governing Overtime Is Outdated (GAO/GGD-91-96)
Department of Education: Long-Standing Management Problems Hamper Reforms (GAO/HRD-93-47)
EEO at Justice: Progress Made but Underrepresentation Remains Widespread (GAO/GGD-91-8)
Employee Conduct Standards: Some Outside Activities Present Conflict-of-Interest Issues (GAO/GGD-92-34)
Employee Drug Testing: Opportunities Exist to Lower Drug-Testing Program Costs (GAO/GGD-93-13)
Energy Management: Using DOE Employees Can Reduce Costs for Some Support Services (GAO/RCED-91-186)
FAA Staffing: Improvements Needed in Estimating Air Traffic Controller Requirements (GAO/RCED-88-106)
Federal Affirmative Action: Better EEOC Guidance and Agency Analysis of Underrepresentation Needed (GAO/GGD-91-86)
Federal Affirmative Employment: Status of Women and Minority Representation in the Federal Workforce (GAO/T-GGD-92-2)
Federal Employees’ Compensation Act: Need to Increase Rehabilitation and Reemployment of Injured Workers (GAO/GGD-92-30)
Federal Employment: Displaced Federal Workers Can Be Helped by Expanding Existing Programs (GAO/GGD-92-86)
Federal Employment: Inquiry Into Sexual Harassment Issues at Selected VA Medical Centers (GAO/GGD-93-119)
Federal Employment: Poor Service Found at Federal Job Information Centers (GAO/GGD-92-116)
Federal Health Benefits Program: Stronger Controls Needed to Reduce Administrative Costs (GAO/GGD-92-37)
Federal Hiring: Does Veterans’ Preference Need Updating? (GAO/GGD-92-52)
Federal Labor Relations: A Program in Need of Reform (GAO/GGD-91-101)
Federal Lobbying: Federal Regulation of Lobbying Act of 1946 Is Ineffective (GAO/T-GGD-91-56)
Federal Lobbying: Lobbying the Executive Branch (GAO/T-GGD-91-70)
Federal Performance Management: Agencies Need Greater Flexibility in Designing Their Systems (GAO/GGD-93-57)
Federal Personnel Management: OPM Reliance on Agency Oversight of Personnel System Not Fully Justified (GAO/GGD-93-24)
Federal Recruiting and Hiring: Authority for Higher Starting Pay Useful but Guidance Needs Improvement (GAO/GGD-91-22)
Federal Recruiting and Hiring: Making Government Jobs Attractive to Prospective Employees (GAO/GGD-90-105)
Federal Workforce: Inappropriate Use of Experts and Consultants at Selected Civilian Agencies (GAO/GGD-91-99)
Financial Disclosure: Implementation of Statute Governing Judicial Branch Personnel (GAO/GGD-93-85)
Fraud and Abuse: Stronger Controls Needed in Federal Employees Health Benefits Program (GAO/GGD-91-95)
General Services Administration: Actions Needed to Improve Protection Against Fraud, Waste, and Mismanagement (GAO/GGD-92-98)
General Services Administration: Sustained Attention Required to Improve Performance (GAO/GGD-90-14)
Government National Mortgage Association: Greater Staffing Flexibility Needed to Improve Management (GAO/RCED-93-100)
Management of HHS: Using the Office of the Secretary to Enhance Departmental Effectiveness (GAO/HRD-90-54)
Management of VA: Improved Human Resource Planning Needed to Achieve Strategic Goals (GAO/HRD-93-10)
Managing Human Resources: Greater OPM Leadership Needed to Address Critical Challenges (GAO/GGD-89-19)
Managing IRS: Actions Needed to Assure Quality Service in the Future (GAO/GGD-89-1)
National Labor Relations Board: Action Needed to Improve Case-Processing Time at Headquarters (GAO/HRD-91-29)
National Science Foundation: Better Guidance on Employee Book Writing Could Help Avoid Ethics Problems (GAO/GGD-93-8)
Nuclear Security: DOE’s Progress on Reducing Its Security Clearance Work Load (GAO/RCED-93-183)
Personnel Practices: Propriety of Career Appointments Granted Former Political Appointees (GAO/GGD-92-51)
Personnel Practices: Retroactive Appointments and Pay Adjustments in the Executive Office of the President (GAO/GGD-93-148)
Personnel Practices: Schedule C and Other Details to the Executive Office of the President (GAO/GGD-93-14)
Personnel Security: Efforts by DOD and DOE to Eliminate Duplicative Background Investigations (GAO/RCED-93-23)
Railroad Safety: FRA’s Staffing Model Cannot Estimate Inspectors Needed for Safety Mission (GAO/RCED-91-32)
Senior Executive Service: Reasons the Candidate Development Program Has Not Produced More SES Appointees (GAO/GGD-88-47)
Social Security Administration: Stable Leadership and Better Management Needed To Improve Effectiveness (GAO/HRD-87-39)
State Department: Management Weaknesses at the U.S. Embassies in Panama, Barbados, and Grenada (GAO/NSIAD-93-190)
State Department: Management Weaknesses at the U.S. Embassy in Mexico City, Mexico (GAO/NSIAD-93-88)
State Department: Need to Ensure Recovery of Overseas Medical Expenses (GAO/NSIAD-92-277)
Tax Administration: Better Training Needed for IRS’ New Telephone Assistors (GAO/GGD-91-83)
Tax Administration: Improved Staffing of IRS’ Collection Function Would Increase Productivity (GAO/GGD-93-97)
Tax Administration: IRS Should Expand Financial Disclosure Requirements (GAO/GGD-92-117)
Tax Administration: Need for More Management Attention to IRS’ College Recruitment Program (GAO/GGD-90-32)
The Changing Workforce: Comparison of Federal and Nonfederal Work/Family Programs and Approaches (GAO/GGD-92-84)
Transition From School to Work: Linking Education and Worksite Training (GAO/HRD-91-105)
UNESCO: Status of Improvements in Management Personnel, Financial, and Budgeting Practices (GAO/NSIAD-92-172)
U.S. Attorneys: Better Models Can Reduce Resource Disparities Among Offices (GAO/GGD-91-39)
U.S. Department of Agriculture: Need for Improved Workforce Planning (GAO/RCED-90-97)
U.S. Department of Agriculture: Strengthening Management Systems to Support Secretarial Goals (GAO/RCED-91-49)
VA Health Care: Inadequate Enforcement of Federal Ethics Requirements at VA Medical Centers (GAO/HRD-93-39)
VA Health Care: Problems in Implementing Locality Pay for Nurses Not Fully Addressed (GAO/HRD-93-54)
Veterans Affairs IRM: Stronger Role Needed for Chief Information Resources Officer (GAO/IMTEC-91-51BR)
Whistleblower Protection: Agencies’ Implementation of the Whistleblower Statutes Has Been Mixed (GAO/GGD-93-66)
Whistleblower Protection: Determining Whether Reprisal Occurred Remains Difficult (GAO/GGD-93-3)
Workplace Accommodation: EPA’s Alternative Workspace Process Requires Greater Managerial Oversight (GAO/GGD-92-53)

A public and political consensus recently has emerged that fundamental changes are needed in the way the federal government manages.
Both the September 1993 report of the Vice President’s National Performance Review and the Government Performance and Results Act of 1993 drew heavily on our work in analyzing federal management and the need for change. We have a long record of work aimed at improving the management of the federal government. For example, since 1982, we have been doing broad reviews of the management processes and systems of major federal agencies. Our goals have been to improve agencies’ management and facilitate congressional oversight and action on management issues. These reviews draw upon and complement our more customary evaluations of individual programs within agencies. We have completed 48 general management reviews, covering 23 agencies, during the last 10 years. These reviews have shown that agencies need to develop strategic plans and clearly articulated goals and program objectives. Agencies also need better financial and information systems to support managers’ decisionmaking. The general management reviews have further demonstrated that agencies need to make an aggressive commitment to improving accountability and internal controls. In the past year, we continued to focus on the need for agencies to work with congressional and other stakeholders to define their missions and desired program outcomes, develop measures of program performance, and align their management and administrative support functions to support program results. We assisted the Senate Committee on Governmental Affairs in drafting and marking up the performance measurement legislation that became the Government Performance and Results Act. Since the legislation passed, we have met with agency officials in a number of forums to explain the requirements of the act, program performance measurement, and the implications for program management and audit and evaluation efforts.
We continued to closely monitor the progress of Census Bureau plans for the 2000 census and provided testimony to the Congress so that it might better understand and influence the critical early decisions that determine the cost and the accuracy of the next census. We warned in March 1993 testimony that a lack of Bureau progress in redesigning the 2000 census jeopardized the prospects of reform. We issued several reports and testimonies on economic statistical issues. We testified on the limitations of U.S. statistics on trade with Mexico. We issued a report on the status of the 1992 Agriculture and Economic Censuses and future challenges facing the 1997 Agriculture Census. We also issued a report that found no evidence that estimates of the Gross Domestic Product for the first quarter of 1991 had been manipulated for political purposes. This latter report also discussed statistical questions and issues in measuring employment, personal income, and the Gross Domestic Product. We will continue to develop a body of work on economic statistical issues and the organization and the leadership of the federal statistical system as a whole. Our management work consistently has found that the long-standing management problems confronting the federal government will require long-term and concerted attention by the senior leadership in the agencies. The following key open recommendations are from our most recently completed general management reviews. These recommendations deserve priority consideration and are described both in the section below and in the appropriate substantive issue area section. The Department of Education, charged with managing the federal investment in education and leading the long-term effort to improve education, lacks a clear management vision of how to best marshal its resources to effectively achieve its mission. The Department has no systematic processes for planning, organizing, or monitoring for results and quality improvement.
The Department’s major management systems for information, financial, and human resources management also need attention. In May 1993, we recommended a series of actions that the Department should take to address management weaknesses. Specifically, we recommended that the Secretary (1) articulate a strategic management vision demonstrating how its management infrastructure would be developed to support its missions and secretarial policy priorities; (2) adopt a strategic management process for setting clear goals and priorities, measuring progress toward those goals, and ensuring accountability for attaining them; (3) enhance management leadership throughout the Department and strengthen agency culture; and (4) create strategic visions and strategic plans for information, financial, and human resources management that are integrated with the Department’s overall strategic management process. In response to our recommendations, the Department has begun to implement a strategic planning process. Staff have also been meeting weekly to establish a framework for implementing such initiatives as the national goals legislation. The Department is refining its financial management strategic plan and redesigning its core financial management systems and has begun implementing its strategic and tactical plans for information technology resources. The Department has established a committee to address problems in data collection and dissemination and is working with the National Academy of Public Administration to determine what information is useful in accomplishing program goals and objectives. The Department has formed task forces to address recruitment issues, study the use of training funds, and make recommendations to ensure adequate support for training across the Department.
(GAO/HRD-93-47) Our general management review of the Customs Service found that Customs could not adequately ensure that it was meeting its responsibilities to combat unfair foreign trade practices or protect the public from unsafe goods because of interrelated problems in its management culture, including weak strategic planning and outdated organizational structures. In September 1992, we recommended that Customs institute a strategic management process to set priorities for its trade enforcement strategy, establish measurable performance objectives, and monitor progress toward achieving them. We also recommended that Customs evaluate the adequacy of its current headquarters organizational structure and its relationship to the new trade enforcement strategy that it would develop. In addition, we recommended that the Congress remove existing legislative provisions prohibiting Customs from planning changes to its field structure. Customs has a number of actions in process that are responsive to our recommendations. It has developed a draft 5-year strategic plan, and it plans to develop measures to assess performance against goals. It has formed a task force to establish a statistically valid approach to assessing compliance with the trade laws. The preliminary results of the task force’s work confirm our findings that Customs has a greater noncompliance problem than it realized. Customs sought and obtained congressional repeal of the legislative provisions prohibiting it from planning changes to its field structure. It now has formed a task force to develop a proposal for a new organizational structure. (GAO/GGD-92-123) In a series of reports on the management of the U.S. Department of Agriculture (USDA), we noted structural problems that, if addressed, could lead to greater efficiency, effectiveness, and cost savings. A key issue is the independence of the major component agencies of USDA, each established in response to a separate legislative mandate. 
Because these agencies have historically established their own information, financial, and human resources management systems to comply with legislative mandates, efficiencies have not been achieved departmentwide. With these systems, the Department is data rich but information poor, which makes it difficult for the Secretary to make informed decisions. Furthermore, weaknesses in financial management systems and internal and accounting controls substantially increase the risk of mismanagement, fraud, waste, and abuse in Department programs. We made a number of recommendations specific to departmental structures and strategies that would result in needed improvement. We also recommended that farm agencies’ field structures be given a major overhaul; management of cross-cutting agricultural issues be improved; management systems—financial, informational, and human resource—be strengthened; and USDA be revitalized to meet new challenges and increased responsibilities in nutrition, international trade, and resource conservation issues. Recent progress toward streamlining the USDA field structure is very encouraging, and cost savings should be significant. In September 1993, Agriculture Secretary Mike Espy announced a plan for closing some farm agency offices and consolidating farm agencies into a single farm agency. He also announced a plan to streamline headquarters. (GAO/IMTEC-93-20, GAO/RCED-91-49, GAO/RCED-91-41, and GAO/RCED-91-9) As part of our management review at the Agency for International Development (AID), we reported in March 1992 that a strategic management plan could help AID focus on an agencywide direction and address its key issues. New programs and approaches introduced by each Administrator, added to ongoing activities and congressional directives, have forced AID to address so many objectives that the agency has no clear priorities or meaningful direction. 
With the dissolution of the Warsaw Pact and the demise of the Soviet Union, as well as other dramatic global changes, the rationale for foreign aid has shifted. Without a clear vision of what AID should be doing and why, AID’s efforts to reorganize, focus its programs, plan for future work force needs, measure program performance, and implement major changes in financial and management information systems may be futile. We developed the elements of a strategic planning and management process framework for federal agencies and recommended in our report that AID establish such a process. It should enable AID to develop an agencywide direction, select effective management strategies to achieve this direction and address critical issues, assign accountability and monitor feedback, and ensure that its direction continues beyond one Administrator’s tenure. AID responded favorably to our conclusions and recommendations on strategic planning and management and proposed a two-phased approach. Although AID has begun some steps in the strategic management process, it has not yet issued a detailed action plan for implementation nor has it released the results of its initial efforts. In our final general management review report, we developed some of these issues further and reported that (1) the diffusion of the foreign aid program had constrained AID management, (2) key groups lacked consensus on AID’s goals and priorities, (3) lack of central controls had resulted in a fragmented and ineffective organization, and (4) AID had not adequately managed the changes in its overseas work force. We made numerous recommendations to the AID Administrator designed to give focus to the foreign aid program, bring AID’s management system into balance within the agency’s decentralized structure, and improve work force planning and management processes. The new AID Administrator responded favorably to our conclusions and is taking steps to implement them. 
For example, a proposed reorganization will address the need for clear responsibility and authority. (GAO/NSIAD-92-100 and GAO/NSIAD-93-106)

AID Management: Strategic Management Can Help AID Face Current and Future Challenges (GAO/NSIAD-92-100)
Asset Forfeiture: Improved Guidance Needed for Use of Shared Assets (GAO/GGD-92-115)
Asset Forfeiture: Noncash Property Should Be Consolidated Under the Marshals Service (GAO/GGD-91-97)
Customs Service and INS: Dual Management Structure for Border Inspections Should Be Ended (GAO/GGD-93-111)
Customs Service: Trade Enforcement Activities Impaired by Management Problems (GAO/GGD-92-123)
Decennial Census: 1990 Results Show Need for Fundamental Reform (GAO/GGD-92-94)
Department of Education: Long-Standing Management Problems Hamper Reforms (GAO/HRD-93-47)
Department of Energy: Better Information Resources Management Needed to Accomplish Missions (GAO/IMTEC-92-53)
Department of Energy: Management Problems Require a Long-Term Commitment to Change (GAO/RCED-93-72)
DOE Management: Better Planning Needed to Correct Records Management Problems (GAO/RCED-92-88)
Education Issues (GAO/OCG-93-18TR)
Energy Management: Contract Audit Problems Create the Potential for Fraud, Waste, and Abuse (GAO/RCED-92-41)
Energy Policy: Changes Needed to Make National Energy Planning More Useful (GAO/RCED-93-29)
Environmental Protection Agency: Protecting Human Health and the Environment Through Improved Management (GAO/RCED-88-101)
Federal Employment: Poor Service Found at Federal Job Information Centers (GAO/GGD-92-116)
Federal Personnel Management: OPM Reliance on Agency Oversight of Personnel System Not Fully Justified (GAO/GGD-93-24)
Federal Workforce: Inappropriate Use of Experts and Consultants at Selected Civilian Agencies (GAO/GGD-91-99)
Financial Management: Customs Needs to Establish Adequate Accountability and Control Over Its Resources (GAO/AFMD-92-30)
Financial Management: The U.S. Mint’s Accounting and Control Problems Need Management Attention (GAO/AFMD-89-88)
General Services Administration: Sustained Attention Required to Improve Performance (GAO/GGD-90-14)
Government Civilian Aircraft: Central Management Reforms Are Encouraging but Require Extensive Oversight (GAO/GGD-89-86)
Immigration Management: Strong Leadership and Management Reforms Needed to Address Serious Problems (GAO/GGD-91-28)
Information Management: Immigration and Naturalization Service Lacks Ready Access to Essential Data (GAO/IMTEC-90-75)
Management of HHS: Using the Office of the Secretary to Enhance Departmental Effectiveness (GAO/HRD-90-54)
Management of VA: Improved Human Resource Planning Needed to Achieve Strategic Goals (GAO/HRD-93-10)
Management Review: Follow-Up on the Management Review of the Defense Logistics Agency (GAO/NSIAD-88-107)
Managing Human Resources: Greater OPM Leadership Needed to Address Critical Challenges (GAO/GGD-89-19)
Managing IRS: Actions Needed to Assure Quality Service in the Future (GAO/GGD-89-1)
Managing IRS: Important Strides Forward Since 1988 but More Needs to Be Done (GAO/GGD-91-74)
Medicaid: Ensuring that Noncustodial Parents Provide Health Insurance Can Save Costs (GAO/HRD-92-80)
National Archives: A Review of Selected Management Issues (GAO/AFMD-89-39)
Personnel Practices: Propriety of Career Appointments Granted Former Political Appointees (GAO/GGD-92-51)
Social Security Administration: Stable Leadership and Better Management Needed To Improve Effectiveness (GAO/HRD-87-39)
Tax Administration: Congress Needs More Information on Compliance Initiative Results (GAO/GGD-92-118)
Tax Administration: Opportunities to Further Improve IRS’ Business Review Process (GAO/GGD-92-125)
Thrift Failures: Actions Needed to Stabilize RTC’s Professional Liability Program (GAO/GGD-93-105)
U.S. Department of Agriculture: Farm Agencies’ Field Structure Needs Major Overhaul (GAO/RCED-91-9)
U.S. Department of Agriculture: Improving Management of Cross-Cutting Agricultural Issues (GAO/RCED-91-41)
U.S. Department of Agriculture: Interim Report on Ways to Enhance Management (GAO/RCED-90-19)
U.S. Department of Agriculture: Need for Improved Workforce Planning (GAO/RCED-90-97)
U.S. Department of Agriculture: Strengthening Management Systems to Support Secretarial Goals (GAO/RCED-91-49)
VA Health Care: Inadequate Controls Over Scarce Medical Specialist Contracts (GAO/HRD-92-114)

Our work provided information, analyses, and recommendations to the Congress and regulatory agencies on financial services industry reform, regulation, and oversight. We analyzed (1) emerging issues, the financial health of various segments of the financial services sector, and gaps in regulatory coverage; (2) existing regulatory practices to see if they worked as intended; and (3) the continued appropriateness of federal policies governing financial institutions and markets. We continued to advise congressional leadership—through reports, testimonies, and briefings—on the implementation of key provisions of banking reform legislation that were intended to strengthen the banking system and reduce taxpayers’ exposure to losses. Our work has also helped enhance understanding and congressional oversight of the regulatory burden issue and the regulators’ efforts to address it. Through reports and recommendations on financial markets issues, we have improved the disclosure of information on government and private securities transactions and further upgraded the protection afforded investors. This work and other work designed to improve capital requirements and strengthen regulation of financial services industries have resulted in a stronger financial system and a strengthened regulatory structure to protect the American public.
In our report on credit unions, we recommended some 50 regulatory and legislative actions to ensure the future soundness of the industry, including changes to (1) maintain safe and sound insurance operations, (2) upgrade the regulation and supervision of credit unions, and (3) clarify the “common bond” characteristic distinguishing credit unions from banks and thrifts. The National Credit Union Administration has issued regulations to address many of the recommendations; there has not yet been action on our recommendations to the Congress. (GAO/GGD-91-85) We reported that the Federal Deposit Insurance Corporation’s (FDIC) Liquidation Asset Management Information System was not adequately supporting the Division of Liquidation’s needs. We recommended that, to correct the problem, FDIC develop a system that would meet the Division’s requirements for managing loan and real estate assets. FDIC is developing an improved system. (GAO/IMTEC-93-08) In a report on the U.S. government securities market, we recommended that the Congress (1) extend the Department of the Treasury’s rulemaking authority, subject to a sunset provision; (2) authorize Treasury to adopt rules as needed over the sales practices of government securities brokers and dealers; and (3) require screen brokers to make transaction information available to market participants on a real-time basis. We also recommended that the Congress extend Securities Investor Protection Corporation (SIPC) insurance coverage to customer accounts in specialized government securities dealers. Legislation is pending before the Congress that would implement these changes. (GAO/GGD-90-114) Our report on investment advisers showed that regulatory oversight of advisers was very weak. We recommended that the Congress clarify its regulatory intent for the investment advisers program by either strengthening it to meet some minimal standard or repealing requirements for federal regulation of advisers.
Legislation to strengthen the program has been introduced during past sessions of the Congress. (GAO/GGD-90-83) In a report on securities trading, we recommended that the Securities and Exchange Commission (SEC) closely monitor the development of the exchanges market linkage system. SEC is implementing such a system. (GAO/GGD-90-52) In our report on SEC’s EDGAR System, we recommended that the priority of users’ needs and system requirements be determined and that SEC set realistic project schedules. SEC is doing so. (GAO/IMTEC-92-85) In our report on securities investor protection, we found that SIPC needed to periodically review the adequacy of its funding arrangement, improve its access to information, and speed its liquidations. We recommended that SIPC and SEC make each of these improvements. (GAO/GGD-92-109) In our report on the need to regulate additional financial activities of securities firms, we expressed concern about the unregulated financial activities of affiliates and holding companies of U.S. securities firms, especially given the collapse of Drexel Burnham Lambert. We recommended that SEC determine whether the overall risks posed by the unregulated financial activities of broker-dealer holding companies and affiliates warranted additional regulation or legislative changes. (GAO/GGD-92-70) Our report on international capital standards stated that the capital of securities firms was reduced when they traded in foreign securities that SEC had not recognized as being readily marketable. We recommended that SEC consider revising its capital rule to recognize more foreign markets and more foreign securities as readily marketable under SEC’s 1975 criteria and develop a mechanism to recognize additional foreign securities and markets as they develop. (GAO/GGD-92-41) Our report on penny stocks said that SEC needed to require the National Association of Securities Dealers (NASD) to develop a plan for examining the branch offices of penny stock broker-dealers.
The plan should require all NASD districts to include a sampling plan to identify high-risk branches, establish the frequency of examinations, and determine the number of employees required to examine branches. NASD is preparing such a plan for SEC’s review. (GAO/GGD-93-59) Our testimony on market fragmentation recommended that SEC periodically monitor the effects of market fragmentation. SEC is considering this recommendation. (GAO/T-GGD-93-35)

Asset Forfeiture: Improved Guidance Needed for Use of Shared Assets (GAO/GGD-92-115)
Asset Management System: Liquidation of Failed Bank Assets Not Adequately Supported by FDIC System (GAO/IMTEC-93-8)
Bank and Thrift Failures: FDIC and RTC Could Do More to Pursue Professional Liability Claims (GAO/T-GGD-92-42)
Bank and Thrift Regulation: Improvements Needed In Examination Quality and Regulatory Structure (GAO/AFMD-93-15)
Bank Examination Quality: FDIC Examinations Do Not Fully Assess Bank Safety and Soundness (GAO/AFMD-93-12)
Bank Examination Quality: FRB Examinations and Inspections Do Not Fully Assess Bank Safety and Soundness (GAO/AFMD-93-13)
Bank Examination Quality: OCC Examinations Do Not Fully Assess Bank Safety and Soundness (GAO/AFMD-93-14)
Bank Regulation: Regulatory Impediments to Small Business Lending Should Be Removed (GAO/GGD-93-121)
Credit Unions: Reforms for Ensuring Future Soundness (GAO/GGD-91-85)
Financial Audit: Savings Association Insurance Fund’s 1991 and 1990 Financial Statements (GAO/AFMD-92-72)
Funding Foreign Bank Examinations (GAO/GGD-93-35R)
Investment Advisers: Current Level of Oversight Puts Investors at Risk (GAO/GGD-90-83)
Penny Stocks: Regulatory Actions to Reduce Potential for Fraud and Abuse (GAO/GGD-93-59)
Resolution Trust Corporation: Additional Monitoring of Basic Ordering Agreements Needed (GAO/GGD-93-107)
Resolution Trust Corporation: Affordable Multifamily Housing Program Has Improved but More Can Be Done (GAO/GGD-92-137)
Resolution Trust Corporation: A More Flexible Contracting-Out Policy Is Needed (GAO/GGD-91-136)
Resolution Trust Corporation: Assessing Portfolio Sales Using Participating Cash Flow Mortgages (GAO/GGD-92-33BR)
Resolution Trust Corporation: Asset Pooling and Marketing Practices Add Millions to Contract Costs (GAO/GGD-93-2)
Resolution Trust Corporation: Better Assurance Needed That Contractors Meet Fitness and Integrity Standards (GAO/GGD-93-127)
Resolution Trust Corporation: Controls Over Asset Valuations Do Not Ensure Reasonable Estimates (GAO/GGD-93-80)
Resolution Trust Corporation: Effectiveness of Auction Sales Should Be Demonstrated (GAO/GGD-92-7)
Resolution Trust Corporation: Loan Portfolio Pricing and Sales Process Could Be Improved (GAO/GGD-93-116)
Resolution Trust Corporation: 1992 Washington/Baltimore Auctions Planned and Managed Poorly (GAO/GGD-93-115)
Resolution Trust Corporation: Performance Assessment for 1991 (GAO/T-GGD-92-14)
Resolution Trust Corporation: Progress Under Way in Minority and Women Outreach Program for Outside Counsel (GAO/GGD-91-121)
Resolution Trust Corporation: Subcontractor Cash Management Practices Violate Policy and Reduce Income (GAO/GGD-93-7)
Resolution Trust Corporation: Survey Results on RTC’s Communication and Real Estate Marketing (GAO/GGD-92-134BR)
Resolution Trust Corporation: Timelier Action Needed to Locate Missing Asset Files (GAO/GGD-93-76)
Securities and Exchange Commission: Delays in Processing Time-Sensitive Stock Filings (GAO/GGD-93-130)
Securities and Exchange Commission: Effective Development of the EDGAR System Requires Top Management Attention (GAO/IMTEC-92-85)
Securities Firms: Assessing the Need to Regulate Additional Financial Activities (GAO/GGD-92-70)
Securities Investor Protection: The Regulatory Framework Has Minimized SIPC’s Losses (GAO/GGD-92-109)
Securities Markets: Challenges to Harmonizing International Capital Standards Remain (GAO/GGD-92-41)
Securities Markets: SEC Actions Needed to Address Market Fragmentation Issues (GAO/T-GGD-93-35)
Securities Trading: SEC Action Needed to Address National Market System Issues (GAO/GGD-90-52)
Thrift Examination Quality: OTS Examinations Do Not Fully Assess Thrift Safety and Soundness (GAO/AFMD-93-11)
Thrift Failures: Actions Needed to Stabilize RTC’s Professional Liability Program (GAO/GGD-93-105)
Unemployment Insurance: Trust Fund Reserves Inadequate (GAO/HRD-88-55)
U.S. Government Securities: More Transaction Information and Investor Protection Measures Are Needed (GAO/GGD-90-114)

Our work in this area focuses on three of the government’s largest business entities: the General Services Administration (GSA), the Resolution Trust Corporation (RTC), and the U.S. Postal Service. This area also encompasses responsibilities for many other federal entities, such as the Smithsonian Institution, the National Archives, parts of the Department of the Treasury, and the D.C. Government. Ultimately, the policies, operations, and employees of the three large entities have an impact not only on meeting their own mission goals but on the capability of other federal agencies to meet theirs. Our work has directly influenced GSA and the congressional leadership to rethink and change GSA’s role as a central management agency and a monopoly provider of services to federal agencies. Numerous reports and testimonies, which culminated in our Transition Series report, highlighted the need for GSA to manage its public buildings and supply distribution operations in a more businesslike manner and called for improved congressional oversight. Two recently issued reports drove this point home by showing that millions of dollars could be saved if GSA had more orders shipped directly to the customer agencies from suppliers rather than from its depots and eliminated repeat poor-performing vendors from the supply system.
Because of high congressional interest, we have given special attention to federal asset management and disposition activities of RTC in liquidating assets from failed savings and loan institutions. We have focused our efforts on RTC’s sales strategies, contracting activities, and the affordable housing and minority- and women-owned business programs. Through reports and frequent testimony, we have fostered considerable positive change in the organization and management of these activities. We will continue to try to improve the way RTC carries out its asset management responsibilities—specifically, we believe that it should consolidate individual agency activities. During the coming year, we will also begin to focus on the asset management and disposition activities of the Federal Deposit Insurance Corporation. Key efforts at the U.S. Postal Service have focused on the need for an appropriate response to rapidly changing electronic communication technology and an increasingly competitive market for postal services. In reports and testimony in 1992, we directed congressional and Postal Service attention to the limited progress in controlling labor costs through the automation of mail processes and the factors such as outmoded pricing policies hindering Postal Service efforts to compete effectively. We will continue to focus on the competitive challenges the Postal Service faces and the success of its efforts to improve the quality of its services, motivate employees, improve labor and management relations, and generate and protect revenue. GSA recognized the potential for increasing direct delivery and is developing a plan to test the recommendation in the marketplace. GSA also is establishing an interagency committee of supply management personnel to evaluate current depot operations and participate in developing the most cost-effective supply system. 
Although these are worthwhile first steps, no substantive action has been completed, and we thus cannot assess the adequacy of these efforts. (GAO/GGD-93-32) GSA acknowledged that it had had difficulty with vendors who had failed to perform as quality contractors. GSA concurred in the recommendations to remove poor-performing vendors from the supply system and provided information on various initiatives planned or under way to implement them. But it is too early to determine what effect these new initiatives will have on reducing GSA’s existing vulnerability to using repeat poor-performing vendors. (GAO/GGD-93-34) Although RTC requires its asset management contractors to open interest-bearing operating accounts to pay asset management expenses, there is no such requirement for property management subcontractors. Our analysis of the bank accounts managed by 14 subcontractors showed that only 1 had opened an interest-bearing account. If the other 13 had opened interest-bearing accounts, RTC would have earned approximately $111,000 in interest. If the more than 1,600 property management subcontractors have not established interest-bearing accounts, the amount of additional interest income foregone by RTC could be significant. In October 1992, we recommended that RTC revise its policy and asset management contracts to require that property management subcontractors establish interest-bearing operating accounts for RTC assets, with the interest accruing to RTC. (GAO/GGD-93-7) The Postal Reorganization Act of 1970 established criteria for setting postal rates at a time when the Postal Service had less competition than it faces now. Since passage of the 1970 act, the Postal Service’s competitive position has eroded, especially in the parcel post and overnight mail markets.
We recommended that, because of the Postal Service’s increasingly competitive environment and the need for greater pricing flexibility, the Congress reexamine the criteria used in setting postal rates to determine whether the criteria were still valid in light of changing marketplace realities. We also said that, if the Congress intended that the Postal Service compete in the parcel post and express mail markets, it should consider a policy of granting discounts to customers on the basis of their mail volumes. No action has been taken on our recommendations. (GAO/GGD-92-49) In March 1993, we recommended that the Congress authorize the introduction of a new, well-designed $1 coin and eliminate the dollar note, a move that would save the government nearly $400 million per year over 30 years. We recommended that the Congress require the Secretary of the Treasury to designate an advocate of the new coin, who would promote it and respond to public inquiries and complaints. The Congress has not yet taken action on our recommendations.
(GAO/GGD-93-56)

Data Collection: Opportunities to Improve USDA’s Farm Costs and Returns Survey (GAO/RCED-92-175)
Defense ADP: Corporate Information Management Must Overcome Major Problems (GAO/IMTEC-92-77)
Disinfectants: EPA Lacks Assurance They Work (GAO/RCED-90-139)
Environmental Enforcement: EPA Needs a Better Strategy to Manage Its Cross-Media Information (GAO/IMTEC-92-14)
Environmental Protection Agency: Plans in Limbo for Consolidated Headquarters Space (GAO/GGD-93-84)
FAA Information Resources: Agency Needs to Correct Widespread Deficiencies (GAO/IMTEC-91-43)
Federal Buildings: Actions Needed to Prevent Further Deterioration and Obsolescence (GAO/GGD-91-57)
Federal Buildings: Many Are Threatened by Earthquakes, but Limited Action Has Been Taken (GAO/GGD-92-62)
Federal Formula Programs: Outdated Population Data Used to Allocate Most Funds (GAO/HRD-90-145)
Federal Judiciary Space: Long-Range Planning Process Needs Revision (GAO/GGD-93-132)
Federal Lands: Improvements Needed in Managing Short-Term Concessioners (GAO/RCED-93-177)
Federal Lobbying: Lobbying the Executive Branch (GAO/T-GGD-91-70)
Federal Office Space: Increased Ownership Would Result in Significant Savings (GAO/GGD-90-11)
Federal Research: System for Reimbursing Universities’ Indirect Costs Should Be Reevaluated (GAO/RCED-92-203)
Foreign Direct Investment: Assessment of Commerce’s Annual Report and Data Improvement Efforts (GAO/NSIAD-92-107)
Foster Care: Children’s Experiences Linked to Various Factors; Better Data Needed (GAO/HRD-91-64)
Freedom of Information: FDA’s Program and Regulations Need Improvement (GAO/HRD-92-2)
FTS 2000 Overhead: GSA Should Reassess Contract Requirements and Improve Efficiency (GAO/IMTEC-92-59)
General Services Administration: Actions Needed to Improve Protection Against Fraud, Waste, and Mismanagement (GAO/GGD-92-98)
General Services Administration: Actions Needed to Stop Buying Supplies From Poor-Performing Vendors (GAO/GGD-93-34)
General Services Administration: Distribution Center Modernization Was Mismanaged (GAO/GGD-92-71)
General Services Administration: Efforts to Communicate About Asbestos Abatement Not Always Effective (GAO/GGD-92-28)
General Services Administration: Increased Direct Delivery of Supplies Could Save Millions (GAO/GGD-93-32)
General Services Administration: Sustained Attention Required to Improve Performance (GAO/GGD-90-14)
Government Civilian Aircraft: Central Management Reforms Are Encouraging but Require Extensive Oversight (GAO/GGD-89-86)
Gross Domestic Product: No Evidence of Manipulation in First Quarter 1991 Estimates (GAO/GGD-93-58)
GSA’s Computer Security Guidance (GAO/AIMD-93-7R)
Illegal Aliens: Despite Data Limitations, Current Methods Provide Better Population Estimates (GAO/PEMD-93-25)
Immigration Management: Strong Leadership and Management Reforms Needed to Address Serious Problems (GAO/GGD-91-28)
Managing IRS: Important Strides Forward Since 1988 but More Needs to Be Done (GAO/GGD-91-74)
Multiple Award Schedule Contracting: Changes Needed in Negotiation Objectives and Data Requirements (GAO/GGD-93-123)
National Archives: A Review of Selected Management Issues (GAO/AFMD-89-39)
Occupational Safety & Health: Assuring Accuracy in Employer Injury and Illness Records (GAO/HRD-89-23)
One-Dollar Coin: Reintroduction Could Save Millions if Properly Managed (GAO/GGD-93-56)
Paperwork Reduction: Agency Responses to Recent Court Decisions (GAO/PEMD-93-5)
Patent and Trademark Office: Key Processes for Managing Automated Patent System Development Are Weak (GAO/AIMD-93-15)
Postal Service: Service Impact of South Dakota Mail Facility Not Fully Recognized (GAO/GGD-93-62)
Radon Testing in Federal Buildings Needs Improvement and HUD’s Radon Policy Needs Strengthening (GAO/T-RCED-91-48)
Regulatory Flexibility Act: Inherent Weaknesses May Limit Its Usefulness for Small Governments (GAO/HRD-91-16)
Social Security Administration: Stable Leadership and Better Management Needed To Improve Effectiveness (GAO/HRD-87-39)
Social Security: Status and Evaluation of Agency Management Improvement Initiatives (GAO/HRD-89-42)
Tax Administration: Federal Agencies Should Report Service Payments Made to Corporations (GAO/GGD-92-130)
Telecommunications: Concerns About Competition in the Cellular Telephone Service Industry (GAO/RCED-92-220)
Treasury Automation: Automated Auction System May Not Achieve Benefits or Operate Properly (GAO/IMTEC-93-28)
U.S. Government Securities: More Transaction Information and Investor Protection Measures Are Needed (GAO/GGD-90-114)
U.S. Postal Service: Pricing Postal Services in a Competitive Environment (GAO/GGD-92-49)
Weather Forecasting: Important Issues on Automated Weather Processing System Need Resolution (GAO/IMTEC-93-12BR)
Welfare Benefits: States Need Social Security’s Death Data to Avoid Payment Error or Fraud (GAO/HRD-91-73)

Our work in this area has provided information and analyses directed at (1) enhancing efforts to ensure compliance with the country’s tax laws, (2) assessing the progress of the Internal Revenue Service (IRS) in modernizing its tax-processing system, (3) increasing collection of IRS’ accounts receivable, (4) revising the tax laws to ease taxpayer burden and promote more effective and equitable tax subsidies, and (5) improving the ability of IRS to effectively manage its tax administration activities. We suggested ways that IRS could reduce individual taxpayers’ overstatement of real estate tax deductions and improve voluntary compliance. We also recommended that IRS not abandon a long-used compliance measurement program until a suitable substitute could be developed. We recommended, and IRS agreed, that IRS start requiring corporations to report their accumulated net operating losses from past tax years to offset taxable income in other tax years. Having this information will improve revenue estimates as well as IRS’ compliance programs.
We recommended that, to reduce the growing IRS accounts receivable, IRS examine the collection methods of private companies and local governments and develop a plan to deploy its collection staff to maximize the assessment and the collection of taxes. In monitoring IRS' Tax Systems Modernization project, we generally concluded that IRS had progressed more slowly than expected in completing steps basic to successful modernization, such as planning for its business reorganization, developing detailed security and telecommunications requirements, and addressing related human resource implications. We also recommended that, because of significant slippage in implementation schedules, IRS reevaluate the utility of certain short-term computer projects. We also reported that IRS' electronic filing program benefited both IRS and taxpayers and recommended that, to broaden its use, IRS redirect its marketing focus. In addition, we recommended that IRS institute additional controls to reduce the program's vulnerability to fraud. Our study of a federal value-added tax provided the Congress with basic information on the issues and costs that would be involved in its administration. We also provided analysis and data to the Congress that were used in changing the section 936 tax credit to reduce federal revenue losses while maintaining incentives for investment in Puerto Rico. Additionally, we supplied information used by the Congress in its analysis of taxes paid by foreign-controlled corporations as opposed to those paid by U.S.-controlled corporations. For about 30 years, the Taxpayer Compliance Measurement Program (TCMP) has been IRS' primary means of gathering information on taxpayer compliance. In 1991, IRS began making plans to redesign TCMP because of concerns about TCMP's cost, burden to taxpayers, and timeliness.
We found that neither cost nor taxpayer burden justified the proposed changes to TCMP and recommended that IRS delay any changes until a satisfactory substitute could be found that met the criteria we had set out in our report. (GAO/GGD-93-52) The volume of long-term tax-exempt bonds doubled between 1968 and 1990, while the amount of foregone federal tax revenues grew proportionately, exceeding $20 billion in 1990. We recommended that IRS improve its oversight of compliance with tax-exempt bond requirements by redirecting its enforcement program to test current market compliance, making better use of information collected from bond issuers, and reassessing staffing levels and locations. We also recommended that IRS develop and implement a plan for more effective use of resources to promote voluntary compliance in the tax-exempt bond industry. (GAO/GGD-93-104) IRS audits indicate that individuals overstated their real estate tax deductions by an estimated $1.5 billion in 1988, resulting in nearly $700 million in federal income tax losses for 1988 and 1989. We recommended that, to improve voluntary compliance, IRS clearly define user fees, special assessments, and rebates in Form 1040 instructions and that it work with local governments to revise their real estate tax bills to identify user fees and special assessments as “nondeductible.” Further, we suggested that IRS auditors routinely check local records and that IRS negotiate agreements with local governments to share data on taxpayers’ real estate payments. (GAO/GGD-93-43) In October 1992, we reported on what states were doing to combat money laundering. We noted that the Internal Revenue Code required persons engaged in a trade or business who received cash payments of over $10,000 to file reports with IRS but that these reports were not available to state law enforcement agencies. We recommended that the Congress amend the IRS disclosure laws to allow states access to data on these reports.
(GAO/GGD-93-1) IRS’ electronic filing program benefits both IRS and taxpayers by reducing handling costs while allowing faster and more accurate processing of returns and refunds. IRS’ marketing of the electronic filing program has focused on attracting more preparers and transmitters, but approximately 90 percent of all individual returns were not filed electronically in 1992. We recommended that, to broaden the use of electronic filing, IRS devise a marketing plan directing appropriate attention to other segments of the population. (GAO/GGD-93-40) Electronic filing also significantly reduces the time it takes to issue a refund to a taxpayer: on average, from 5 weeks for taxpayers who file paper returns to 2 weeks for those who file electronically. Because this speed leaves IRS with as little as 2 days to investigate and stop a refund, however, the program is particularly vulnerable to fraud. We assessed IRS’ controls to prevent electronic filing fraud and recommended additional controls. (GAO/GGD-93-27)
Accounts Receivable Collection
We studied private sector and state collection techniques to determine whether IRS could make changes to improve its collection of delinquent taxes. We recommended that IRS restructure its collection program to support earlier telephone contact with delinquent taxpayers, develop detailed information on delinquent taxpayers for customized collection procedures, test the use of private collection companies, and identify ways to increase cooperation with state governments. (GAO/GGD-93-67) While IRS’ delinquent taxpayer workload has continued to grow, the productivity of collection staff has varied at different field locations. Presently, IRS’ staff allocation system does not use marginal productivity measurements to adjust staff levels at the various field locations.
We recommended that IRS develop a plan to ensure that collection staff would be allocated to maximize the assessment and collection of taxes and that it reconsider its policy against the transfer of collection staff among field offices. (GAO/GGD-93-97) In a review of taxpayer compliance in claiming the dependent exemption, we concluded that the rules for claiming dependent exemptions were too complex and burdensome for many taxpayers to comply with. We suggested that the Congress simplify the rules by substituting a residency test similar to that used in the Earned Income Tax Credit program. We also recommended that IRS resolve operational problems in its computer matching program, thereby enabling IRS to cost-effectively implement a 100-percent computer matching program to identify erroneous dependent claims. (GAO/GGD-93-60) In September 1993, we stated that the earned income tax credit had been the source of more taxpayer mistakes than any other individual income tax provision. We observed that IRS had introduced a complex schedule in an attempt to prevent ineligible taxpayers from receiving the credit but gave the credit even when the schedule lacked pertinent information. We recommended that, to eliminate the need for this complex schedule, IRS modify its tax schedule. Further, because we found that IRS credit-processing procedures were inconsistent in the way that they treated taxpayers who claimed the credit but failed to file complete information, we recommended that IRS adjust its procedures to ensure that all taxpayers receive equitable treatment. Finally, we recommended that IRS expand its efforts to inform low-income workers about the tax credit by sending explanatory notices to all nonfiling workers who had earned income. (GAO/GGD-93-145) In September 1992 testimony, we noted wasteful practices, flawed processes, and inadequate controls related to property seized and maintained by IRS’ Collection and Criminal Investigation Divisions.
We believe that storage and sale costs could be reduced and revenue increased if sales were consolidated. We recommended that IRS assess the options available for consolidation and contractor management and that its Collection Division provide guidance to its revenue officers on making cost-effective seizures. (GAO/T-GGD-92-65) At present, the Federal Tax Deposit (FTD) system collects payment and tax data separately, thereby creating problems in matching the accounting information on tax returns to payment data on FTD coupons. The Department of the Treasury is automating the FTD process. We recommended that these automation efforts be monitored to ensure that the new automated system brings together the appropriate accounting data and taxpayer payment data. (GAO/AFMD-93-40) In March 1993, we reported that taxpayer identity data that IRS routinely collects each year to process tax returns could help the Social Security Administration (SSA) identify the correct accounts to which to credit workers’ social security taxes. We recommended that IRS and SSA work together to conduct a study evaluating the extent to which SSA could better match workers’ earnings to the correct social security accounts by using IRS taxpayer data. (GAO/HRD-93-42) IRS continues to lose millions of dollars in interest payments because of delays in depositing individual income tax payments. We believe that IRS needs to aggressively seek ways to deposit tax payments faster and recommended that IRS collect data to help it develop strategies for identifying and rapidly depositing large tax payments.
(GAO/GGD-93-64)
Collecting Back Taxes: IRS Phone Operations Must Do Better (GAO/IMTEC-91-39)
Earned Income Tax Credit: Advance Payment Option Is Not Widely Known or Understood by the Public (GAO/GGD-92-26)
Identifying Options for Organizational and Business Changes at IRS (GAO/T-GGD-91-54)
International Taxation: Problems Persist in Determining Tax Effects of Intercompany Prices (GAO/GGD-92-89)
IRS Information Systems: Weaknesses Increase Risk of Fraud and Impair Reliability of Management Information (GAO/AIMD-93-34)
IRS Procurement: Software Documentation Requirement Did Not Restrict Competition (GAO/IMTEC-92-30)
Managing IRS: Actions Needed to Assure Quality Service in the Future (GAO/GGD-89-1)
Managing IRS: Important Strides Forward Since 1988 but More Needs to Be Done (GAO/GGD-91-74)
Targeted Jobs Tax Credit: Employer Actions to Recruit, Hire, and Retain Eligible Workers Vary (GAO/HRD-91-33)
Tax Administration: Approaches for Improving Independent Contractor Compliance (GAO/GGD-92-108)
Tax Administration: Benefits of a Corporate Document Matching Program Exceed the Costs (GAO/GGD-91-118)
Tax Administration: Better Training Needed for IRS’ New Telephone Assistors (GAO/GGD-91-83)
Tax Administration: Changes Are Needed to Improve Federal Agency Tax Compliance (GAO/GGD-91-45)
Tax Administration: Computer Matching Could Identify Overstated Business Deductions (GAO/GGD-93-133)
Tax Administration: Congress Needs More Information on Compliance Initiative Results (GAO/GGD-92-118)
Tax Administration: Delayed Tax Deposits Continue to Cause Lost Interest for the Government (GAO/GGD-93-64)
Tax Administration: Effectiveness of IRS’ Return Preparer Penalty Program Is Questionable (GAO/GGD-91-12)
Tax Administration: Efforts to Prevent, Identify, and Collect Employment Tax Delinquencies (GAO/GGD-91-94)
Tax Administration: Erroneous Dependent and Filing Status Claims (GAO/GGD-93-60)
Tax Administration: Erroneous Penalties for Failure to File Returns or Pay Taxes Can Be Reduced (GAO/GGD-90-80)
Tax Administration: Expanded Reporting on Seller-Financed Mortgages Can Spur Tax Compliance (GAO/GGD-91-38)
Tax Administration: Federal Agencies Should Report Service Payments Made to Corporations (GAO/GGD-92-130)
Tax Administration: Federal Contractor Tax Delinquencies and Status of the 1992 Tax Return Filing Season (GAO/T-GGD-92-23)
Tax Administration: Improved Staffing of IRS’ Collection Function Would Increase Productivity (GAO/GGD-93-97)
Tax Administration: Information Returns Can Improve Reporting of Forgiven Debts (GAO/GGD-93-42)
Tax Administration: IRS Can Improve Controls Over Electronic Filing Fraud (GAO/GGD-93-27)
Tax Administration: IRS Can Improve Its Process for Recognizing Tax-Exempt Organizations (GAO/GGD-90-55)
Tax Administration: IRS Can Improve Its Program to Find Taxpayers Who Underreport Their Income (GAO/GGD-91-49)
Tax Administration: IRS Experience Using Undercover Operations’ Proceeds to Offset Operational Expenses (GAO/GGD-91-106)
Tax Administration: IRS’ Implementation of the 1988 Taxpayer Bill of Rights (GAO/GGD-92-23)
Tax Administration: IRS’ Management of Seized Assets (GAO/T-GGD-92-65)
Tax Administration: IRS Needs More Reliable Information on Enforcement Revenues (GAO/GGD-90-85)
Tax Administration: IRS’ 1992 Filing Season Was Successful But Not Without Problems (GAO/GGD-92-132)
Tax Administration: IRS’ Plans to Measure Tax Compliance Can Be Improved (GAO/GGD-93-52)
Tax Administration: IRS Preparer Penalty Data Inaccurate and Misleading (GAO/GGD-90-92)
Tax Administration: IRS Should Expand Financial Disclosure Requirements (GAO/GGD-92-117)
Tax Administration: IRS’ System Used in Prioritizing Taxpayer Delinquencies Can Be Improved (GAO/GGD-92-6)
Tax Administration: IRS Undercover Operations Management Oversight Should Be Strengthened (GAO/GGD-92-79)
Tax Administration: Need for More Management Attention to IRS’ College Recruitment Program (GAO/GGD-90-32)
Tax Administration: Negligence and Substantial Understatement Penalties Poorly Administered (GAO/GGD-91-91)
Tax Administration: New Delinquent Tax Collection Methods for IRS (GAO/GGD-93-67)
Tax Administration: Opportunities to Further Improve IRS’ Business Review Process (GAO/GGD-92-125)
Tax Administration: Opportunities to Increase Revenue Before Expiration of the Statutory Collection Period (GAO/GGD-91-89)
Tax Administration: Opportunities to Increase the Use of Electronic Filing (GAO/GGD-93-40)
Tax Administration: Overstated Real Estate Tax Deductions Need To Be Reduced (GAO/GGD-93-43)
Tax Administration: Standards Adhered to in Issuing Revenue Ruling 90-27 (GAO/GGD-92-15)
Tax Administration: Status of Efforts to Curb Motor Fuel Tax Evasion (GAO/GGD-92-67)
Tax Policy: Allocation of Taxes Within the Life Insurance Industry (GAO/GGD-90-19)
Tax Policy and Administration: Improvements for More Effective Tax-Exempt Bond Oversight (GAO/GGD-93-104)
Tax Policy: Earned Income Tax Credit: Design and Administration Could Be Improved (GAO/GGD-93-145)
Tax Policy: Summary of GAO Work Related to Expiring Tax Provisions (GAO/T-GGD-92-11)
Tax Policy: Tax Treatment of Life Insurance and Annuity Accrued Interest (GAO/GGD-90-31)
The New Earned Income Credit Form Is Complex and May Not Be Needed (GAO/T-GGD-91-68)
The size and the persistence of the budget deficit are central to the nation’s economic future. The budget has become a focal point for many of the current policy debates. As a result, the budget and the budget process are expected to meet many demands. They are expected to provide a mechanism to reduce the deficit, to promote greater long-term economic growth, to provide policymakers with the information and choices needed to make short-term and long-term spending decisions, and to enable managers to use funds in the most efficient manner consistent with congressional priorities. Our work provides information and analysis directed at each of these challenges.
Specifically, our work (1) provides the Congress with deficit reduction analyses, options, and strategies; (2) recommends improvements in the budget presentation and in the choices provided by the budget and the budget process; (3) highlights for decisionmakers the choices between consumption and investment spending and provides criteria and analysis to help decisionmakers select effective investments; and (4) assesses the impacts of budget rules and incentives on management and examines the potential impacts of proposed budget changes on both managerial efficiency and congressional oversight. Deficit reduction is essential to our nation’s long-term economic health. The fiscal year 1993 deficit is approximately $254.9 billion, or 4 percent of the gross domestic product; net interest is approximately $199 billion; and debt held by the public is approximately $3.2 trillion. As we reported to the Congress last year, failure to make difficult choices concerning what responsibilities the federal government will carry out and how those activities will be financed will lead to increasingly large deficits, accompanied by a steady erosion of economic growth. Our analysis of the impact of the deficit on economic growth and productivity has provided the public, private policy organizations, and the Congress with the information needed to develop a perspective on recent economic experiences and on the administration’s economic plan. A provision similar to recommendations that we made to the House and Senate Budget Committees regarding budget control was applied to existing entitlements and mandatory programs by the House and was incorporated by the President in an executive order following the passage of the Omnibus Reconciliation Act of 1993.
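The fiscal year 1993 figures cited above can be checked against one another with simple arithmetic. The sketch below is our illustration, not part of the original report; the implied GDP figure is derived from the stated 4-percent deficit-to-GDP ratio rather than quoted from the text:

```python
# Fiscal year 1993 figures as cited in the text (billions of dollars, approximate).
deficit = 254.9               # FY 1993 deficit
net_interest = 199.0          # net interest on the debt
debt_held_by_public = 3200.0  # debt held by the public ($3.2 trillion)

# The text states the deficit is about 4 percent of GDP,
# which implies a GDP of roughly $6.4 trillion (our derivation).
implied_gdp = deficit / 0.04
print(f"Implied GDP: about ${implied_gdp:,.0f} billion")

# Net interest relative to the deficit and to the implied GDP.
print(f"Net interest as a share of the deficit: {net_interest / deficit:.0%}")
print(f"Net interest as a share of implied GDP: {net_interest / implied_gdp:.1%}")
print(f"Debt held by the public as a share of implied GDP: {debt_held_by_public / implied_gdp:.0%}")
```

The check confirms the figures are mutually consistent: net interest alone amounted to roughly three-quarters of that year's deficit.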
Our report on the implications of state balanced budget requirements for the federal government has contributed to congressional debate on a proposed federal balanced budget amendment and will be used again in the near future, as the issue of a federal balanced budget is expected to be reintroduced for debate. Our current work on deficit reduction options and strategies includes organizing a multiyear GAO-wide effort, initiated by the Comptroller General, to provide critical budget-related information to congressional and administrative decisionmakers. The budget presentation provides a framework of choices and therefore heavily influences budget outcomes. Budget choices could be improved if the budget structure and process highlighted and provided needed information about critical budget choices. In addition, better information about the costs of federal programs and a greater ability to link budgeting and accounting data could enhance the quality of budget decisions. Our review of linkages between budgeting and financial statements at the Department of Veterans Affairs identified problems that result when accounting systems are not structured to provide the types of data needed in the budget process. We developed and presented new financial reporting models that would result in the audit of actual budget execution data and better recognition of future budgetary claims. The Office of Management and Budget (OMB) incorporated several of our recommendations into its guidance for the preparation of financial statements. Also, the financial reporting models served as a starting point for the Federal Accounting Standards Advisory Board in its efforts to develop more relevant and useful financial reports. We informed the Congress of the factors that led to the substantial differences between estimates and actual results for receipts and outlay accounts for fiscal year 1992.
This analysis was used by the Senate Committee on the Budget in formulating the Senate Budget Resolution for fiscal years 1994-98. Our work on restructuring the way budget data are presented has focused on helping decisionmakers understand the long-term economic impact of their choices. Information provided to both the Congress and the administration has contributed to the current debate on investment and capital budgeting issues and has been incorporated into congressional capital budgeting proposals. Our work on the economic impact of the deficit identified the need to refocus the budget structure to promote a shift in the composition of federal spending from consumption to investment programs. That work on restructuring budget data has also assisted the Congress in looking at investment as a share of the budget. In addition, our current work on investment provides criteria and analysis to help decisionmakers select effective investments. For example, we have issued a framework to help the Congress choose effective federal investments. We have also established a network within GAO to work with other divisions in evaluating federal investments. We provided valuable input to OMB regarding its guidance to agencies on evaluating investment programs. Because of our efforts, OMB has requested our continuing advice and counsel as it works with the executive agencies over the next year to plan evaluation designs. The reinventing government agenda, the conceptual driver for the Vice President’s National Performance Review as well as pending performance measures bills, has focused new attention on the impact of budget rules and incentives on agency management and program delivery.
Our report to the Congress and OMB on the uses and limitations of performance measurement and budgeting, and on the potential implications of state experiences for the federal government, highlighted the need for fundamental change, especially in the area of better cost accounting systems to support performance budgeting at the federal level. Our report on performance budgeting documented implementation issues that OMB incorporated into its discussions on the implementation of the Government Performance and Results Act of 1993. We plan to focus our current work on the examination of budget formulation and execution procedures, paying particular attention to proposals discussed in the National Performance Review. We will examine the impact on agency management of executive branch rules and incentives for controlling funds, as well as review congressional techniques for control, such as rescissions, earmarking, reprogramming, and transfers. We are required by law to submit an annual report on compliance by OMB and the Congressional Budget Office (CBO) with the Budget Enforcement Act of 1990. When we reviewed the reports and presidential orders for the session of the Congress that ended January 3, 1992, we reported that OMB and CBO had substantially complied with the act, but we found several minor instances in which either OMB or CBO, or both, had not implemented certain provisions. We discussed several matters for congressional consideration for making technical corrections to the act to clarify certain areas and allow more precise implementation. The Congress considered changes to the Budget Enforcement Act of 1990 for inclusion in the Omnibus Reconciliation Act of 1993. While no changes were enacted in that act, the conference report indicated that the House intended to pursue changes to the Budget Enforcement Act of 1990 at a later date.
(GAO/AFMD-92-43)
Budget Issues: Compliance Report Required by the Budget Enforcement Act of 1990 (GAO/AFMD-92-43)
Our civil agency audits have illustrated the importance of reliable financial statements and effective systems in strengthening accountability and improving control over the federal government’s financial resources and affairs. The preparation and audit of accurate and useful financial statements depend on the quality and availability of the financial information on which they are based and, ultimately, on the adequacy of the underlying systems and related internal controls. The government’s financial systems and internal controls are woefully inadequate. Although agencies have spent billions of dollars to upgrade their financial systems, these efforts have had limited success. Many federal financial systems are weak, outdated, and inefficient and cannot routinely produce relevant, timely, and accurate data on the results and costs of operations. Since its passage 3 years ago, the Chief Financial Officers (CFO) Act of 1990 has laid the foundation for effective reform and for beginning the process of change. A mechanism for reform is now in place, which represents a major accomplishment of our work and our long-term commitment to restoring integrity to the federal government’s financial management operations. The act can achieve substantive change, but this is just the first step; reform will require strong leadership, new thinking, and sustained high-level support and oversight. While much more needs to be done, agencies are beginning to recognize and fix their extensive financial systems deficiencies, come to grips with financial personnel recruitment and retention problems, and better understand the benefits to be gained from new types of useful and relevant financial reports that are backed up by annual audits.
We have helped agency managers and others become familiar with the CFO Act’s principal features and more fully understand the actions needed to successfully implement the act. We have worked to foster adoption of appropriate financial reporting and accounting standards, promote quality financial audits and audit methods, and develop meaningful performance measures and cost systems. As part of our Transition Series, our report entitled Financial Management Issues (GAO/OCG-93-4TR) discusses the (1) widespread financial management weaknesses that exist in government today, (2) role of the CFO Act in providing a road map for reform, (3) steps needed to fully implement this act and make good financial management a reality, and (4) further necessary actions. Our civil audits, over the past several years, have also resulted in other significant improvements in federal financial management. We demonstrated, for example, through discussion and analysis of several agencies’ financial operations, the type of information that will give the Congress and the President greater insight into, and understanding of, agencies’ financial affairs and the type of information that should be addressed in agency reports and attested to by the independent auditor. We have also conducted financial audits resulting in significant improvements in the quality of agency financial information and identified serious problems in agency financial operations. Most recently, we completed the first financial audits of the Internal Revenue Service (IRS) and the Customs Service, which were done under the CFO Act’s pilot program of agency-level audited financial statements (GAO/AIMD-93-2 and GAO/AIMD-93-3). In addition, we audited the Department of Education’s Federal Family Education Loan Program’s financial statements for fiscal year 1992 (GAO/AIMD-93-4). We have also facilitated fundamental change in the government’s Financial Integrity Act program and strengthened implementation of the act. 
We have developed a program that will enable the Office of Management and Budget (OMB) and the agencies to focus on high-risk areas and to provide leadership to redirect the government’s program for addressing long-standing internal control problems. Also, we have issued a series of reports that summarize our findings and recommendations for 17 federal programs identified as highly susceptible to waste, fraud, abuse, and mismanagement. Our repeated emphasis on the need for long-range financial management planning for the government has resulted in OMB’s issuing, in April 1992, its first 5-year federal financial management status report under the CFO Act. The report outlines OMB’s approach to implementing the act and provides its vision of what constitutes good financial management. Our recommendations that delinquent nontax debt owed to agencies be collected through the IRS refund offset led the Congress, in 1993, to make this program mandatory through legislation; the program is expected to save the government billions of dollars. Also, the Congress passed legislation, which we also recommended, requiring agencies to report closed-out debts to IRS as income to the debtors. Across government, effective financial management operations and information are hampered by financial systems that are incompatible; have been allowed to deteriorate; are out of date; and cannot meet managers’ cost, performance measurement, and other financial information needs. Agencies face a great challenge in providing strong financial management, effective internal controls, and sound fiscal accountability, but the investment will pay for itself many times over in improved operations and useful information for decisionmaking. We have continually pressed agencies and the administration to improve credit management and debt collection practices.
Our report on OMB’s nine-point credit management program recommended that the Congress amend the Debt Collection Act of 1982 to require agencies, where consistent with program legislation, to use currently optional provisions of the act as well as other credit management techniques. We continue to consider strengthened legislation in the credit management area an extremely important element in improving the government’s loan programs, with billions of dollars in savings possible. (GAO/AFMD-90-12) Over the years, we have made many agency-specific recommendations to correct fundamental accounting problems, including serious internal control and accounting system weaknesses. The following recommendations deserve priority attention. In our report on IRS’ accounts receivable, we recommended certain actions to develop a strategy for distinguishing between assessments that should be included in the receivables and those that should not, to include only valid receivables in the balances reported in IRS financial statements, and to modify IRS’ methodology for assessing the collectibility of its receivables. (GAO/AFMD-93-42) In our report on the National Aeronautics and Space Administration’s internal controls and financial management systems, we recommended actions necessary to improve the reliability of contractor cost data, improve controls over the accounting for and reporting of contractor-held property, strengthen budgetary funds controls to ensure the proper use of resources, and resolve discrepancies in general ledger accounts to improve the accuracy of financial reporting to Treasury.
(GAO/AFMD-93-3) Regarding the serious deficiencies we found in the Department of State’s financial systems, which require sustained attention, we recommended that State give top priority to resolving fundamental financial problems and emphasize short-term actions, monitor long-range standardization and integration efforts, and ensure that future financial systems development and enhancement projects incorporate reporting requirements that meet users’ needs. (GAO/AFMD-93-9) We recommended actions to improve internal controls in Education’s Guaranteed Student Loan Program, including the preparation of a comprehensive plan on the role of guaranty agencies and the manner in which they are compensated. The plan should recommend changes in the program that would provide more effective incentives for guaranty agencies and lenders to help prevent defaults, improve controls over conflicting activities by guaranty agencies, and enhance federal oversight. (GAO/AFMD-93-20) In our report on the serious problems that the Bureau of Indian Affairs was experiencing in accounting for and reconciling Indian trust fund moneys totaling more than $2 billion, we recommended that the Department of the Interior seek alternative ways to reconcile the accounts and develop a proposal for reaching a satisfactory resolution of the trust fund account balances with account holders. (GAO/AFMD-92-38) We recommended a number of actions that the Commissioner of Customs could take to improve accounting for and control over resources, including receivables and property, and to collect billions of dollars in duties and fees and additional millions of dollars owed to the agency. (GAO/AFMD-92-30) Major improvements are needed to restore integrity to the federal government’s financial management operations.
Key elements of successful federal financial management reform are high-quality leadership, an effective CFO organizational structure, effective long-range planning, and preparation of meaningful and auditable financial statements. While agencies have made some progress in these areas, they can make substantive and lasting improvements by promptly taking the actions necessary to implement our recommendations and to meet the CFO Act’s requirements.

Bureau of Indian Affairs’ Efforts to Reconcile and Audit the Indian Trust Funds (GAO/T-AFMD-91-2)
Cost Accounting: Department of Energy’s Management of Contractor Pension and Health Benefit Costs (GAO/AFMD-90-13)
Credit Management: Deteriorating Credit Picture Emphasizes Importance of OMB’s Nine-Point Program (GAO/AFMD-90-12)
Federal Credit Programs: Agencies Had Serious Problems Meeting Credit Reform Accounting Requirements (GAO/AFMD-93-17)
Federal Tax Deposit System: IRS Can Improve the Federal Tax Deposit System (GAO/AFMD-93-40)
Financial Audit: Department of Veterans Affairs Financial Statements for Fiscal Years 1989 and 1988 (GAO/AFMD-91-6)
Financial Audit: EPA’s Financial Statements for Fiscal Years 1988 and 1987 (GAO/AFMD-90-20)
Financial Audit: Forest Service’s Financial Statements for Fiscal Year 1988 (GAO/AFMD-91-18)
Financial Audit: Guaranteed Student Loan Program’s Internal Controls and Structure Need Improvement (GAO/AFMD-93-20)
Financial Audit: IRS Significantly Overstated Its Accounts Receivable Balance (GAO/AFMD-93-42)
Financial Audit: Veterans Administration’s Financial Statements for Fiscal Year 1986 (GAO/AFMD-87-38)
Financial Audit: Veterans Administration’s Financial Statements for Fiscal Years 1987 and 1986 (GAO/AFMD-89-23)
Financial Audit: Veterans Administration’s Financial Statements for Fiscal Years 1988 and 1987 (GAO/AFMD-89-69)
Financial Management: Actions Needed to Ensure Effective Implementation of NASA’s Accounting System (GAO/AFMD-91-74)
Financial Management: BIA Has Made Limited Progress in Reconciling Trust Accounts and Developing a Strategic Plan (GAO/AFMD-92-38)
Financial Management: Customs Needs to Establish Adequate Accountability and Control Over Its Resources (GAO/AFMD-92-30)
Financial Management: Education’s Student Loan Program Controls Over Lenders Need Improvement (GAO/AIMD-93-33)
Financial Management: IRS Lacks Accountability Over Its ADP Resources (GAO/AIMD-93-24)
Financial Management: NASA’s Financial Reports Are Based on Unreliable Data (GAO/AFMD-93-3)
Financial Management: Opportunities for Improving VA’s Internal Accounting Controls and Procedures (GAO/AFMD-89-35)
Financial Management: Serious Deficiencies in State’s Financial Systems Require Sustained Attention (GAO/AFMD-93-9)
Financial Management: The U.S. Mint’s Accounting and Control Problems Need Management Attention (GAO/AFMD-89-88)
Immigration Management: Strong Leadership and Management Reforms Needed to Address Serious Problems (GAO/GGD-91-28)
IRS Information Systems: Weaknesses Increase Risk of Fraud and Impair Reliability of Management Information (GAO/AIMD-93-34)
Management of HHS: Using the Office of the Secretary to Enhance Departmental Effectiveness (GAO/HRD-90-54)
Managing IRS: Actions Needed to Assure Quality Service in the Future (GAO/GGD-89-1)
Managing IRS: Important Strides Forward Since 1988 but More Needs to Be Done (GAO/GGD-91-74)
Medicare: HCFA Should Improve Internal Controls Over Part B Advance Payments (GAO/HRD-91-81)
National Archives: A Review of Selected Management Issues (GAO/AFMD-89-39)
Superfund: EPA Cost Estimates Are Not Reliable or Timely (GAO/AFMD-92-40)

Government corporations provide trillions of dollars in guarantees and insurance in support of the nation’s major financial industries, including banks, savings and loan institutions, credit unions, and pension plans.
Past severe problems in the savings and loan and banking industries and the termination of large underfunded pension plans have focused the attention of policymakers and the public on the taxpayers’ significant exposure to loss through the government’s credit and insurance activities. Although the condition and the performance of both banks and thrifts have recently improved, segments of the industry remain troubled and the insurance funds need to be rebuilt to statutorily required levels. In addition, the Pension Benefit Guaranty Corporation (PBGC) faces a large and growing deficit that threatens the insurance program’s long-term viability. To take prompt action and minimize the taxpayers’ exposure and costs, the Congress and regulators need reliable and informative financial reporting that provides early warning on emerging problems. To provide the necessary information, we have focused our work on ensuring that corporate entities accurately report their financial condition and performance, maintain internal control structures that provide accountability and safeguard assets, and effectively implement the requirements of the Chief Financial Officers (CFO) Act of 1990. We have also begun evaluating whether generally accepted accounting principles and auditing standards provide an adequate basis for assessing financial condition and operating performance. We have worked closely with the Resolution Trust Corporation (RTC), the Federal Deposit Insurance Corporation (FDIC), and PBGC to improve the reliability of their financial data and internal control systems and we have seen considerable progress over the past few years. For 1992, all three corporations received an unqualified opinion on their balance sheets. Because previous financial audits highlighted deficiencies in the corporations’ recognition and measurement of loss contingencies, all three, in 1992, improved their methods used to estimate future losses associated with insurance activities. 
Our audits have also disclosed internal control weaknesses of varying significance. In general, the corporations have agreed with our findings and acted quickly to address most weaknesses. In fact, many informal recommendations for improved reporting or internal controls are implemented by management before our audit work is complete. We expect internal control problems in some areas to continue in the future, however. Our 1992 financial audit work also has provided the Congress with vital information on the corporations’ status and funding needs. Although the condition of the banking industry has improved, we warned that the Bank Insurance Fund could remain undercapitalized for a number of years and, therefore, remained vulnerable to adverse changes in economic conditions. The Fund’s reserves must be rebuilt to enable it to handle any significant level of bank failures. Like the condition of the banking industry, the condition of the savings and loan industry showed considerable improvement in 1992. But, we have reported that many failed thrifts continue to lose money and add to the taxpayers’ costs because RTC lacks sufficient funds to close them. On October 1, 1993, failed thrifts not resolved by RTC will become the responsibility of the Savings Association Insurance Fund, which is expected to have a balance of less than $1.5 billion at that time. Finally, we have warned that PBGC’s large and growing deficit threatens the insurance program’s long-term viability and have supported legislative action to strengthen the funding standards for defined benefit pension plans. We have also focused on the efforts of government corporations to implement the CFO Act. We discussed with the Office of Management and Budget and each of the 33 corporations subject to the act the requirement for management’s assessment of internal controls and have worked with each corporation to provide guidance for preparing the management report. 
In 1992, all but one government corporation was being audited, and nearly all have issued the required assessment reports. Our efforts to urge accounting and auditing standard-setters to adopt more realistic measures of financial condition and operating performance focused on asset valuation rules and reporting on internal controls. Two important standards were issued by the Financial Accounting Standards Board during 1993 affecting asset valuation—SFAS No. 114, Accounting by Creditors for Impairment of a Loan, and SFAS No. 115, Accounting for Certain Investments in Debt and Equity Securities. While these new standards are a positive step toward improving financial reporting, they fall short of fully adopting fair value accounting for these types of assets, which we believe is a more realistic basis for establishing asset values. With regard to reporting on internal controls, in 1992, we provided FDIC with detailed, comprehensive suggestions for development of regulations to implement the internal control provisions of the FDIC Improvement Act of 1991. But, we do not believe that FDIC issued sufficiently detailed regulations for proper implementation of the act. We plan to review the actual procedures employed by financial institutions and their auditors to address the act’s requirements for assessment of internal controls. In our audit of PBGC’s fiscal year 1992 financial statements, we found that PBGC made substantial progress in dealing with significant system and internal control weaknesses and in addressing key recommendations made in our earlier report on PBGC’s fiscal year 1990 financial statements. (GAO/AFMD-92-1) This progress enabled us, for the first time, to opine on PBGC’s statement of financial condition. PBGC, however, continues to face weaknesses in financial systems and internal controls. 
Our report on the fiscal year 1992 audit made additional recommendations to address weaknesses in systems integration, internal controls, financial reporting, and the assessment of contingent liabilities. PBGC is addressing these weaknesses and, as part of the fiscal year 1993 financial statement audit, we will assess its progress. (GAO/AIMD-93-21)

Our 1992 financial statement audit of RTC disclosed several internal control weaknesses that could affect RTC’s ability to safeguard its assets from unauthorized use or disposition or to ensure that its financial reports are complete and accurate. In our report on its internal controls at December 31, 1992, we recommended that RTC take actions to ensure that loss accruals are accurately calculated and that control procedures related to field office reconciliations and journal entry preparation are proper and consistently followed. RTC has agreed to address these weaknesses in 1993, and we will monitor its progress as part of our 1993 financial statement audit. (GAO/AIMD-93-50)

In our report on the Savings Association Insurance Fund’s (SAIF) 1991 financial statements, we recommended action to improve FDIC’s internal controls over its time and attendance reporting process. FDIC has worked to address the weaknesses identified in our report and anticipates resolving them through the issuance of a revised time and attendance reporting directive and increased training during 1993. As part of our 1993 SAIF financial statement audit, we will assess FDIC’s success in addressing these weaknesses. (GAO/AFMD-92-72)

Our review of bank and thrift examinations performed by FDIC, the Federal Reserve Board, the Office of the Comptroller of the Currency, and the Office of Thrift Supervision disclosed that the examinations were too limited to fully identify and determine the extent of deficiencies affecting the safety and the soundness of insured depository institutions.
We made various recommendations to the regulators in our reports to improve the scope and the quality of the examinations. These recommendations focused on the need to take a more proactive approach to the examination of banks and thrifts, including more emphasis on assessing internal controls, representative sampling of the loan portfolio, and development of a sound methodology for assessment of the adequacy of loan loss reserves. The receptiveness to our recommendations varied among the four regulatory agencies. We will continue to monitor the agencies’ progress to assess the effectiveness of changes in the examination process. (GAO/AFMD-93-11, GAO/AFMD-93-12, GAO/AFMD-93-13, and GAO/AFMD-93-14)

In our summary report on the examination review, we asked the Congress to consider the appropriateness of the present regulatory structure. Since that time, several bills have been introduced, and are still pending, which propose changes to the current regulatory structure. (GAO/AFMD-93-15)

Asset Management System: Liquidation of Failed Bank Assets Not Adequately Supported by FDIC System (GAO/IMTEC-93-8)
Bank and Thrift Regulation: Improvements Needed in Examination Quality and Regulatory Structure (GAO/AFMD-93-15)
Bank Examination Quality: FDIC Examinations Do Not Fully Assess Bank Safety and Soundness (GAO/AFMD-93-12)
Bank Examination Quality: FRB Examinations Do Not Fully Assess Bank Safety and Soundness (GAO/AFMD-93-14)
Bank Examination Quality: OCC Examinations Do Not Fully Assess Bank Safety and Soundness (GAO/AFMD-93-13)
Credit Unions: Reforms for Ensuring Future Soundness (GAO/GGD-91-85)
Employee Benefits: Improved Plan Reporting and CPA Audits Can Increase Protection Under ERISA (GAO/AFMD-92-14)
Financial Audit: Pension Benefit Guaranty Corporation’s 1992 and 1991 Financial Statements (GAO/AIMD-93-21)
Financial Audit: Resolution Trust Corporation’s Internal Controls at December 31, 1992 (GAO/AIMD-93-50)
Financial Audit: Savings Association Insurance Fund’s 1991 and 1990 Financial Statements (GAO/AFMD-92-72)
Financial Audit: System and Control Problems Further Weaken the Pension Benefit Guaranty Corporation (GAO/AFMD-92-1)
Pension Plans: Pension Benefit Guaranty Corporation Needs to Improve Premium Collections (GAO/HRD-92-103)
Premium Accounting System: Pension Benefit Guaranty Corporation System Must Be an Ongoing Priority (GAO/IMTEC-92-74)
Resolution Trust Corporation: Assessing Portfolio Sales Using Participating Cash Flow Mortgages (GAO/GGD-92-33BR)
Thrift Examination Quality: OTS Examinations Do Not Fully Assess Thrift Safety and Soundness (GAO/AFMD-93-11)

Our work has concentrated on the systems and internal controls the Department of Defense (DOD) uses both as a basis for its financial reporting and to account for and control its extensive inventories, weapon systems, equipment, and other assets. In particular, our audits illustrated the importance of reliable financial information and effective systems in strengthening accountability and improving controls over DOD’s multibillion-dollar investment in equipment and inventories. Many DOD systems are weak, outdated, and inefficient and cannot routinely produce relevant, timely, and accurate information on the results and the costs of DOD’s operations. As demonstrated by DOD’s recent “Bottom-Up” Review and the administration’s National Performance Review, as well as decisions made through the base realignment and closure process, DOD is under increasing pressure to work better and reduce costs. Specifically, DOD’s Bottom-Up Review identified about $91 billion in programmatic reductions, including reductions of over 300,000 military and civilian personnel, 2 Army divisions, 3 active Air Force fighter wings, and 55 Navy surface ships and submarines.
At the same time, the Secretary of Defense set an overriding goal of accomplishing such downsizing while maintaining the ability to meet our worldwide defense commitments and sustain our high level of military capability, thus avoiding the “hollow armies” resulting from past drawdown initiatives. As a result, more reliable and relevant financial information on the resources for which it is responsible and on the costs of operations will be increasingly important if DOD is to make well-informed tradeoff decisions on how to structure and base remaining forces and how best to administratively operate and support this new structure. Our financial statement audit work identified weaknesses in the basic controls over the accuracy of financial data and in the financial information managers need to support effective management and oversight, as well as accountability over DOD’s extensive inventories of weapons systems, equipment, and supplies. As such, our financial audits resulted in (1) significant improvements in the quality of agency financial information, (2) the identification of serious problems in DOD’s financial operations, and (3) improvements in financial accounting and reporting that will enable the Department to meet not only its own management information needs but also the reporting objectives of the Chief Financial Officers Act. In fiscal year 1992, we reported on our first comprehensive financial audit of the Army and our second such audit of the Air Force. In fiscal year 1993, we completed our second audit of the Army, and we monitored the Air Force Audit Agency’s audit of the Air Force’s financial operations—its first financial statement audit. As a result of our audits, DOD has begun to recognize the benefits of improved financial reporting backed by annual audits and has begun to fix its extensive financial systems deficiencies.
For fiscal year 1994, we plan to work with the Army Audit Agency in conducting an audit of the Army’s financial statements. In addition, we plan to conduct a similar audit of the Navy’s fiscal year 1994 financial reporting. Because the public, the Congress, and executive branch officials are increasingly concerned about the federal government’s fiscal condition, DOD has initiated far-reaching efforts to improve and modernize its financial practices, systems, and controls. These initiatives, many of which are relatively long term, are intended to eliminate redundancies by standardizing policies, procedures, and systems. Two of DOD’s major efforts aim to reform the way business functions are performed through the use of a separate fund (the Defense Business Operations Fund) and system improvements (the Corporate Information Management initiative). Our audits focused on helping to identify areas of DOD’s financial management operations that provide opportunities to achieve greater efficiency and effectiveness. For example, our work has pointed out that DOD must adopt policies that are fully consistent with businesslike practices to effectively manage the $81 billion Defense Business Operations Fund. We demonstrated that existing DOD systems used to manage and control resources must be substantially upgraded and effective new systems must be developed and implemented. Overall, our work to date shows that DOD has made limited progress in implementing its financial management improvement initiatives, consequently reducing the initiatives’ cost-saving potential. We focused on identifying whether DOD’s internal controls ensured that its financial management systems could accurately capture, process, and report on day-to-day transactions involving billions of dollars. 
Numerous recommendations have resulted in improvements in DOD’s ability to ensure the integrity and the reliability of financial information, to safeguard its assets, and to promote conformity with proper operating procedures. Although DOD has increased its ability to accurately account for and report on its financial operations and the financial status of its resources, much more remains to be done. The following are among the most important recommendations that have not yet been fully implemented. In June 1993, we reported that we could not express an opinion on the reliability of Army’s fiscal year 1992 financial statements, in part because actions had not been completed on previous recommendations. Specifically, our August 1992 report on the Army’s financial management operations and financial reporting contained recommendations for improving overall financial management by (1) enhancing internal controls and accountability over assets and resources, (2) developing reliable financial performance measures, (3) improving integration of logistics and financial systems, and (4) delineating financial management responsibilities between the Army and the Defense Finance and Accounting Service. (GAO/AFMD-92-82) In addition, our January 1993 report on depot maintenance for Army weapons and equipment contained recommendations directed at improving (1) safeguards over assets during the maintenance process and (2) records and reports on job order maintenance costs. Specifically, we found that inadequate protection of Army assets led to increased scrappage rates and maintenance costs and that weak accounting controls contributed to the inclusion of nonmaintenance costs in maintenance cost reporting. 
(GAO/AFMD-93-8)

In our August 1992 report on the Air Force’s budgeting for repairable inventory items, which constitute about $31 billion of Air Force inventories, we recommended that the Secretary of the Air Force improve the financial management and internal control systems used to develop budget estimates and to make purchase decisions for repairable items. Our recommendations focused on (1) implementing controls to help ensure the reliability of inventory data, (2) internal reporting to improve the Air Force Logistics Command’s oversight of the Air Logistics Center, (3) revising procedures to eliminate unsupported adjustments to inventory records, and (4) reporting the system and internal control weaknesses in the annual Federal Managers’ Financial Integrity Act report until corrected. (GAO/AFMD-92-47)

In February 1992, we issued our second comprehensive report on Air Force’s financial management operations. This report built upon the work and the recommendations included in our February 1990 report, which contained 26 recommendations to improve financial controls, accountability, and reporting. In our latest report, we noted that many of our previous recommendations were still appropriate and that DOD and the Air Force were relying largely on long-term DOD initiatives to improve financial systems. We recommended that (1) DOD give high priority in the short term to improving the reliability of data in the Air Force’s primary accounting systems and (2) budgetary systems be used to compile more-reliable costs of weapon systems. (GAO/AFMD-90-23 and GAO/AFMD-92-12)

A November 1992 report on Air Force depot maintenance, a Defense Business Operations Fund activity, recommended that DOD implement procedures to more accurately determine costs for billing customers, improve billing practices, and ensure compliance with Defense policies regarding operations of the Fund.
(GAO/AFMD-93-5)

In June 1993, we issued a report recommending that the Assistant Secretary of the Navy for Financial Management take actions to help prevent unmatched disbursements and to correct the $13.6 billion of unmatched disbursements contained in one major accounting system. We also recommended that this problem be reported as a material weakness in the Navy’s annual Federal Managers’ Financial Integrity Act report to DOD. (GAO/AFMD-93-21)

Our March 1993 report on Navy’s depot maintenance industrial fund, a Defense Business Operations Fund organization, showed that the fund had losses totaling over $790 million because it did not recover all costs incurred in providing customers with goods and services. The report’s recommendations focused on ensuring that (1) prices were based on realistic estimates of the costs that would be incurred in providing the goods and services to customers and (2) prices were not adjusted by factors not directly related to the costs incurred, such as the recovery of prior year losses. (GAO/AFMD-93-18)

Our report on DOD’s plans to change its method of financing over $70 billion of repairable inventory recommended that the Secretary of Defense develop a uniform policy on the ownership and the control of repairable items in the installation-level supply system. DOD is developing a policy to address our concerns. (GAO/AFMD-91-40)

In another report on the financing of repairable inventory items through the stock fund, we recommended that the Secretary of Defense improve DOD’s financial management systems to ensure that DOD accurately (1) track items being returned from customers to the stock fund and (2) bill customers for items purchased from the stock fund. The lack of accurate inventory data could result in DOD’s buying too much inventory or the wrong mix of items. To address our concerns, DOD is enhancing its financial management systems.
(GAO/AFMD-92-15)

Air Force Depot Maintenance: Improved Pricing and Financial Management Practices Needed (GAO/AFMD-93-5)
Defense Inventory: Growth in Air Force and Navy Unrequired Aircraft Parts (GAO/NSIAD-90-100)
Defense Inventory: Growth in Ship and Submarine Parts (GAO/NSIAD-90-111)
Financial Audit: Aggressive Actions Needed for Air Force to Meet Objectives of the CFO Act (GAO/AFMD-92-12)
Financial Audit: Air Force Does Not Effectively Account for Billions of Dollars of Resources (GAO/AFMD-90-23)
Financial Audit: Financial Reporting and Internal Controls at the Air Force Systems Command (GAO/AFMD-91-22)
Financial Audit: Financial Reporting and Internal Controls at the Air Logistics Centers (GAO/AFMD-91-34)
Financial Management: Air Force Systems Command Is Unaware of Status of Negative Unliquidated Obligations (GAO/AFMD-91-42)
Financial Management: Army Conventional Ammunition Production Not Effectively Accounted for or Controlled (GAO/AFMD-92-57)
Financial Management: Army Lacks Accountability and Control Over Equipment (GAO/AIMD-93-31)
Financial Management: Defense’s System for Army Military Payroll Is Unreliable (GAO/AIMD-93-32)
Financial Management: DOD Faces Implementation Problems in Stock Funding Repairable Inventory Items (GAO/AFMD-92-15)
Financial Management: Immediate Actions Needed to Improve Army Financial Operations and Controls (GAO/AFMD-92-82)
Financial Management: Inadequate Accounting and System Project Controls at AID (GAO/AFMD-93-19)
Financial Management: Internal Control Weaknesses Impede Air Force’s Budgeting for Repairable Items (GAO/AFMD-92-47)
Financial Management: Navy Industrial Fund Has Not Recovered Costs (GAO/AFMD-93-18)
Financial Management: Poor Internal Control Has Led to Increased Maintenance Costs and Deterioration of Equipment (GAO/AFMD-93-8)
Financial Management: Problems in Accounting for DOD Disbursements (GAO/AFMD-91-9)
Financial Management: Uniform Policies Needed on DOD Financing of Repairable Inventory Items (GAO/AFMD-91-40)
Financial Management: Weak Financial Accounting Controls Leave Commodity Command Assets Vulnerable to Misuse (GAO/AFMD-92-61)
Financial Systems: Weaknesses Impede Initiatives to Reduce Air Force Operations and Support Costs (GAO/NSIAD-93-70)
Management Review: Follow-Up on the Management Review of the Defense Logistics Agency (GAO/NSIAD-88-107)

How information resources—hardware, software, data, and people—are acquired and managed is critical to nearly every government program’s mission—from exploring space, to collecting taxes, to providing social security benefits. The government spends about $20 billion annually acquiring the thousands of telecommunications and computer systems that support these missions. Responsibility for information management and technology is shared by the central and individual executive agencies. The central agencies—the Office of Management and Budget, the General Services Administration, and the Department of Commerce’s National Institute of Standards and Technology—formulate policies, procedures, and standards and monitor individual agency information resource management activities. Individual agencies are responsible for acquiring, managing, and using their information resources effectively and efficiently. We addressed information management and technology issues both governmentwide and as they affect specific agencies. Governmentwide, our recommendations dealing with the purchase of new computers and information systems have had particular impact. We disclosed that civilian agency modernization projects were being implemented before agencies reassessed, simplified, and streamlined their business practices. On the basis of our recommendations, agencies are starting to focus their modernization efforts on the strategic uses of technology for achieving their mission.
The central agencies also are taking a more active role in helping individual agencies to develop business plans based on mission goals, analysis of business practices, and long-range information technology planning.

In our agency-specific work, we reviewed issues related to the acquisition and the management of computer and telecommunications resources, including development of information systems. Our reviews covered such areas as asset management, child support enforcement, pension benefits, welfare programs, internal revenue, health care management, pesticide registration, weather forecasting, and crop insurance. We also evaluated agency management of information for increased program effectiveness.

Presently, all key open governmentwide recommendations are being addressed. The following key open agency-specific recommendation relates to the Department of Defense’s corporate information management strategy.

Our agency-specific reports fall into substantive areas that concern other issue areas. For example, a report on the Federal Deposit Insurance Corporation’s (FDIC) automatic data processing programs supplements and complements the work of the financial institutions and markets issue area. Because our reports address specific programs, they are included in the appropriate issue area sections of this report. For example, our report on FDIC’s Asset Management System is found in the section on “Financial Institutions and Markets.”

The Department of Defense has made little progress in implementing the recommendations in our September 1992 report. Defense is at a turning point regarding CIM. The new leadership of the incoming administration is reassessing the overall strategy of the CIM initiative. It is unclear what this reassessment will encompass and when it will be completed. As one of the largest information management initiatives ever undertaken, CIM has great promise—not only for Defense but for other federal agencies and the nation as well.
By improving business operations with fewer resources, Defense can improve its war-fighting capabilities while shifting scarce resources to other national needs. Implementing CIM, however, requires a major cultural change in managing information resources, a change Defense is finding difficult to make. Therefore, we believe it is critical for the Secretary of Defense to take an active role in implementing CIM. We are leaving the recommendations in our report open until we can determine if Defense’s reassessment of CIM adequately addresses our concerns. (GAO/IMTEC-92-77)

ADP Procurement: Prompt Navy Action Can Reduce Risks to SNAP III Implementation (GAO/IMTEC-92-69)
Air Force ADP: Lax Contract Oversight Led to Waste and Reduced Competition (GAO/IMTEC-93-3)
Air Traffic Control: FAA Needs to Justify Further Investment in Its Oceanic Display System (GAO/IMTEC-92-80)
Asset Management System: Liquidation of Failed Bank Assets Not Adequately Supported by FDIC System (GAO/IMTEC-93-8)
Collecting Back Taxes: IRS Phone Operations Must Do Better (GAO/IMTEC-91-39)
Composite Health Care System: Outpatient Capability Is Nearly Ready for Worldwide Deployment (GAO/IMTEC-93-11)
Crop Insurance Program: Nationwide Computer Acquisition Is Inappropriate at This Time (GAO/IMTEC-93-20)
Defense ADP: Corporate Information Management Must Overcome Major Problems (GAO/IMTEC-92-77)
Defense Communications: Defense’s Program to Improve Telecommunications Management Is at Risk (GAO/IMTEC-93-15)
Department of Energy: Better Information Resources Management Needed to Accomplish Missions (GAO/IMTEC-92-53)
Embedded Computer Systems: Software Development Problems Delay the Army’s Fire Direction Data Manager (GAO/IMTEC-92-32)
Energy Information: Department of Energy Security Program Needs Effective Information Systems (GAO/IMTEC-92-10)
Environmental Enforcement: EPA Needs a Better Strategy to Manage Its Cross-Media Information (GAO/IMTEC-92-14)
Environmental Enforcement: Penalties May Not Recover Economic Benefits Gained by Violators (GAO/RCED-91-166)
Environmental Protection: EPA’s Plans to Improve Longstanding Information Resources Management Problems (GAO/AIMD-93-8)
FAA Information Resources: Agency Needs to Correct Widespread Deficiencies (GAO/IMTEC-91-43)
FTS 2000 Overhead: GSA Should Reassess Contract Requirements and Improve Efficiency (GAO/IMTEC-92-59)
GSA’s Computer Security Guidance (GAO/AIMD-93-7R)
Health Information Systems: National Practitioner Data Bank Continues to Experience Problems (GAO/IMTEC-93-1)
High Performance Computing: Advanced Research Projects Agency Should Do More to Foster Program Goals (GAO/IMTEC-93-24)
Information Management: Immigration and Naturalization Service Lacks Ready Access to Essential Data (GAO/IMTEC-90-75)
Information Resources Management: Initial Steps Taken But More Improvements Needed in AID’s IRM Program (GAO/IMTEC-92-64)
IRS Procurement: Software Documentation Requirement Did Not Restrict Competition (GAO/IMTEC-92-30)
Justice Automation: Tighter Computer Security Needed (GAO/IMTEC-90-69)
Medical ADP Systems: Automated Medical Records Hold Promise to Improve Patient Care (GAO/IMTEC-91-5)
Patent and Trademark Office: Key Processes for Managing Automated Patent System Development Are Weak (GAO/AIMD-93-15)
Pesticides: Information Systems Improvements Essential for EPA’s Reregistration Efforts (GAO/IMTEC-93-5)
Premium Accounting System: Pension Benefit Guaranty Corporation System Must Be an Ongoing Priority (GAO/IMTEC-92-74)
Securities and Exchange Commission: Effective Development of the EDGAR System Requires Top Management Attention (GAO/IMTEC-92-85)
Software Tools: Defense Is Not Ready to Implement I-CASE Departmentwide (GAO/IMTEC-93-27)
SSA Computers: Long-Range Vision Needed to Guide Future Systems Modernization Efforts (GAO/IMTEC-91-44)
Treasury Automation: Automated Auction System May Not Achieve Benefits or Operate Properly (GAO/IMTEC-93-28)
Veterans Affairs IRM: Stronger Role Needed for Chief Information Resources Officer (GAO/IMTEC-91-51BR)
Veterans Benefits: Acquisition of Information Resources for Modernization Is Premature (GAO/IMTEC-93-6)
Weather Forecasting: Important Issues on Automated Weather Processing System Need Resolution (GAO/IMTEC-93-12BR)
Welfare Programs: Ineffective Federal Oversight Permits Costly Automated System Problems (GAO/IMTEC-92-29)

Auditing is an important control to help ensure that federal programs and operations are properly carried out and potential problems are identified and resolved promptly and effectively. Auditing also helps to ensure a strong system of governance and accountability in American corporations and institutions and helps to protect federal deposit insurance funds, stockholders and creditors, and taxpayers from exposure to unanticipated risks and losses. With the growing complexity of the federal government and the problems it faces, including severe fiscal strains, an independent and reliable structure must be in place to ensure adequate audit coverage of federal programs and operations, as well as public sector activities of interest to the government. Our work has focused on improving the quality and the effectiveness of audits of federal expenditures, ensuring the quality of audits performed by nonfederal auditors, strengthening corporate governance and accountability, and improving the financial management of legislative branch operations through regular financial statement audits. In our oversight of the Inspectors General (IG) and other audit organizations, our reviews resulted in improved audit coverage, resource usage, and quality of work, as well as the removal of impairments to IG independence and authority. In addition, our audit resolution work prompted the Office of Management and Budget (OMB) to begin revising its audit followup guidance to ensure that agencies take action on IG audit recommendations.
In helping to strengthen corporate governance and accountability, our work on bank audit committees contributed to the passage of legislation requiring independent audit committees for all federally insured depository institutions. In recent years, many changes have taken place in the public accounting profession, and our reports on the quality of audits by certified public accountants contributed to the impetus for the changes. As a result of our work on the audits of private employee benefit plans, legislation was introduced in January 1993 that will enhance the value of plan audits. Additional legislation is being drafted by the Department of Labor to encourage better plan management and to better protect the interests of plan participants and the government. Regarding our legislative branch work, our financial statement audits of several legislative entities (such as the House and Senate Sergeants at Arms) and other legislative programs and operations (such as the Congressional Award Program and the Library of Congress) resulted in a number of improvements in their internal controls and accounting systems. Over the years, federal managers have not paid adequate attention to implementing IG recommendations, which has rendered audit resources less effective and has resulted in losses in federal programs and operations. The audit resolution problems are attributable in part to outdated guidance in OMB Circular A-50, “Audit Followup,” on closing audit recommendations. We have recommended that OMB revise the circular to require agencies to close audit recommendations and provide the necessary documentation to verify the closure when (1) agreed-upon corrective actions have been implemented, (2) alternative actions have been taken that essentially meet the auditors’ intent, or (3) circumstances have changed and the recommendations are no longer valid. 
(GAO/AFMD-92-16) In our continuing review of the quality of audits by nonfederal auditors, we identified weaknesses in the audits of private employee benefit plans so serious that their reliability and usefulness were questionable. We have recommended that the Congress amend the Employee Retirement Income Security Act (ERISA) to (1) require reporting on the adequacy of internal controls by plan administrators and auditors, (2) provide for direct reporting to the Department of Labor of fraud and serious ERISA violations, and (3) require peer review of plan auditors. (GAO/AFMD-92-14) During the past several years, well-publicized cases of financial irregularities in many companies and financial institutions (such as those in the savings and loan industry) have raised serious questions about corporate accountability, the effectiveness of corporate governance and regulation, and the adequacy of audit requirements. We have supported congressional efforts to amend banking laws and securities laws to increase both management’s and the auditor’s responsibilities for detecting and reporting irregularities. We have recommended that the Securities and Exchange Commission (1) ensure that managers of public companies publicly report on their responsibilities for financial statements and internal controls, (2) require the auditor to review and publicly report on the management report, and (3) adopt a requirement for public companies to establish audit committees. (GAO/AFMD-89-38) In the first-ever attempt to audit the financial operations of the Library of Congress, we found that the Library’s financial and accounting records were in such poor condition that we could not audit significant account balances. Because of weaknesses in the Library’s financial management operations, its ability to account for and control its collection of an estimated 89 million books and other materials was limited. 
We recommended that, to help the Library bring about lasting improvements in its internal controls, the Librarian of Congress (1) establish accounting and internal control policies and procedures to ensure compliance with applicable accounting standards and (2) develop an overall financial management improvement plan. (GAO/AFMD-91-13)

Air Force Audit Agency: Opportunities to Improve Internal Auditing (GAO/AFMD-90-16)
Audit Resolution: Strengthened Guidance Needed to Ensure Effective Action (GAO/AFMD-92-16)
CPA Audit Quality: Status of Actions Taken to Improve Auditing and Financial Reporting of Public Companies (GAO/AFMD-89-38)
Employee Benefits: Improved Plan Reporting and CPA Audits Can Increase Protection Under ERISA (GAO/AFMD-92-14)
Financial Audit: First Audit of the Library of Congress Discloses Significant Problems (GAO/AFMD-91-13)
Single Audit Act: Single Audit Quality Has Improved but Some Implementation Problems Remain (GAO/AFMD-89-72)

Congressional committees require evaluative information on federal government programs and issues, and they look to the congressional agencies, including GAO, to provide it. Sound program evaluations are also valuable tools for better management in government. To help improve the quality of evaluative information available to the Congress and to federal agencies, we evaluate various executive agencies’ programs, usually at the request of congressional committees. These studies generally fall into one of four areas: (1) determining the intended and unintended effects of an existing program, (2) identifying the potential effects of a proposed program, (3) assessing the quality of information available in a program area for use in congressional decisionmaking, or (4) reviewing executive branch evaluation functions and studies.
In many evaluation reports, we make recommendations to agency officials to (1) correct problems identified in existing programs, (2) increase their awareness of potential effects of proposed programs, (3) improve the quality of information they are collecting and analyzing, and (4) develop more fully their own capability to perform high-quality program evaluation. Thus, while these studies are often used initially by the Congress in its deliberations on specific programs, they are also intended to bring about agency improvements. In some cases, our program evaluations have provided demonstrations of novel or substantially improved designs and methodologies for measuring the extent of program effectiveness or answering evaluation questions of general interest. Thus, the results of our work have frequently helped others in the evaluation field perform their work. Because our program evaluation and methodology studies concern other issue areas, the studies are also discussed in the appropriate issue area sections of this publication. For example, our report on student achievement standards is also discussed in the section entitled “Education and Employment.” The Intermodal Surface Transportation Efficiency Act (ISTEA) of 1991 emphasized the linkage between traffic congestion and urban air pollution and the need to address both problems jointly and through local planning efforts. Our 1992 report identified several obstacles to achieving ISTEA’s goals in these areas and recommended that the Department of Transportation report to the Congress midway through the reauthorization cycle (FY 1995) on its activities to overcome these obstacles. We noted in particular the need to perform and widely disseminate evaluations of the effectiveness of transportation demand management measures in reducing both congestion and pollution. (GAO/PEMD-93-2) On the basis of our series of eight classified reports on the U.S. 
strategic nuclear triad, we made five specific recommendations to the Department of Defense (DOD) in our June 10, 1993, unclassified testimony to the Senate Governmental Affairs Committee. To date, DOD has not acted favorably or conclusively on four of those recommendations, as follows: (1) that procurement of the B-2 bomber be terminated with the completion of 15 aircraft, rather than 20 as requested by the Air Force; (2) that additional operational testing of the B-1B bomber be done to verify essential improvements in reliability and electronic countermeasures and to remove remaining uncertainties concerning range performance; (3) that the cost-effectiveness of the Air Force’s proposed service life extension of the Minuteman III intercontinental ballistic missile be the subject of additional, rigorous review; and (4) that the Navy should continue flight testing for the D-5 submarine-launched ballistic missile at an annual rate sufficient to maintain an understanding of actual missile performance at a high level of confidence. (GAO/T-PEMD-93-5) Our recent review of the three major sources of information on use of illegal drugs showed that the nation lacked good evidence on which to gauge progress in drug control. Surveys of households and high school students do not cover the populations at highest risk and, for those who are surveyed, self-reports of drug use are questionable. We recommended that the Secretary of Health and Human Services make new efforts to validate the commonly used self-report surveys, that the Congress change current laws to require less frequent collection of data on the general population, and that the Secretary of Health and Human Services expand special studies of high-risk groups to fill the gaps in current surveys. (GAO/PEMD-93-18) After reviewing standards set to interpret students’ performance on the National Assessment of Educational Progress (NAEP), we found many technical flaws that made the results of doubtful validity.
We recommended that the new standards be withdrawn by the NAEP governing board, that they not be used in reporting NAEP results, and that the governing board also take a number of specific steps to ensure that it does not adopt technically unsound policies or approve technically flawed results. (GAO/PEMD-93-12) Our 8-year followup evaluation, using unique computer-matched wage and service data, showed there were only modest long-term outcomes of the state-federal program that provided services to help persons with disabilities become employed and more independent and be integrated into the community. We also found unexplained disparities in the extent of services purchased for clients of different races. We recommended that the Secretary of Education find out why these disparities existed; strengthen evaluation in a number of ways; and take steps to establish the National Commission on Rehabilitation Services, authorized in 1992 to review the program in depth, before the next reauthorization. 
(GAO/PEMD-93-19)

Adequacy of the Administration on Aging’s Provision of Technical Assistance for Targeting Services Under the Older Americans Act (GAO/T-PEMD-91-3)
Adolescent Drug Use Prevention: Common Features of Promising Community Programs (GAO/PEMD-92-2)
Drug Abuse Research: Federal Funding and Future Needs (GAO/PEMD-92-5)
Drug Use Measurement: Strengths, Limitations, and Recommendations for Improvement (GAO/PEMD-93-18)
Educational Achievement Standards: NAGB’s Approach Yields Misleading Interpretations (GAO/PEMD-93-12)
Hazardous Waste Exports: Data Quality and Collection Problems Weaken EPA Enforcement Activities (GAO/PEMD-93-24)
Illegal Aliens: Despite Data Limitations, Current Methods Provide Better Population Estimates (GAO/PEMD-93-25)
Medical Technology: For Some Cardiac Pacemaker Leads, the Public Health Risks Are Still High (GAO/PEMD-92-20)
Medical Technology: Quality Assurance Needs Stronger Management Emphasis and Higher Priority (GAO/PEMD-92-10)
Medical Technology: Quality Assurance Systems and Global Markets (GAO/PEMD-93-15)
Paperwork Reduction: Agency Responses to Recent Court Decisions (GAO/PEMD-93-5)
Pesticides: A Comparative Study of Industrialized Nations’ Regulatory Systems (GAO/PEMD-93-17)
Public Health Service: Evaluation Set-Aside Has Not Realized Its Potential to Inform the Congress (GAO/PEMD-93-13)
Student Testing: Current Extent and Expenditures, With Cost Estimates for a National Examination (GAO/PEMD-93-8)
The U.S. Nuclear Triad: GAO’s Evaluation of the Strategic Modernization Program (GAO/T-PEMD-93-5)
Traffic Congestion: Activities to Reduce Travel Demand and Air Pollution Are Not Widely Implemented (GAO/PEMD-93-2)
Trauma Care Reimbursement: Poor Understanding of Losses and Coverage for Undocumented Aliens (GAO/PEMD-93-1)
Vocational Rehabilitation: Evidence for Federal Program’s Effectiveness Is Mixed (GAO/PEMD-93-19)

This electronic edition contains the details for GAO’s open recommendations.
This PC-based software lets you use several text search and retrieval options to find either summaries of key open recommendations or the details of open recommendations.

To load the software on your hard drive (7.5MB required):
1. Place program disk 1 in your floppy disk drive.
2. Type the drive designation of your floppy drive and the word “INSTALL”. For example, type “B:INSTALL”. Press <Enter>.
3. Follow the instructions on the screen.
4. If you are updating a previous version, the install program will replace the old files with new ones.
Note: Disk 2 of 2 is the “LAST” disk.

To run the program:
1. Change to the drive and subdirectory where the software has been loaded. Type “C:”. Press <Enter>. Type “CD\OPENREC”. Press <Enter>. Type “OR”. Press <Enter>.
2. When the Introductory Menu is displayed, highlight an option to learn more about this program. Press <Enter>.

You may search for open recommendations by using report number, title, date, name of a federal entity, congressional committee, name of GAO’s point of contact, or any other word or phrase that may appear in the report. To perform this search, use the numerous options provided on several search menus. Most menus have similar options and require the following general steps:
1. Start at the Introductory Menu, highlight “MAIN”. Press <Enter>.
2. At the Main Search Menu, highlight the option to locate the information you want. (See Search Options.) Press <Enter>.
3. At the next menu, indicate how much information you want to extract and where you want the output to go. (Figure 1 shows the menu screen.) Press <Enter>.
4. To perform the search, type a word or phrase. Press <Enter>. The most recent report is listed first.
5. To review the open recommendations for a specific report shown on the list of titles, highlight the “REPORT NUMBER”. Press <Enter>.
6. Use the <PgDn> and arrow keys to scroll through the open recommendations and related information.
7. When using special lists to narrow a search (see Search Options), you perform the search (in step 4 above) by first displaying the special list. Type a word or phrase to get a subset of relevant terms or type “ALL” to get the entire list of terms. Press <Enter>. Second, highlight your desired term on the list. Press <Enter>.
8. To rerun your last search after selecting another output option, press <Enter> without entering new search words.

Notes on the output menu:
1. Menu option 2 takes longer but will give you a count of the reports meeting your search criteria.
2. Menu option 4 directly provides the open recommendations and related information for the most recent report that meets your search criteria. Additional reports will follow in the order they were issued.
3. If chosen, output can be sent to the printer at LPT1.
4. If chosen, output can be sent as ASCII text to the disk file that you designate.

The Main Search Menu includes six options to help you narrow or expedite your search:
- Allows you to locate open recommendations using a report number, title, date, job code, or any other word or phrase that may appear in the report. This includes “!OPTIONS”, which provides a way to obtain custom askSam queries and reports (for those who know the askSam programming language).
- Allows you to identify the impact of GAO’s work and key open recommendations that deserve priority. You may search using key words or a table of contents.
- Allows you to locate open recommendations using terms indexed to major subjects in GAO reports.
- Allows you to locate open recommendations by the congressional committee or subcommittee having primary interest in or jurisdiction over subjects discussed in GAO reports.
- Allows you to locate open recommendations that were addressed to a specific executive department, agency, or congressional committee.
- Allows you to locate open recommendations by the GAO executive who is the point of contact for questions about reports and recommendations.
To refine a search, you can use the following:
1. To look for a phrase in the exact order, enclose your search words in “”.
2. You can use wildcard characters to substitute for a single character or a group of characters. “*” can represent a group of characters. For example, use “ACCOUNT*” to get ACCOUNTING, ACCOUNTANT, and ACCOUNTS. “?” can represent a single character. For example, use “F-1?” to get F-15, F-16, and F-18.
3. Combine search words or phrases with connectors—“{and}”; “{or}”; “{not}”—to narrow or broaden a search.

Other keys:
1. The Escape key (i.e., “Esc”) may be used at any time to cancel a search or back up to a previous menu.
2. To quit this program and return to the DOS prompt (at any menu option), highlight “QUIT”. Press <Enter>.

For help:
1. At the Introductory Menu, highlight “How to use this software”. Press <Enter>.
2. On any menu screen, highlight “!HELP”. Press <Enter>.
3. Technical support is available from:
Lawson “Rick” Gist, Jr.
Assistant Director
GAO, Office of Policy
Voice: (202) 512-4478
Fax: (202) 512-4844

The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent.

U.S. General Accounting Office
P.O. Box 6015
Gaithersburg, MD 20884-6015

Or visit:
Room 1000
700 4th St. NW (corner of 4th and G Sts. NW)
U.S. General Accounting Office
Washington, DC

Orders may also be placed by calling (202) 512-6000 or by using fax number (301) 258-4066.
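The wildcard rules described for the search software follow shell-style globbing conventions. As an illustrative analogy only (the askSam search engine itself is not available; Python, its standard fnmatch module, and the sample terms below are assumptions for demonstration, not part of the original software), the same patterns can be checked like this:

```python
import fnmatch

# Hypothetical terms standing in for text the software would search.
terms = ["ACCOUNTING", "ACCOUNTANT", "ACCOUNTS", "ACCORD",
         "F-15", "F-16", "F-18", "F-104"]

# "*" matches any group of characters, so "ACCOUNT*" picks up
# ACCOUNTING, ACCOUNTANT, and ACCOUNTS but not ACCORD.
print(fnmatch.filter(terms, "ACCOUNT*"))

# "?" matches exactly one character, so "F-1?" matches F-15, F-16,
# and F-18 but excludes the five-character F-104.
print(fnmatch.filter(terms, "F-1?"))
```

The “{and}”, “{or}”, and “{not}” connectors have no direct fnmatch equivalent; they would correspond to combining the boolean results of individual pattern matches.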
GAO reported on the conclusions and recommendations resulting from its audits and other reviews of federal departments and agencies. Information is presented on recommendations in the areas of national security, international affairs, natural resources, economic development, human resource management, justice, general government, and financial and information management. The recommendations are submitted for use in congressional review of budget requests for fiscal year 1995.
The nation’s highway transportation system includes infrastructure, vehicles and users, equipment, facilities, and control and communications. Infrastructure, the “fixed” aspect of the highway transportation system, includes roads, bridges, tunnels, and terminals, where travelers and freight can enter and leave the system. Many vehicle types operate on the highway system, moving both people and freight. Highway system users include commercial vehicle and private passenger drivers, cargo shippers and receivers, passengers, and pedestrians. Equipment refers to items such as machinery, cones, and barriers and bollards used to create standoff distance. Facilities include terminals, warehouses, depots, and other transportation-related buildings that support the highway system. Finally, control and communications are methods for controlling vehicles, infrastructure, and the entire transportation network. These items include traffic lights, message signs, call boxes, ramp metering, closed-circuit television, and speed monitoring systems. Although such security enhancements are typically funded by the asset owner, the Federal Emergency Management Agency (FEMA) has provided funding to secure highway infrastructure through its grant programs. DHS funding for highway infrastructure security consists of a general appropriation to TSA for its entire surface transportation security program, which includes commercial vehicles and highway infrastructure, rail and mass transit, and pipeline security, and appropriations to FEMA for its Homeland Security Grant Program and Infrastructure Protection Program. Annual appropriations to TSA for its surface transportation security program were $36 million in fiscal year 2006, $37.2 million in fiscal year 2007, $46.6 million in fiscal year 2008, and $49.6 million in fiscal year 2009.
Total FEMA funding available under the two principal grant programs increased from approximately $2 billion to over $2.5 billion from fiscal years 2006 through 2008. Protecting the nation’s highway infrastructure can be complicated due to the number of stakeholders involved. As illustrated in figure 1, numerous entities at the federal, state, and local levels, including public and private sector owners and operators, play a key role in highway infrastructure security. Highway infrastructure in the United States is owned and operated by a combination of federal entities, states, counties, municipalities, tribal authorities, private enterprise, and groupings of these entities. Although state and local governments own, operate, and have law enforcement jurisdiction over most of the highway infrastructure in the United States, bridge and turnpike authorities operate some major infrastructure, and there are a few privately owned bridges, tunnels, and roadways. DHS is the cabinet-level department with primary responsibility for helping to secure highway infrastructure. Within DHS, TSA has primary responsibility for securing all modes of transportation, including highway infrastructure, with support from other DHS entities, including the National Protection and Programs Directorate (NPPD), USCG, the Science and Technology Directorate, FEMA, and U.S. Customs and Border Protection (CBP). For example, as part of its mission, CBP is responsible for preventing people or goods that could threaten infrastructure from entering ports of entry. Although TSA is the lead agency responsible for the security of highway infrastructure, DOT, through FHWA, provides highway transportation expertise to assist TSA with respect to securing highway infrastructure. NPPD, through IP, is responsible for coordinating efforts to protect the nation’s most critical assets across all critical infrastructure and key resources, which includes surface transportation.
Within the transportation sector, IP works with TSA to identify nationally critical highway assets. USCG also conducts activities in support of highway infrastructure protection, such as identifying potential vulnerabilities of individual highway assets that have a maritime nexus or that affect the marine transportation system, such as bridges over navigable waterways. The Science and Technology Directorate is responsible for advising the Secretary on research and development efforts to support the Department’s mission and conducts research to identify and mitigate vulnerabilities to bridges and tunnels. FEMA is responsible for awarding and administering DHS grant funds in conjunction with responsible program offices. While federal stakeholders play a role in facilitating risk-based infrastructure security efforts, implementation of asset-specific protective security measures remains the responsibility of individual asset owner-operators, most commonly states or other public entities. A number of national organizations and coordination groups exist to represent the broad composition of public and private sector highway infrastructure stakeholders. At the state level, representation is provided by AASHTO. To date, AASHTO has played a key role in representing state interests related to protecting highway infrastructure and routinely collaborates with federal entities to assist its members in enhancing infrastructure security. In April 2006, the Highway GCC was established to foster communication across government agency lines, and between the government and private industry, in support of the nation’s homeland security mission. The Highway GCC membership largely consists of key federal departments and stakeholders responsible for or involved with highway and motor carrier security, but also includes key entities such as AASHTO.
The objective of the Highway GCC is to coordinate highway and motor carrier security strategies and activities; establish policies, guidelines, and standards; and develop program metrics and performance criteria for the highway mode. The counterpart to the Highway GCC is the Highway SCC. This group is composed of private sector owners and operators and representative associations of highway and motor carrier assets. The Highway SCC is an industry advisory body that, as appropriate, is to coordinate the private industry perspective on highway and motor carrier security policy, practices, and standards that affect the highway mode. Federal laws and directives call for critical infrastructure protection activities to help secure infrastructure assets that are essential to national security. While a number of federal laws impose safety requirements on highway infrastructure, no federal laws explicitly require highway infrastructure operators to take action to safeguard their assets against a terrorist attack. In November 2001, the Aviation and Transportation Security Act (ATSA) generally required TSA to (1) receive, assess, and distribute intelligence information related to transportation security; (2) assess threats to transportation security and develop policies, strategies, and plans for dealing with those threats, including coordinating countermeasures with other federal organizations; and (3) enforce security-related regulations and requirements. Further, in November 2002, the Homeland Security Act of 2002 created DHS and mandated IP to comprehensively assess the vulnerabilities of the critical infrastructure and key resources of the United States; integrate relevant information, intelligence analyses, and vulnerability assessments to identify protective priorities and support implemented protective security measures; and develop a comprehensive national plan for securing the key resources and critical infrastructures of the United States.
The Intelligence Reform and Terrorism Prevention Act of 2004 also requires DHS to develop and implement a National Strategy for Transportation Security to include an identification and evaluation of the transportation assets that must be protected from attack or disruption, the development of risk-based priorities for addressing security needs associated with such assets, means of defending such assets, a strategic plan that delineates the roles and missions of various stakeholders, a comprehensive delineation of response and recovery responsibilities, and a prioritization of research and development objectives. More recently, in August 2007, the Implementing Recommendations of the 9/11 Commission Act (9/11 Commission Act), among other things, specified that the transportation modal security plans, including the plan for highways, required by the Intelligence Reform and Terrorism Prevention Act must include threats, vulnerabilities, and consequences, and requires DHS to establish a Transportation Security Information Sharing Plan. The President has also issued directives concerning protecting critical infrastructure. In May 1998, Presidential Decision Directive 63 (PDD-63) established critical infrastructure protection as a national goal and presented a strategy for cooperative efforts by the government and infrastructure stakeholders to protect the physical and cyber-based systems essential to the minimum operations of the economy and the government. In addition, in December 2003, HSPD-7 was issued, superseding PDD-63. HSPD-7 defines responsibilities for DHS, federal stakeholders that are responsible for addressing specific critical infrastructure sectors—sector-specific agencies, and other departments and stakeholders. 
HSPD-7 instructs these sector-specific agencies to collaborate with all relevant Federal departments and agencies, State and local governments, and the private sector, including with key persons and entities in their infrastructure sector; conduct or facilitate vulnerability assessments of the sector; and encourage risk management strategies to protect against and mitigate the effects of attacks against critical infrastructure and key resources. HSPD-7 designates DHS as responsible for, among other things, coordinating national critical infrastructure protection efforts and establishing uniform policies, approaches, guidelines, and methodologies for integrating federal infrastructure protection and risk management activities within and across sectors. Moreover, Homeland Security Presidential Directive-8 (HSPD-8), issued at the same time as HSPD-7, directs DHS to coordinate the development of an all-hazards National Preparedness Goal that establishes measurable priorities, targets, standards for preparedness assessments and strategies, and a system for assessing the Nation’s overall level of preparedness. Further, in December 2006, the President issued Executive Order 13416, which focused on strengthening the security of surface transportation modes and requires DHS to assess the security of each surface transportation mode and evaluate the effectiveness and efficiency of current surface transportation security initiatives. For additional key federal laws and guidance related to critical highway infrastructure protection, see Appendix II. Recognizing that each sector possesses its own unique characteristics and risk landscape, HSPD-7 designates Federal Government Sector-Specific Agencies (SSAs) for each of the critical infrastructure sectors, which are to work with DHS to improve critical infrastructure security.
On June 30, 2006, DHS released the NIPP, which developed—in accordance with HSPD-7—a risk-based framework for the development of Sector-Specific Agency (SSA) strategic plans. The NIPP defines roles and responsibilities for security partners in carrying out critical infrastructure and key resources (CIKR) protection activities through the application of risk management principles. Figure 2 illustrates the interrelated activities of the risk management framework as defined by the NIPP, including setting security goals and performance targets, identifying key assets and sector information, and assessing risk information including both general and specific threat information, potential vulnerabilities, and the potential consequences of a successful terrorist attack. The NIPP requires that federal agencies use this information to inform the selection of risk-based priorities and the continuous improvement of security strategies and programs to protect people and critical infrastructure through the reduction of risks from acts of terrorism. The NIPP risk management framework consists of the following interrelated activities:

Set security goals: Define specific outcomes, conditions, end points, or performance targets that collectively constitute an effective protective posture.

Identify assets, systems, networks, and functions: Develop an inventory of the assets, systems, and networks that comprise the nation's critical infrastructure, key resources, and critical functions. Collect information pertinent to risk management that takes into account the fundamental characteristics of each sector.

Assess risks: Determine risk by combining potential direct and indirect consequences of a terrorist attack or other hazards (including seasonal changes in consequences, and dependencies and interdependencies associated with each identified asset, system, or network), known vulnerabilities to various potential attack vectors, and general or specific threat information.
Prioritize: Aggregate and analyze risk assessment results to develop a comprehensive picture of asset, system, and network risk; establish priorities based on risk; and determine protection and business continuity initiatives that provide the greatest mitigation of risk.

Implement protective programs: Select sector-appropriate protective actions or programs to reduce or manage the risk identified, and secure the resources needed to address priorities.

Measure effectiveness: Use metrics and other evaluation procedures at the national and sector levels to measure progress and assess the effectiveness of the national Critical Infrastructure and Key Resources protection program in improving protection, managing risk, and increasing resiliency.

Several federal entities have efforts underway to assess threat, vulnerability, and consequence—the three elements of risk—for highway infrastructure; however, these assessments have not been systematically coordinated among key federal partners. DHS agencies and offices, including TSA, I&A, and USCG, each have efforts underway to assess the threats posed to highway infrastructure, including the most likely tactics that terrorists may use and potential targets. Federal agencies are also assessing the security vulnerabilities of and consequences of an attack on highway assets to some degree, although the scope and purpose of these individual efforts vary considerably. However, the risk assessment activities conducted to date have not been systematically coordinated among the federal partners. Given the competing departmental priorities and limited resources identified by TSA and IP officials, it is important for federal stakeholders to coordinate their efforts and share available risk information to avoid potential duplication, better focus future assessment efforts, and leverage limited resources. Several DHS stakeholders play a role in securing highway infrastructure, including TSA, I&A, IP, and USCG—along with FHWA within DOT.
Collectively, they have a number of independent efforts underway to conduct threat, vulnerability, and consequence assessments of highway assets. Although the scope and purpose of these individual efforts vary by entity and are at various levels of completion, they have been used to a limited extent to assess the general state of security for the sector, and to identify potential security enhancements for a majority of highway infrastructure assets identified as nationally critical. See table 1 for a summary of federal risk assessment activities related to highway infrastructure assets. DHS stakeholders develop a combination of products that identify what they have determined to be the most probable threat scenarios involving highway infrastructure. For example, TSA’s OI issues an annual threat assessment of the U.S. highway system and provides additional threat and suspicious incident information to key federal and nonfederal highway infrastructure stakeholders as needed. Recent suspicious activity involving highway infrastructure reported by the media could suggest potential terrorist plans to attack the nation’s highway system. For example, in July 2008, the media reported a U.S.-educated female Pakistani neuroscientist suspected of having links to Al Qaeda, while captured in Afghanistan, was found carrying handwritten notes referring to a “mass casualty attack” on famous locations in New York, including the Brooklyn Bridge. In addition to the issuance of the Highway Threat Assessment, TSA’s OI has also developed likelihood estimates for specific threat scenarios involving highway infrastructure. These estimates include scores of both terrorist intent and capability—the key components of threat—for five specific threat scenarios. These scores are intended to serve as the input for the threat component of the overall risk equation that TSA uses: Risk = ƒ(Threat x Vulnerability x Consequences). 
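The multiplicative risk equation cited above can be illustrated with a short sketch. This is an illustrative example only: TSA's actual scoring scales, weights, and functional form are not public, so the 0-to-1 component scales and the simple product used here are assumptions.

```python
def risk_score(threat: float, vulnerability: float, consequence: float) -> float:
    """Combine the three risk components multiplicatively, following the
    general form Risk = f(Threat x Vulnerability x Consequence).
    Each component is assumed (for illustration) to be scored on a 0-1 scale."""
    for name, value in [("threat", threat),
                        ("vulnerability", vulnerability),
                        ("consequence", consequence)]:
        if not 0.0 <= value <= 1.0:
            raise ValueError(f"{name} must be in [0, 1], got {value}")
    return threat * vulnerability * consequence

# A key property of the multiplicative form: overall risk is zero
# whenever any single component is zero.
print(risk_score(1.0, 0.5, 0.5))  # 0.25
print(risk_score(0.6, 0.0, 0.8))  # 0.0
```

The multiplicative form reflects why both intent/capability (threat) and exploitable weaknesses (vulnerability) must be present for a scenario to carry risk: a well-protected asset scores low even under a high threat, and vice versa.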
The Homeland Infrastructure Threat and Risk Analysis Center (HITRAC), which is a joint program office between the Office of Infrastructure Protection and the Office of Intelligence and Analysis, manages the Strategic Homeland Infrastructure Risk Assessment process. The results of this process provide a national overview of current high-risk scenarios for all critical infrastructure and key resources, which includes attacks on select highway infrastructure. In developing these scenarios, analysts consider terrorist capability and intent (threat), as well as vulnerability and consequence information. While this product is not intended to cover the full range of potential threat scenarios posed to the highway sector, it may serve to assist TSA and other federal highway security stakeholders in identifying specific high-risk scenarios that may require additional focus or resources. As part of its annual risk assessment of maritime infrastructure, USCG has also developed a number of threat scenarios involving select bridges and tunnels. USCG uses threat information provided internally by its Intelligence Coordination Center to evaluate 19 different attack scenarios for each infrastructure asset via the Maritime Security Risk Analysis Model (MSRAM). As with TSA and IP, USCG uses threat information as an input when conducting assessments of potential vulnerabilities and consequences of an attack on maritime highway infrastructure. According to the NIPP, DHS is responsible for ensuring that comprehensive vulnerability assessments are performed for infrastructure that is deemed nationally critical. Given the potential for loss of life, economic disruption, and other impacts resulting from an attack on critical highway infrastructure, DHS stakeholders and other federal partners have a number of efforts underway to assess the vulnerabilities of these assets. These efforts are intended to help identify potential security gaps and prioritize mitigation solutions. 
However, the degree to which vulnerability assessments have been completed for individual highway infrastructure assets varies considerably among these entities, given their available resources and other security priorities. For example, given the substantial number of highway infrastructure assets under its jurisdiction and staffing limitations, TSA's Highway Motor Carrier Division (HMC) has chosen to identify highway infrastructure vulnerabilities by working primarily with State departments of transportation to identify the extent to which common security practices are employed. However, more comprehensive asset-specific vulnerability analyses are conducted by both IP and USCG, although the scope and purpose of the resulting products vary considerably. While these distinct entities each have vulnerability assessment efforts underway, the assessment efforts of TSA and IP have slowed considerably due to other identified priorities, and no timeframes currently exist for their completion. In addition, during the course of this review TSA officials stated that TSA, as the Sector-Specific Agency for highway infrastructure, had not yet determined whether asset-specific federal vulnerability assessments should be completed for all critical highway infrastructure. However, when providing written comments on this report in January 2009, TSA officials noted that TSA intends to conduct individual assessments, beginning in 2009, on all bridge and tunnel properties it has identified as critical. The following describes the specific vulnerability assessment activities conducted by DHS entities and their federal partners. Through its CSR program, HMC conducts interviews with state officials to assess the security plans, policies, and security actions of organizations whose operations include critical highway infrastructure.
As part of these interviews, TSA utilizes standardized questions to document the extent to which security efforts have been implemented within 11 functional areas, including security planning, physical security measures, and security training programs, among others. These security reviews focus primarily on state DOT offices, but may include other state agencies with transportation security functions, such as the Offices of Emergency Management or Homeland Security. At the time of our review, HMC officials stated that the resources associated with conducting vulnerability assessments make it impractical to conduct asset-specific assessments of the vast number of bridges and tunnels that comprise the nation's highway system. For this reason, HMC had chosen to utilize a primarily non-asset-specific approach to conducting vulnerability assessments of the highway infrastructure sector, through the CSRs. HMC officials stated that they rely on infrastructure owners and operators to conduct asset-level vulnerability assessments on highway assets, and that they generally review these findings as a component of their CSR activities. However, as previously stated, after reviewing a draft of this report, TSA commented in January 2009 that it intends to conduct individual assessments, beginning in 2009, on all bridge and tunnel properties that TSA has identified as critical. Since the CSR program was initiated in May 2004, HMC has completed CSRs for most of the states and a select number of CSRs for specific highway infrastructure assets. According to HMC officials, the goal of these efforts is to assess potential security gaps and provide state officials with suggested actions for strengthening security. However, the pace of TSA's CSR program has slowed considerably in recent years, and no timeframe currently exists for completing CSRs in all 50 states.
Specifically, most of the state-level CSRs were conducted during the first two years of the program's implementation, which began in May 2004. HMC officials stated that a combination of competing priorities and a reduction in staff available to perform CSRs led to the slowing of this effort. Specifically, HMC officials said that the 9/11 Commission Act placed a number of additional requirements on the division, such as completing a national risk assessment for school buses. While HMC officials are currently planning to conduct highway infrastructure CSRs in all remaining states, it remains unclear if, or when, this will be achieved. In accordance with standard program management principles, timeframes or milestones should typically be incorporated as part of a road map to achieve a specific desired outcome or result. The voluntary nature of the CSR program contributes to TSA's inability to establish clear timeframes for completion. For example, according to HMC officials, two states have already declined to participate in the CSR program because they perceived little security risk to their assets. In January 2009, HMC officials said that one of those states subsequently reversed its decision and is willing to participate in the CSR program. In 2008, HMC also began conducting follow-up state-level CSRs in states previously assessed, and had completed a limited number of such assessments as of January 2009. According to TSA officials, the purpose of these visits is to update existing data and determine current infrastructure security efforts at the state level. In the absence of CSR vulnerability data for infrastructure assets in the remaining states, TSA may rely on other mechanisms to obtain these data. As outlined in HSPD-7, the SSA is responsible for conducting or facilitating vulnerability assessments across the sector. According to TSA officials, the CSR effort represents their primary mechanism for meeting this responsibility.
Yet, given the competing priorities and resource limitations identified by HMC, there may be limited value in expending further resources to complete highway infrastructure CSRs in states or territories lacking any critical assets. Specifically, only two remaining states or territories that have not undergone a CSR have any highway infrastructure assets deemed nationally critical by IP. However, to obtain vulnerability information for the remaining critical assets, TSA could conduct a CSR visit or collaborate with other highway sector stakeholders. For example, HMC may be able to leverage the resources of other federal partners that have completed vulnerability assessments for those assets. Another potential option is to use the existing bridge safety program to obtain information about critical asset vulnerabilities. According to HMC officials, they are currently conducting pilot programs with several states to incorporate security-related questions within the mandatory National Bridge Inspection program conducted biennially by state inspectors. While TSA has stated that it intends to conduct individual assessments on all bridge and tunnel properties that it has identified as critical, TSA does not plan to begin those assessments until our review is completed. Thus, it is too early to tell whether these assessments will provide TSA with sufficient data about asset vulnerabilities to make informed decisions about sector needs and priorities. As part of its responsibility to help protect critical infrastructure in all industry sectors, since 2002, IP has completed a number of vulnerability assessments of specific highway infrastructure assets through two key programs. Specifically, IP has conducted, or participated in, assessments evaluating vulnerabilities of major roadways, bridges, and tunnels as part of its SAV and BZPP programs.
While the scope and purpose of these two programs differ considerably, they each serve to provide DHS, as well as applicable stakeholders and owners and operators, with detailed information about identified asset vulnerabilities to develop and prioritize mitigation efforts. Site Assistance Visits (SAVs). This voluntary program includes asset- level vulnerability assessments conducted by a federally-led team in partnership with asset owners and operators. SAVs are designed to facilitate discussion about vulnerability identification and mitigation between security partners and asset owners and operators. The visits, which take between one and three days to complete, incorporate various attack scenarios to identify potential asset vulnerabilities that could be exploited by a potential terrorist. Given the voluntary nature of the SAVs, implementation of identified mitigation measures is not required through the program; however, IP provides asset owners and operators with “options for consideration” intended to help them detect and prevent terrorist attacks. According to IP officials, their experience has shown that asset operators are generally willing to address these options because it is in their best economic and social interest to do so, given the potential consequences that may result in the event of an attack. As of January 2009, IP has conducted SAVs on a number of highway infrastructure assets; however, many of these were completed prior to July 2005. Buffer Zone Protection Program (BZPP). Under this DHS grant program, IP assists state and local authorities, as well as private industry, in developing protection plans for critical infrastructure assets, including selected highway assets. Unlike the SAV, which focuses on the security of infrastructure assets directly, the BZPP focuses on the buffer area surrounding an asset that a terrorist may use to conduct surveillance or an attack. 
While DHS provides the assessment tools as well as operational and technical support, the actual BZPP assessment is conducted by local law enforcement agencies with jurisdiction over the selected asset. Based on the vulnerabilities identified during the assessment, a Buffer Zone Plan is developed cooperatively by IP and state and local partners to address potential security gaps and identify measures to deter terrorist activity near key assets. As part of this plan, recommended enhancements are identified that may be eligible for grant funding based on a validation of the assessment and approval of a spending plan by IP officials. Potential items funded through this program include personal protective equipment, interoperable communication equipment, patrol boats, and detection equipment, among others. Since October 2002, a number of highway infrastructure assets have been assessed through the BZPP program, and additional highway assets have been assessed since fiscal year 2006. While BZPP and SAV assessments serve as some of DHS's principal efforts to identify vulnerabilities and inform risk analysis of the highway sector, the pace of both of these activities has slowed considerably since 2006 due, in large part, to competing agency priorities. According to IP officials, the principal reason for the reduction in these activities is the office's focus on sectors that are a higher priority, such as dams and nuclear facilities. Since 2006, these sectors have been deemed a higher priority due to the potential for catastrophic effects resulting from a terrorist attack. Moreover, it is uncertain to what extent IP vulnerability assessments will be conducted on additional highway infrastructure assets in the future because no timeframes for additional assessments currently exist and future resource priorities remain unknown.
As part of its maritime security responsibilities, USCG completes an annual risk assessment of all key bridges and tunnels that are located on or within U.S. navigable waters. In addition to this broad effort, USCG has also conducted more comprehensive vulnerability assessments for a number of critical maritime bridges and tunnels as part of its Terrorist Operations Assessments, completed in the wake of the attacks of September 11, 2001. Maritime Security Risk Analysis Model (MSRAM). Each year, USCG uses the MSRAM to develop a risk score for maritime infrastructure likely to result in significant potential consequences if attacked, including select bridges and tunnels, as part of its port-wide risk assessments. The vulnerability component of the model is determined by identifying any applicable protective measures employed, such as access controls, perimeter security and surveillance, and explosives detection, among others, against a number of identified threat scenarios. According to USCG officials, all available federal assessments, such as SAVs, as well as those conducted by private contractors, are incorporated into the analysis to assist in determining the vulnerability of each asset being assessed. The purpose of the model is to identify port critical infrastructure that may pose the highest overall risk. The resulting information is then used to prioritize USCG security efforts and guide security planning actions with maritime stakeholders. USCG does not regulate or enforce the risk mitigation efforts for bridges and tunnels. According to USCG officials, these efforts remain voluntary and it is the owner's or operator's responsibility to implement potential countermeasures. The MSRAM tool currently covers approximately 370 maritime bridges and tunnels, including the majority of critical highway assets identified by DHS in 2007. Terrorist Operations Assessments.
USCG also performed vulnerability analyses on a number of maritime bridges and tunnels as a component of port-wide security assessments conducted at the nation's most critical ports after the attacks of September 11, 2001. These vulnerability assessments were conducted on individual bridges and tunnels selected based on a combination of their perceived criticality and the absence of any previous federal assessments. According to USCG officials, these assessments helped inform the agency's infrastructure security operations and were incorporated into the MSRAM analysis described above. The results of these assessments were also shared with the owners and operators of the assets, according to USCG officials. Although DHS entities are currently the primary lead for federal highway infrastructure risk assessments, FHWA has played a key role in facilitating these efforts. In 2003, FHWA began conducting risk management workshops and responding to requests by state officials to conduct vulnerability assessments of selected bridges and tunnels that the states had identified as critical. To date, FHWA has taken the lead in conducting assessments at the state or local level, as well as additional asset-specific assessments. Collectively, these assessments cover a number of individual bridges and tunnels, including some identified as critical assets. According to FHWA, owners generally receive a report of all assessment findings, including a suite of measures that can be used to make a facility more secure. However, officials noted that it remains the decision of the asset owner to determine how much risk to accept and how much money should be invested to protect against terrorism. From 2004 through 2005, FHWA also played a key role in helping USCG conduct its port-wide vulnerability assessments.
According to FHWA officials, their current role is to help support DHS's overall efforts to protect highway infrastructure by providing subject matter expertise; participating in assessments with various DHS entities; conducting training; and developing guidance, in conjunction with AASHTO, to assist states in conducting their own risk assessments of transportation infrastructure. Although federal entities have collected consequence information as part of their ongoing efforts to identify critical assets and conduct vulnerability assessments, detailed consequence assessments of highway infrastructure have been limited. According to the NIPP, risk assessments should include consequence assessments to measure key effects on the well-being of the nation. These effects include the negative consequences for public health and safety, the economy, public confidence in national economic and political institutions, and the functioning of government that can be expected if an asset, system, or network is damaged, destroyed, or disrupted by a terrorist attack. On a sector-wide basis, TSA and IP work together to develop a list of highway infrastructure assets deemed nationally critical based on several consequence-related factors, such as the potential loss of life and economic impact. While this list is not intended to provide the type of detailed consequence information used to prioritize mitigation decisions between specific assets, as called for in the NIPP, DHS officials stated that it serves to identify those assets that should be considered when conducting more comprehensive risk assessments of the sector. Since 2007, IP has been responsible for developing critical asset lists for all critical infrastructure and key resources in conjunction with applicable SSAs and state and territorial Homeland Security Advisors. This list is broken into two distinct tiers based on estimated consequences to the nation.
The first list, Tier 1, comprises critical infrastructure assets and key resources that, if disrupted or destroyed, would have significant negative consequences. Currently, no highway infrastructure assets are included on the Tier 1 list. The Tier 2 list includes highway infrastructure assets that, based on established criteria, are also likely, if destroyed, to result in relatively significant negative consequences to the nation. As part of DHS's effort to assess risk to the nation's critical infrastructure, HITRAC also engages in a collaborative effort with SSAs to collect consequence information. Specifically, HITRAC incorporates analysis of potential consequences when developing the high-risk threat scenarios contained within the SHIRA report. For example, HITRAC disseminates worksheets to each of the SSAs to collect estimates of the consequences resulting from a variety of different attack scenarios. For each scenario, the SSA develops numerical rankings for several categories of potential consequences, including potential loss of life, economic effects, psychological consequences, and potential effect on agency mission. Upon review of these data, HITRAC is then able to identify and prioritize those scenarios that are likely to result in significant potential consequences relative to other attack methods or targets. In addition, some asset-level federal vulnerability assessments, such as SAVs, also include estimates of potential consequences. For example, the standard template used to record information during these visits incorporates a series of questions regarding consequences to estimate the potential loss of life and other economic consequences resulting from an attack, and to determine how critical the asset is based on its interdependencies with other transportation systems or facilities.
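The worksheet-based scenario prioritization described above can be sketched in a few lines. This is a hypothetical illustration only: the scenario names, the 1-to-5 ranking scale, and the equal weighting of consequence categories are all assumptions, since HITRAC's actual SHIRA methodology is not public.

```python
# Hypothetical SSA worksheet rankings (scale of 1-5) for several
# illustrative attack scenarios, across four consequence categories.
scenarios = {
    "VBIED against major suspension bridge": {
        "loss_of_life": 5, "economic": 5, "psychological": 4, "mission": 3},
    "Sabotage of rural highway overpass": {
        "loss_of_life": 2, "economic": 2, "psychological": 1, "mission": 1},
    "Attack on urban tunnel ventilation": {
        "loss_of_life": 4, "economic": 3, "psychological": 4, "mission": 2},
}

def consequence_score(rankings: dict) -> float:
    """Equal-weight average across consequence categories (an assumption)."""
    return sum(rankings.values()) / len(rankings)

# Rank scenarios from highest to lowest estimated consequence, as an
# analyst might when deciding which scenarios warrant additional focus.
prioritized = sorted(scenarios, key=lambda s: consequence_score(scenarios[s]),
                     reverse=True)
for name in prioritized:
    print(f"{consequence_score(scenarios[name]):.2f}  {name}")
```

Aggregating the category rankings into a single score, then sorting, is what lets relative comparisons be made across dissimilar attack methods and targets.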
Although these consequence estimates are a key component of an asset-specific risk assessment, not all critical highway assets have been subject to an SAV assessment that would allow consequence data to be evaluated nationwide to help establish protection priorities. Similarly, USCG also calculates consequence scores for all maritime critical infrastructure as a key component of its MSRAM analysis; however, not all of the nation's critical bridges and tunnels have a maritime nexus to which USCG analysis applies. While federal entities are conducting a number of individual efforts to assess highway infrastructure risks, they have not systematically coordinated these efforts or shared the results. Federal entities have collectively conducted asset-level vulnerability assessments on a substantial percentage of the highway infrastructure assets identified on the 2007 Tier 2 list. However, limited mechanisms exist to share the assessment results among the various federal partners to inform their own assessment efforts. For example, HMC reported that it is generally unfamiliar with the assessment processes, mechanisms, and results of the other DHS entities, particularly IP. Without adequate coordination mechanisms, the potential exists for duplication and inadequate leveraging of federal resources. For example, multiple vulnerability assessments were conducted by federal agencies for numerous assets that were on the fiscal year 2007 Tier 2 list. Specifically, IP and USCG conducted assessments on a number of the same assets identified as critical. Given the number of highway infrastructure assets identified as critical, it is especially important to ensure that future risk assessment efforts are effectively coordinated between federal entities and the results shared among these entities. As the SSA for highway infrastructure security, TSA is responsible for facilitating and coordinating risk assessment activities and protection efforts for these assets.
As further specified in the NIPP, the SSA is responsible for the overall coordination and facilitation of comprehensive risk assessment programs for the sector, which include gathering all available threat, vulnerability, and consequence information from sector partners for use in national risk management efforts. Our previous work has also indicated that a key component of successful collaboration between federal agencies is the effective leveraging of available resources. While TSA is compiling limited vulnerability assessment information through its CSR program, no policies or mechanisms currently exist to coordinate this effort with those of other federal partners. Considering that IP and USCG are conducting nearly all of the federal asset-specific vulnerability assessments completed to date, TSA is missing an opportunity to fully inform its vulnerability analysis for the highway infrastructure sector and validate the findings obtained from its CSRs. While some efforts have been initiated by DHS entities to improve the coordination of highway infrastructure assessment activities, such actions have been limited. According to USCG officials, MSRAM analysis routinely includes the review of completed IP assessments of port-related infrastructure, including bridges and tunnels; however, coordination between the other two agencies is less mature. For example, HMC officials were generally unfamiliar with the scope of IP's SAV assessments and were unaware of how these activities might be leveraged to achieve mutual goals. According to TSA officials, they began to receive notifications of IP assessments in July 2008; however, in September 2008, they stated that they generally do not review these assessments or incorporate the results. HMC officials also stated that they have not reached out to obtain MSRAM data because they believe that port areas are well managed by USCG.
Similarly, IP officials stated that they had not requested or reviewed the results of TSA's highway infrastructure CSRs. According to IP officials, a Protective Measures Section was created in fiscal year 2008 to consolidate and track IP assessments as part of the Vulnerability Assessment Project. This project, as described in the IP Strategic Plan: FY 2008-2013, was originally intended to also provide a mechanism to track and analyze the vulnerability assessments conducted by other federal, state, local, and private sector partners in order to enhance coordination and collaboration with stakeholders, eliminate duplication of effort, and enable assessment prioritization. However, IP officials stated that, due to a lack of funding, the scope of this effort was limited to IP's own vulnerability assessments. Another area where collaboration between federal partners may be improved involves the potential streamlining, or standardization, of existing assessment tools and methodologies. As outlined in the NIPP, vulnerability assessments need to be comparable to support national-level and cross-sector analysis. Further, HSPD-7 requires DHS to establish uniform policies, approaches, guidelines, and methodologies for integrating federal infrastructure protection and risk management activities within and across sectors. However, a number of varied risk assessment tools and methodologies exist both within and across sectors that differ in terms of assumptions, comprehensiveness, and objectivity. Efforts to combine or streamline some of these tools and methodologies may help enhance the comparability and usefulness of the various risk assessments. For example, IP's Strategic Plan: FY 2008-2013 identifies opportunities for the development of a scalable methodology, in collaboration with other SSAs, to standardize current approaches for identifying vulnerabilities and promote better coordination and collaboration.
USCG officials also cited the need for a comprehensive risk analysis model so that all sectors could use a common tool. According to the Highway Modal Annex to the TSSP, issued in May 2007, TSA was working with DOT agencies, including the Federal Motor Carrier Safety Administration (FMCSA) and FHWA, to combine their respective risk assessment and risk mitigation tools into a single product that would reduce redundancy, increase efficiencies, and minimize the impact on private stakeholders. However, in October 2008, FHWA officials stated that this effort had not occurred. The Modal Annex does not identify any additional plans for TSA to combine or incorporate any other key risk assessment tools, including USCG's MSRAM tool, IP's risk assessment and mitigation tools, or AASHTO's risk methodology. While the development of a single risk assessment tool that meets the individual needs of the distinct federal entities involved in highway infrastructure security may not be a realistic alternative, opportunities remain for DHS to identify where specific assessment tools and methodologies can be used most effectively to enhance assessments and better leverage future resources. Effective coordination of federal vulnerability assessments and sharing of assessment results is all the more important given the large number of highway infrastructure assets. Without adequate coordination with federal partners, TSA will be unable to determine the extent to which specific critical assets have been assessed and whether adjustments to its own CSR methodology may be necessary to adequately target remaining critical infrastructure assets. Given the resource limitations and competing priorities of TSA and IP discussed previously, it is increasingly important for federal entities to coordinate their risk assessment activities and to share all available risk information to avoid duplication, better focus future assessments, and more effectively leverage resources.
While DHS has developed a strategy—the Highway Modal Annex—to secure the nation's highway infrastructure, it is not based on completed risk assessments that would help ensure that federal programs and resources are focused on the areas of greatest need. Moreover, the Annex can be strengthened to better address the requirements of Executive Order 13416, Strengthening Surface Transportation Security, and to more fully incorporate the characteristics of an effective national strategy. In addition, we identified areas where the Highway Modal Annex can be strengthened to enhance its value to highway security stakeholders by providing greater clarity of roles and focusing resources to protect highway infrastructure. TSA plans to revise the strategy in the near future, as required by the Annex and in accordance with TSA guidance, and officials stated that they would consider enhancing the Annex to address these areas at that time. In May 2007, TSA published the Highway Modal Annex, which documents DHS's strategy for securing the nation's highway infrastructure; however, while both the NIPP and the TSSP outline a framework whereby infrastructure protection efforts are to be guided by risk assessments of critical assets, the TSSP Highway Modal Annex is not fully informed by available vulnerability and consequence information. The Annex describes key TSA and FHWA programs related to highway infrastructure security efforts, as well as how transportation sector goals and objectives are to be achieved to protect the highway transportation system. However, while nearly all of TSA's and IP's completed vulnerability assessments were conducted prior to the issuance of the Highway Modal Annex, their results were not used to develop the Annex. Both the NIPP and the TSSP set forth a comprehensive risk management framework that includes a process of considering threat, vulnerability, and consequence assessments together to determine the likelihood of a terrorist attack and the severity of its impact.
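The framework described above treats risk as a function of threat, vulnerability, and consequence considered together. The following is a minimal illustrative sketch of how such assessments might be combined to rank assets for attention; the asset names, the 0.0-1.0 scores, and the multiplicative combination are all hypothetical assumptions for illustration, not values or methods drawn from the Annex or the NIPP.

```python
# Illustrative sketch of a risk framework that considers threat,
# vulnerability, and consequence together.
# All asset names and scores below are hypothetical.

def risk_score(threat: float, vulnerability: float, consequence: float) -> float:
    """Combine the three factors multiplicatively; each is scored 0.0-1.0."""
    return threat * vulnerability * consequence

# Hypothetical assessment inputs for three notional highway assets.
assets = {
    "Bridge A": {"threat": 0.6, "vulnerability": 0.8, "consequence": 0.9},
    "Tunnel B": {"threat": 0.4, "vulnerability": 0.5, "consequence": 0.7},
    "Overpass C": {"threat": 0.2, "vulnerability": 0.9, "consequence": 0.3},
}

# Rank assets by combined risk to suggest where to target assessments first.
ranked = sorted(
    assets.items(),
    key=lambda item: risk_score(**item[1]),
    reverse=True,
)

for name, factors in ranked:
    print(f"{name}: {risk_score(**factors):.3f}")
```

A multiplicative combination is only one possible choice; the NIPP leaves the precise weighting of the three factors to the assessing agency, so an actual tool such as MSRAM may combine them differently.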
In addition, the TSA guidance used to assist each mode in drafting the Annex specifies that the Annex should emphasize how each mode will use risk-informed decision making to determine the specific actions required to achieve the transportation sector goals and objectives. According to HMC officials, the Highway Modal Annex was developed in conjunction with the Highway GCC and SCC using available threat information, professional judgment, and information about past terrorist incidents. However, HMC officials stated that they did not review available IP and USCG vulnerability and consequence assessments of highway infrastructure—which represent the vast majority of asset-specific information. According to these officials, the initial development of the Highway Modal Annex was limited by time constraints, which impaired HMC's ability to consider more comprehensive risk assessment information and incorporate stakeholder input. However, officials stated that they anticipate that future revisions to the TSSP Highway Modal Annex will consider more risk assessment information and stakeholder input. In addition, HMC officials said that they are working on developing a separate national bridge strategy to supplement the Annex, but officials did not have a time frame for its completion. According to the TSA guidance used to develop the Highway Modal Annex, the Highway GCC and SCC are to review the Annex annually and make periodic interim updates as required, which provides TSA with an opportunity to consider the results of risk assessments to inform its strategy moving forward. The Highway GCC and SCC are instructed to conduct a complete revision of TSA's Highway Modal Annex every three years, and as necessary in the interim. HMC began the process of revising and updating the TSSP Highway Modal Annex in 2008 to allow time for the revised strategy to be reviewed by government and sector stakeholders.
However, HMC officials stated that they did not know when the revision would be issued. Without considering the results of available risk assessments, TSA is limited in its ability to assist highway infrastructure operators in prioritizing investments based on risk and to target resources toward security measures that will have the greatest impact. In reviewing the Highway Modal Annex, we identified areas in which the Annex does not fully address requirements outlined in Executive Order 13416, Strengthening Surface Transportation Security, which was issued in December 2006 to address surface transportation security challenges consistent with the NIPP risk management framework. Executive Order 13416 requires that the Secretary of Homeland Security assess the security of each surface transportation mode and evaluate the effectiveness and efficiency of current surface transportation security initiatives. In addition, the Executive Order required the Secretary to develop modal annexes that include, at a minimum: an identification of existing security guidelines and requirements and any security gaps; a description of how the TSSP will be implemented for each mode, and the respective roles, responsibilities, and authorities of Federal, State, local, and tribal governments and the private sector; schedules and protocols for annual reviews of the effectiveness of surface transportation security-related information sharing mechanisms; and a process for assessing compliance with any security guidelines and requirements issued by the Secretary for surface transportation, and the need for revisions of such guidelines and requirements to ensure their continuing effectiveness.
Although Executive Order 13416 requires the identification of existing security guidelines and security requirements for each surface transportation mode, the Annex does not reference existing guidance developed by other federal and state highway infrastructure stakeholders, including IP, FHWA, or AASHTO guidance on protective measures for highway infrastructure. TSA acknowledged that this information is missing from the Annex. Without including such information in TSA's national strategy for highway security, the agency is missing opportunities to identify and leverage available guidance resources for securing highway infrastructure. In addition, as called for in Executive Order 13416, the Annex does identify a number of existing security gaps related to highway infrastructure, and recognizes that addressing potential threats to the highway system is particularly challenging because of the openness of the system. However, while the Annex identifies that the conveyance of hazardous materials poses the greatest threat to highway infrastructure—and is where HMC has focused its efforts—the Annex provides few details about the different types of threats to highway infrastructure and their relative likelihood. For example, the Annex does not describe how terrorists might use explosives against highway infrastructure. According to the Annex, some bridges and tunnels are especially vulnerable because their structural components are in some cases easily accessible and because the assets themselves are located in remote areas. Furthermore, Executive Order 13416 requires DHS to describe how the TSSP will be implemented within each transportation mode, yet we identified areas where the Annex could improve its description of how the TSSP would be implemented. For example, although not specifically required, the Annex lacks milestones.
Specifically, the Annex does not indicate time frames or milestones for its overall implementation or for accomplishing specific actions or initiatives for which entities can be held responsible. In addition, the Annex's priorities, goals, and supporting objectives and activities are not ranked by importance. Executive Order 13416 also calls for modal annexes to include a description of the roles and responsibilities of key stakeholders, which the Highway Modal Annex only partially addresses because the Annex does not clearly define the authorities of federal, state, local, and tribal governments and the private sector to secure highway infrastructure. For example, the Annex does not identify that TSA has the authority to issue and enforce security-related regulations and requirements it deems necessary to protect transportation assets. In addition, the Highway Modal Annex discusses the Highway GCC and Highway SCC roles and responsibilities related to highway and motor carrier security strategies and activities, as well as policies, guidelines, and standards, and the development of program metrics and performance criteria for the mode. It also describes several TSA and FHWA highway-related risk assessment programs involving collaboration with stakeholders. However, the strategy does not identify the specific roles of federal and nonfederal stakeholders such as HMC, IP, FEMA, CBP, FHWA, or AASHTO in the protection of critical highway infrastructure or key assets. HMC officials attributed these omissions to the short turnaround time required to develop the Annex. In addition, HMC officials stated that the Annex was vetted by a variety of stakeholders, including IP, and no one raised concerns over the absence of a description of the roles of these federal and nonfederal entities and their programs. HMC officials stated that they were willing to consider including these entities in future revisions of the Annex.
Moreover, the Annex does not identify lead, support, and partner roles related to highway infrastructure security. For example, CBP is responsible for prohibiting the entry into the United States of people or goods that pose a security threat, as well as for protecting the infrastructure within the footprint of the ports of entry, while TSA is responsible for the security of all modes of transportation, including any associated infrastructure. An overlap in responsibility exists when the people or goods crossing the border intend to harm infrastructure; for example, a truck crossing a border bridge with the intent to destroy the bridge. Our prior work has highlighted the importance of addressing which organizations will implement a national strategy, their roles and responsibilities, and mechanisms for coordinating their efforts. We assessed the Highway Modal Annex using desirable characteristics developed in our prior work on national strategies and found several areas where future versions of the Annex can be enhanced. Our prior work has shown that national strategies can be more useful if they contain characteristics such as a description of the purpose, scope, and methodology of the strategy; goals, objectives, activities, and performance measures; a definition of roles and responsibilities and mechanisms for coordinating; the sources and types of resources and investments associated with a strategy; and a description of how a national strategy will be integrated with other national strategies and how it will be implemented. We believe that these characteristics can assist DHS in strengthening and implementing the Highway Modal Annex going forward, as well as enhance its usefulness in resource and policy decisions and better ensure accountability. This characteristic addresses the purpose for developing the strategy, the scope of its coverage, and the process by which it was developed.
In addition to describing what it is meant to do and the major functions, mission areas, or activities it covers, a national strategy would ideally address the methodology used to develop it. For example, a strategy might discuss the principles or theories that guided its development, what organizations or offices drafted the document, whether it was the result of a working group, or which parties were consulted in its development. The purpose and scope of the strategy are generally described in the Annex. For example, the Annex provides a description of the nation's highway transportation system and how transportation sector goals and objectives will be achieved to protect the highway transportation system. However, the Annex does not explain the methodology used in its development. For example, while the Highway Modal Annex references the NIPP and TSSP as providing the principles or theories that guided its development, the Annex does not describe the process and information that was used to develop it. HMC officials attributed this omission to the fact that the TSA guidance used to develop the Highway Modal Annex did not require that the development process and supporting information be documented. HMC officials stated that stakeholders used their collective professional judgment to develop the Annex. This characteristic addresses what the national strategy strives to achieve and the steps needed to garner those results, as well as the priorities, milestones, and performance measures that will be used to gauge results. At the highest level, this could be a description of an ideal “end-state” of the strategy, followed by a logical hierarchy of major goals, subordinate objectives, and specific activities to achieve results. Our prior work has shown that long-term, action-oriented goals and a time line with milestones are necessary to track an organization's progress toward its goals.
Ideally, a national strategy would set clear desired results and priorities, specific milestones, and outcome-related performance measures while giving implementing parties flexibility to pursue and achieve those results within a reasonable time frame. While the Highway Modal Annex identifies individual high-level goals, subordinate objectives, and specific activities to achieve results that are aligned with the specific goals and objectives identified in the TSSP, it does not describe key related activities. The Annex identifies three major goals—prevent and deter acts of terrorism using or against the transportation system, enhance the resilience of the transportation system, and improve the cost-effective use of resources for transportation security. The three goals are underpinned by objectives, such as an objective supporting the goal of implementing flexible, layered, and effective security programs using risk management principles. The objectives, in turn, have accompanying activities. For example, one of the supporting activities for the goal to prevent and deter acts of terrorism using or against the transportation system is HMC's CSR program. The Annex focuses on HMC and FHWA activities, however, and does not describe several key related federal and nonfederal activities. For example, the Highway Modal Annex does not describe the relationship of IP's Vulnerability Assessment program, USCG's risk assessment activities related to highway infrastructure, the S&T Directorate's related research and development projects, AASHTO's security design standard development efforts, or CBP's activities related to international border crossings as they relate to supporting the Annex's goals and objectives. In addition, one of the Annex's objectives is to enhance information and intelligence sharing among transportation security partners.
Accordingly, the strategy identifies the Highway Information Sharing and Analysis Center (ISAC) and the Homeland Security Information Network (HSIN) as two mechanisms to share information with highway infrastructure stakeholders. However, the Annex does not discuss how HSIN complements or differs from other information sharing tools, such as DHS's Lessons Learned Information System (LLIS), as it concerns highway infrastructure. The Annex also does not discuss how HSIN is related to state efforts for sharing information. For example, during our review, one of the states we visited was developing a Web site to share information for transportation security stakeholders that would potentially duplicate or overlap with information available through HSIN or LLIS. Furthermore, TSA, in conjunction with the Highway GCC and the Highway SCC, has not developed a baseline set of performance goals and measures linked to the Annex's goals, objectives, and activities for securing highway infrastructure, nor has it established a time frame for assessing and improving the preparedness of highway infrastructure against an attack. The NIPP requires DHS to work with its security partners to develop sector-specific metrics. In addition, the Government Performance and Results Act (GPRA), as well as Standards for Internal Control in the Federal Government, requires that agencies use performance measurement to reinforce the connection between their long-term strategic goals and the day-to-day activities of their managers and staff. In addition, the Office of Management and Budget requires all programs to have at least one cost efficiency measure as part of their mix of performance measures. With respect to highway infrastructure security, performance measures would gauge the extent to which federal efforts and highway infrastructure operators are achieving the Annex's goals and objectives.
HMC officials stated that although they recognize the importance of measuring the effectiveness of security efforts, they have not developed performance measures for highway infrastructure. HMC officials attributed this omission to the fact that the TSA guidance used to develop the Highway Modal Annex did not require performance measures. Without performance measures and an evaluation of the effectiveness of the Annex's goals and objectives, TSA will lack meaningful information from which to determine whether the strategy is achieving its intended results and to target any needed improvements. This characteristic addresses which organizations will implement the strategy, their roles and responsibilities, and mechanisms for coordinating their efforts. It helps answer the fundamental question about who is in charge, not only during times of crisis, but also during all phases of homeland security and combating terrorism efforts: prevention, vulnerability reduction, and response and recovery. This characteristic entails identifying the specific federal departments, agencies, or offices involved and, where appropriate, the different sectors, such as state, local, private, or international sectors. In our past work, we reported that a successful strategy clarifies implementing organizations' relationships in terms of leading, supporting, and partnering. In addition, a strategy could describe the organizations that will provide the overall framework for accountability and oversight. Furthermore, a strategy might identify specific processes for collaboration between sectors and organizations—and address how any conflicts would be resolved. Our previous work on effective interagency collaboration has also demonstrated that a strategy should provide a mechanism to ensure that the parties are prepared to fulfill their assigned responsibilities. The Annex provides limited information related to collaboration between highway infrastructure stakeholders.
In addition, the 9/11 Commission Act requires DHS and DOT to develop and execute an annex to the memorandum of understanding (MOU) between the two agencies, which was signed in September 2004, that addresses motor carrier security. The annex must delineate the specific roles, responsibilities, and resources needed to address motor carrier transportation security matters and the processes the departments will follow to promote communications and efficiency and to ensure nonduplication of effort. HMC officials stated that they plan to develop a similar annex to the MOU for highway infrastructure, but they do not have a timetable for doing so. Our prior work has shown that collaboration between federal stakeholders can be improved by clearly identifying organizational roles, responsibilities, and specific processes for collaboration between sectors—and how any conflicts would be resolved. HMC officials stated that such an annex would serve to lay the groundwork and provide the proper protocols for sharing data and personnel, and would acknowledge leadership roles and responsibilities to strengthen highway infrastructure security. The 9/11 Commission Act also requires that DHS, to the greatest extent practicable, provide public and private stakeholders with transportation security information in an unclassified format. The Highway Modal Annex provides limited details on the processes, policies, and mechanisms through which it will collaborate or on what is needed to enhance information and intelligence sharing. For example, the Annex does not describe HITRAC's role related to information sharing. HITRAC is a joint organization between IP and the Critical Infrastructure Threat Analysis Division within I&A that is to integrate, analyze, and share information regarding threats and risks to U.S. critical infrastructure for DHS, other federal departments and stakeholders, the intelligence community, state and local governments and law enforcement stakeholders, and the private sector.
HMC officials attributed this omission to the fact that the TSA guidance used to develop the Highway Modal Annex did not require a description of how collaboration is to occur or of what is needed to enhance information and intelligence sharing. The Act also required DHS to establish a plan to share transportation information relating to the risks to transportation modes, including the highway mode, that was due in early 2008; however, the plan has not yet been completed. TSA officials said that DHS was developing the information sharing plan, but they did not know when the plan would be issued. Development of such a plan could improve information sharing by clarifying roles and responsibilities and clearly articulating actions to address any remaining challenges, including consideration of appropriate incentives for nonfederal entities to increase information sharing with the federal government, increase sector participation, and perform other specific tasks to protect critical highway infrastructure. This characteristic addresses what the strategy will cost, the sources and types of resources and investments associated with the strategy, and where those resources and investments should be targeted. Ideally, a strategy would also identify criteria and appropriate mechanisms to allocate resources—such as grants, in-kind services, loans, and user fees—based on identified needs. Alternatively, as our prior work has shown, the strategy might identify appropriate “tools of government,” such as regulations, tax incentives, and standards, to mandate or stimulate nonfederal organizations to use their unique resources. The Highway Modal Annex does not describe any incentives that could be used to encourage owners to conduct voluntary risk assessments, such as grants or training that could be used to determine the best courses of action to reduce potential consequences, threats, or vulnerabilities, as required by the NIPP.
These incentives are important because asset owners are not currently regulated by TSA. According to HMC officials, the TSA guidance used to develop the Highway Modal Annex did not require a description of possible incentives. In addition, HMC officials said that they are working on developing a separate national bridge strategy to supplement the Annex. According to HMC officials, the national bridge strategy is to assist the stakeholder community in assessing both the criticality and the security vulnerabilities of its assets; identify the most appropriate and cost-effective mitigation tools; and serve as a mechanism for identifying sources of funding that are exclusively dedicated to security needs and do not require diversion of funding that is otherwise reserved for safety or structural enhancement or refurbishment. However, this effort has not been completed, and HMC does not have a time frame for its implementation. In addition, the Annex identifies that measures to secure assets of the highway transportation system must be implemented in a way that balances cost, efficiency, and preservation of the nation's commerce; however, it provides relatively few details on the types and levels of resources associated with implementing security measures or on where to target resources for securing highway infrastructure. Highway infrastructure operators have received some federal funding for implementing security upgrades since September 11, 2001, but available funding has been limited due to competing priorities, such as dams and nuclear facilities. Targeting investments is especially important given that the current economic environment makes this a difficult time for private industry or state and local governments to make security investments.
This characteristic addresses both how a national strategy relates to other strategies' goals, objectives, and activities, and how it relates to subordinate levels of government and their plans to implement the strategy. For example, a national strategy could discuss how its scope complements, expands upon, or overlaps with other national strategies. Similarly, related strategies could highlight their common or shared goals, subordinate objectives, and activities. In addition, a national strategy could address its relationship with relevant documents from implementing organizations, such as strategic plans, annual performance plans, or annual performance reports. A strategy might also discuss, as appropriate, the various strategies and plans produced by state, local, private, or international stakeholders. The Highway Modal Annex contains certain elements of this characteristic, but it lacks a description of how it relates to other strategies. For example, the Annex references FHWA's Multiyear Plan for Bridge and Tunnel Security Research, Development, and Deployment, which highlights efforts to secure the nation's highway infrastructure. However, the Highway Modal Annex does not define its relationship with other related strategies or federal actions, or address its relationship with other plans by federal, state, local, and international implementing parties. Specifically, although TSA is engaged in three strategic planning initiatives that have similar goals but slightly different requirements, the Annex does not discuss its relationship to these strategies. First, the Intelligence Reform and Terrorism Prevention Act of 2004 requires a strategy for transportation security—the National Strategy for Transportation Security (NSTS)—containing the identification and evaluation of transportation assets and appropriate mitigation approaches.
Second, the NIPP and HSPD-7 require each sector to prepare a sector-specific plan in collaboration with its security partners across government and private industry. Third, Executive Order 13416 contains requirements for developing modal annexes to the TSSP for surface modes of transportation. However, the Annex does not discuss how its scope complements, expands upon, or overlaps with these strategic plans and guidance. In addition, the Annex does not discuss how the programs in IP's strategic plan complement or overlap with the Highway Modal Annex. Without such information in TSA's national strategy for highway security, the agency is missing opportunities to build on organizational roles and responsibilities and further clarify relationships, which could improve the strategy's implementation. Government and industry highway sector stakeholders have taken actions to mitigate the risks to highway infrastructure through a combination of efforts, including developing publications and conducting seminars, sponsoring research and development activities, and implementing specific infrastructure protection measures. However, because HMC does not routinely conduct asset-specific assessments of highway infrastructure, TSA does not have a mechanism to monitor the implementation of the voluntary government and industry security enhancements put in place to address identified asset vulnerabilities and help protect the nation's critical highway infrastructure. TSA is tasked with assessing and evaluating the effectiveness and efficiency of current federal government surface transportation security initiatives. According to TSA officials, such a monitoring mechanism for voluntary efforts is not necessary because TSA obtains the information that it needs to monitor highway infrastructure security efforts through HMC's CSR efforts. However, the CSRs are at a high level and do not provide a means to assess the protective security measures implemented for specific assets.
Lacking a mechanism to monitor the implementation of protective security measures, TSA cannot evaluate the effectiveness of existing programs or assess the overall security preparedness of the nation's critical highway infrastructure. Highway sector stakeholders have taken a variety of voluntary actions intended to enhance the security of highway infrastructure. Key efforts include developing security publications, sponsoring infrastructure security workshops, conducting research and development activities, and implementing specific protective measures intended to deter an attack or reduce potential consequences, such as security patrols, electronic detection systems, and physical barriers. Overall, these programs and activities are intended to provide asset owners and operators with tools and guidance for assessing highway infrastructure security risks, highlight effective practices in security planning and vulnerability reduction, and share technical expertise and information for enhancing asset security. See table 2 for a summary of key highway infrastructure security programs and activities. Highway infrastructure stakeholders have developed a number of products and programs intended to facilitate the identification of critical assets and provide guidance for conducting security planning. Many of these products and programs are joint efforts between state highway agencies, represented by AASHTO, and federal partners, including TSA, FHWA, and the Transportation Research Board (TRB). Since 2002, AASHTO, through TRB's Cooperative Research Programs, has sponsored or developed several key publications that serve to assist states in identifying critical assets, performing risk assessments, and evaluating options for reducing asset vulnerabilities, including providing a characterization of the potential costs and challenges associated with infrastructure security enhancements.
According to AASHTO, all state DOTs have access to, and a large majority (84 percent) are using, AASHTO guidance on vulnerability and criticality assessment and risk management to determine the extent and nature of vulnerabilities to their state’s transportation systems. As discussed previously, IP has also developed and issued several reports that provide sector stakeholders guidance on security measures and identify general threats and common vulnerabilities for highway infrastructure assets. In addition, IP provides stakeholders with guidance on security measures to implement based on homeland security advisory system threat levels. According to IP officials, these reports are made available to industry stakeholders via an internet portal. TSA, FHWA, and AASHTO have also co-sponsored a series of regional conferences to facilitate the exchange of information about effective security practices and communicate stakeholder concerns and implementation challenges. These conferences provide state transportation officials with a forum to share knowledge concerning infrastructure protection methods and help them identify potential training and guidance resources available. In a separate effort, FHWA also provided risk management training to bridge and tunnel engineers, asset operators, and first responders through a series of workshops. These workshops, introduced in 2003, are intended, in part, to provide highway infrastructure stakeholders a methodology for identifying vulnerabilities and developing appropriate and cost-effective risk mitigation plans. In addition, a security awareness training program is provided as part of the Trucking Security Program, directed at highway sector professionals, including truck and motor coach drivers, highway engineers, and law enforcement, to help them identify and report suspicious activity on the nation’s highway system.
A collection of research and development activities designed to secure highway infrastructure is currently being conducted by federal and state entities. As outlined in the Homeland Security Act of 2002, DHS is responsible for, among other things, working with federal laboratories and the private sector to develop innovative approaches to address homeland security challenges. Within the highway sector, these activities include research on the vulnerabilities of bridges and tunnels to various types of explosives and experimental methods to help protect these assets. At the federal level, research and development activities are coordinated through the DHS Transportation Sector Working Group. With fairly broad-based representation—including representatives from TSA, IP, the S&T Directorate, FHWA, and state DOTs, among others—this group serves to identify potential research areas, which are then prioritized by IP and executed by DHS’s S&T Directorate. According to S&T officials, highway infrastructure has been a focus of infrastructure security research efforts in recent years. Since 2005, bridges, in particular, have been prioritized to gain a better understanding of their potential vulnerabilities and identify better retrofit techniques. Some individual projects identified through this effort include the development of measures to reduce the vulnerability of underwater tunnels to flooding and of bridge cables to potential attacks, as well as research on failure mechanisms and mitigation against explosive attacks and other cross-cutting research. See appendix IV for a list of selected highway infrastructure research and development projects. Other key research programs include the National Cooperative Highway Research Program (NCHRP), administered by TRB, and FHWA’s Transportation Pooled Fund Study program.
Through the NCHRP, a number of research projects are conducted each year to address highway-related research issues proposed by AASHTO. Although highway infrastructure security comprises just one component of the program’s research portfolio, several security-related products have been developed in recent years. Some of these products include guidance on securing transportation tunnels and a tool to estimate the impact of disruption of key transportation choke points. The Transportation Pooled Fund Study is a separate program, administered by FHWA, whereby states and other agencies contribute to a pooled fund to conduct research or provide training or education materials desired by the contributors. Some proposed products include experimentally verified mitigation measures, clearly defined roles and responsibilities for state DOTs in infrastructure security, risk management training tailored to bridge and tunnel vulnerability assessments, blast mitigation measures for steel bridge towers, and a bridge surveillance and security technology database, among others. While federal stakeholders play a role in facilitating risk-based infrastructure security efforts, the actual implementation of asset-specific protective security measures remains the responsibility of individual asset owners and operators, most commonly states or other public entities. Unlike some other transportation modes, such as commercial aviation, no federal laws explicitly require highway infrastructure owners to take security actions to safeguard their assets against a terrorist attack. The protection of highway infrastructure is being undertaken using a voluntary approach, although TSA retains the authority to issue and enforce security-related regulations and requirements it deems necessary to protect transportation assets.
According to HMC officials, TSA’s decision to implement a voluntary approach to highway infrastructure security is based on available threat information, as well as information obtained during CSR activities, which indicates to them that states are generally aware of their security responsibilities and are implementing protective actions. In addition, HMC officials stated that a voluntary approach to security requires reduced federal resources and provides a greater amount of buy-in and acceptance from asset owners than government regulations. Asset owners have implemented a range of voluntary protective security measures to help ensure public safety and protect their highway infrastructure assets. For example, asset owners commonly employ measures such as cameras or other surveillance equipment, and install fencing and other physical barriers to control access to vulnerable structures, among other protective measures. (See appendix III for additional examples of protective security measures for highway infrastructure assets.) Specific mitigation measures typically fall into three broad categories: Deterrence and Detection. These mitigation measures secure access to restricted areas and reduce the likelihood of a potential attack. Common protective security measures include installing fencing, improving lighting, conducting security patrols, and installing electronic detection systems. Defense. Defensive measures are intended to reduce the consequences of a successful attack. For example, installation of a physical barrier around vulnerable components or systems, such as a bridge pier, may reduce the impact of an explosive blast on the structure. Design and Redesign. These efforts are intended to harden planned or existing infrastructure assets against potential attacks by incorporating security considerations into engineering designs.
According to highway infrastructure operators, factors such as competing priorities and budgetary constraints greatly influence whether security measures are implemented. One principal factor affecting the implementation of security measures, identified by some state officials we spoke to, is the availability of revenue sources to fund security improvements for individual assets. For example, bridges and tunnels funded by user fees, such as tolls, could generate additional revenue for security enhancements. Alternately, mitigation measures financed with general federal and state transportation funds may be limited due to competing state priorities. However, the federal government has provided funds to state and local stakeholders to implement highway infrastructure improvements through a combination of several FEMA grant programs. Since 2004, FEMA has funded 60 highway-related security projects, totaling approximately $34 million (see table 3). Some of these projects include funding for additional cameras and surveillance equipment, watercraft for investigation and response to threats, and interoperable communication equipment, among others. States have generally taken actions to help secure their highway infrastructure; however, wide variation exists regarding the implementation of specific protection efforts. According to TSA’s 2006 summary of its CSRs, all of the states polled have completed at least some security-related actions among the 11 functional areas assessed by TSA. However, TSA reported that the level of implementation of security actions varied between states. For example, TSA reported that background checks of transportation workers conducted by state agencies ranged from criminal history, driving record, and citizenship checks down to reference checks for employment applications. According to TSA, the need for background checks varied from state to state, since the perceived threat and the level of risk tolerance also vary by state.
In another example, most of the states responded that they conducted security planning at the state level; however, according to TSA, state governments vary considerably in the way the security plans are organized. For example, they reported that states assign different security functions to different agencies—particularly for transportation security functions. Each agency does some level of planning to ensure its ability to perform its functions. As a result, these preparations are documented in different places, including emergency response plans, traffic management plans, hazardous materials management plans, National Guard plans, homeland security advisory level preparedness plans, continuity of operations plans, and police patrol plans. Some of the plans are more complete than others, depending on the diligence of the agency. TSA reported that most of these states were able to produce a document that defined basic responses to different threat levels and defined who was in charge. Similar variation in state responses and the scope of individual efforts was also illustrated in several of the other security-related functional areas. The variation in state security efforts identified by TSA is generally consistent with what we identified during interviews with officials and observations of select highway infrastructure in five states. Although the specific protective security measures implemented at the 13 individual assets we visited varied, we identified some common mitigation themes, such as investment in new security equipment, leveraging law enforcement resources, and identifying incident response roles, among others. Specific protective measures identified by asset owners with whom we spoke include increased surveillance efforts—adding cameras and other detection equipment—as well as installation of fencing, physical barriers, and implementation of enhanced access controls.
In addition, some state officials we interviewed stated that they restricted access to building designs and response plans, increased their patrol of critical structures, and implemented stand-off distances. Although government and industry stakeholders have taken actions to address the risks to highway infrastructure, TSA lacks a mechanism to determine the extent to which specific protective security measures have been implemented for critical assets. Such a mechanism is important to evaluate the security preparedness of nationally critical infrastructure assets and to help ensure that TSA’s voluntary approach to highway infrastructure security remains adequate. For example, a monitoring mechanism would provide TSA with feedback regarding how its existing programs and security initiatives, in conjunction with highway stakeholders, are translating into specific security actions by asset owners. TSA is tasked with assessing the security of each transportation mode and evaluating the effectiveness and efficiency of current federal government surface transportation security initiatives. In addition, Standards for Internal Control in the Federal Government generally calls for controls to be designed to ensure that an agency has relevant and reliable information about programs and that ongoing monitoring occurs. However, TSA has not documented how it will monitor the industry’s progress in implementing voluntary highway infrastructure protective security measures for assets identified as nationally critical. Although various federal entities have issued suggested security measures to asset owners, the extent to which they have been implemented remains unclear. DHS risk assessment activities, including the CSR and SAV programs, have identified highway infrastructure assets that would benefit from additional security measures and have suggested a number of voluntary protective actions to asset owners to implement these enhancements.
However, given the voluntary nature of these programs, TSA, IP, and USCG stated that they do not know the extent to which asset owners are implementing the protective security measures identified by completed risk assessments for critical infrastructure. In addition to the competing resource priorities previously identified, IP officials stated that monitoring the implementation of voluntary protective security measures remains difficult due to limited resources. Specifically, they stated that IP does not have the resources needed to conduct follow-up assessments on all Tier 1 and Tier 2 assets across all critical infrastructure and key resources. They also noted that repeated visits may create a burden on private sector partners. In 2008, IP implemented the Enhanced Critical Infrastructure Protection initiative. This effort involves sending PSAs to all Tier 1 and Tier 2 assets, including transportation infrastructure. According to DHS, while this is a voluntary, non-regulatory program, PSAs conduct initial and follow-up visits to CIKR and document the implementation of enhanced security and protective measures. According to HMC officials, the completion of a second round of state CSR visits will provide an opportunity to review whether asset owners are implementing previous CSR-related security considerations; however, the follow-up visits will be performed over a four-year cycle and will not be conducted at the asset level. While these efforts are a positive step, they do not provide the type of detailed information necessary to ensure that specific highway infrastructure assets, particularly those deemed nationally critical, are protected. According to TSA officials, the collection of more detailed data about protective measures is not currently feasible given available resources and other security priorities.
However, HMC officials have stated that alternative cost-effective methods of collecting this information may be available, such as potentially leveraging the resources of state transportation inspectors during biannual bridge safety inspections. According to these officials, this approach would provide a means to assess the protective security measures implemented for specific assets. Lacking a mechanism to monitor what protective security measures are being implemented to protect the nation’s critical highway infrastructure assets, TSA is unable to determine, with any degree of certainty, the level of overall security preparedness of these assets. In addition, without a process in place to better understand what security measures owners and operators are implementing, TSA is not effectively utilizing available information to help identify potential security gaps, establish protection priorities, and determine what, if any, additional measures may be needed to enhance highway infrastructure security. Securing the nation’s vast and diverse highway infrastructure is a daunting task. The nature, size, and complexity of this infrastructure highlight the need for federal and non-federal entities to work together to secure these assets and enhance security. While the cost of enhancing highway infrastructure security can be significant, the potential costs of a terrorist attack, in terms of both the loss of life and property and long-term economic impacts, would also be significant, although difficult to predict and quantify. The importance of the nation’s highway infrastructure and the limited resources available to protect it underscore the need for a risk management approach to prioritize security efforts so that a proper balance between costs and security can be achieved.
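The risk management approach described above rests on combining threat, vulnerability, and consequence information to decide where limited security resources should go first. As a rough illustration only, the sketch below ranks hypothetical assets using a common simplification in which risk is scored as the product of normalized threat, vulnerability, and consequence ratings; the asset names and ratings are invented and do not represent TSA's or DHS's actual assessment methodology.

```python
# Illustrative sketch of risk-based prioritization, NOT an actual agency
# methodology. Scores each hypothetical asset as the product of normalized
# threat, vulnerability, and consequence ratings, then ranks them.

def risk_score(threat: float, vulnerability: float, consequence: float) -> float:
    """Combine the three risk elements (each rated on a 0-1 scale) into
    a single score; the product is a common simplification of
    risk = f(threat, vulnerability, consequence)."""
    return threat * vulnerability * consequence

# Hypothetical assets with (threat, vulnerability, consequence) ratings.
assets = {
    "Bridge A": (0.6, 0.8, 0.9),
    "Tunnel B": (0.4, 0.5, 0.7),
    "Interchange C": (0.3, 0.6, 0.4),
}

# Rank assets from highest to lowest risk to set protection priorities.
ranked = sorted(assets.items(), key=lambda kv: risk_score(*kv[1]), reverse=True)
for name, (t, v, c) in ranked:
    print(f"{name}: {risk_score(t, v, c):.3f}")
```

In practice, agencies may weight the three elements differently or use qualitative scales, but the prioritization step is the same: resources are directed first to the assets with the highest combined scores.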
By not fully evaluating the risks posed by terrorists to the nation’s highway infrastructure through available assessments, TSA and its security partners are limited in their ability to focus resources on those highway infrastructure vulnerabilities that represent the most critical security needs. The large and diverse group of stakeholders involved in highway infrastructure security makes it difficult to achieve the cooperation and consensus needed to move forward with security efforts. As we have noted in past reports, coordination and consensus-building are critical to the successful implementation of security efforts. By coordinating risk assessment activities and sharing the results of risk assessments, DHS could more effectively use scarce resources to target further assessment activities and mitigate identified risks. By developing the Highway Modal Annex for highway infrastructure, TSA established strategic goals and objectives, a key first step in implementing a risk management approach. However, highway infrastructure stakeholders could benefit from a Highway Modal Annex that clearly describes their roles, responsibilities, relationships, and expectations for securing highway infrastructure and provides accountability for accomplishing its objectives. Moreover, performance measures developed in conjunction with the Highway GCC and SCC are important to assist TSA in evaluating the effectiveness of highway infrastructure programs, based on desired results that are defined by the Annex. Without performance measures, TSA may not have information with which to systematically assess these programs’ strengths, weaknesses, and performance. Additional guidance on where to target resources and investments would help implementing parties allocate resources and investments according to priorities and constraints, track costs and performance, and shift such investments and resources as appropriate.
We recognize that the Highway Modal Annex is not an endpoint for communicating and providing a framework for protecting highway infrastructure, but rather, a starting point. As with any planning effort, implementation is the key. The ultimate measure of this strategy’s value will be the extent to which it proves useful as guidance for policy and decision-makers in allocating resources and balancing highway infrastructure security priorities with other important, non-highway infrastructure security objectives. It will be important over time to obtain and incorporate feedback from the stakeholder community as to how the strategy can better provide this guidance, and how Congress and the executive branch can identify and remedy impediments to implementation, such as legal, jurisdictional, or resource constraints. Finally, while the varied actions government and industry stakeholders have taken to address the risks to highway infrastructure are important initial efforts, without a mechanism to monitor what protective security measures are being taken to secure nationally critical infrastructure, TSA cannot fully determine the extent of security preparedness across the nation’s highway infrastructure. We are recommending that the Secretary of Homeland Security take the following three actions: To enhance collaboration among federal entities involved in securing highway infrastructure and better leverage federal resources, we recommend that the Secretary of Homeland Security establish a mechanism to systematically coordinate risk assessment activities and share the results of these activities among the federal partners. 
To help ensure that highway infrastructure stakeholders are provided with useful information to identify and prioritize potential infrastructure security measures, enhance future planning efforts, and determine the extent to which specific protective security measures have been implemented, we recommend that the Secretary of Homeland Security direct the Assistant Secretary for the Transportation Security Administration, in consultation with the Highway Government Coordinating Council and the Highway Sector Coordinating Council, to take the following actions: (1) for the upcoming revision to the Highway Modal Annex: in addition to the results of threat assessment information, incorporate the results of available vulnerability and consequence assessment information into the strategy for securing highway infrastructure; consistent with Executive Order 13416 and desirable characteristics of an effective national strategy, identify existing guidance developed by other federal and state highway infrastructure stakeholders; indicate timeframes or milestones for its overall implementation for which entities can be held responsible; more clearly define security-related roles and responsibilities for highway infrastructure security activities for itself and other federal stakeholders, state and local governments, and the private sector; establish a timeframe for developing performance goals and measures for monitoring the implementation of the Annex’s goals, objectives, and activities; and provide more guidance on resources, investments, and risk management to help implementing parties allocate resources and investments according to priorities and constraints; and (2) develop a cost-effective mechanism to monitor the implementation of voluntary protective security measures on highway infrastructure assets identified as nationally critical. We provided a draft of this report to DHS for review and comment.
DHS provided written comments on January 21, 2009, which are presented in appendix VI. In commenting on the draft report, DHS and TSA reported that they concurred with all three of our recommendations and have started to develop plans to implement these recommendations. With regard to our first recommendation that DHS establish a mechanism to systematically coordinate risk assessment activities and share the results of these activities among federal partners, DHS stated that TSA will have the lead in developing a sector-coordinated risk assessment. TSA stated that it recognizes that it is responsible for all transportation security matters, must fulfill its leadership role in the highway infrastructure arena, and is prepared to assume responsibility for all highway infrastructure security issues. TSA added that it will request that all DHS, DOT, and state or local governmental bodies make TSA the repository for all risk assessment models and data associated with this mode. Toward this goal, DHS stated that TSA has convened representatives of both DHS and DOT agencies to produce the “National Strategy for Highway Bridge Security,” which is currently under review by agencies and offices within both departments. Once fully vetted, DHS believes that this document will provide for appropriate participation and coordination of efforts by all federal agencies engaged in highway infrastructure security. We support TSA’s efforts to improve coordination and develop the National Strategy for Highway Bridge Security. The intent of our recommendation is to help DHS avoid potential duplication, better focus future assessment efforts, and leverage limited resources. Thus, if TSA’s efforts result in a mechanism that systematically coordinates risk assessment activities among the federal partners, this effort would go far in addressing the intent of our recommendation.
Developing a plan that establishes a mechanism to systematically coordinate risk assessment activities and share the results of these activities among federal partners will also be an important and necessary step in fulfilling the agency’s oversight and coordination responsibilities. TSA concurred with our second recommendation to include the results of available vulnerability and consequence assessment information in the upcoming revision to the Highway Modal Annex. In addition, TSA agreed to incorporate existing guidance developed by other federal and state highway infrastructure stakeholders, more clearly define security-related roles and responsibilities, establish a timeframe for the Annex’s overall implementation, and develop performance goals and measures. TSA stated that at the time of the drafting of the first iteration of the Highway Modal Annex, such vulnerability and consequence data were not available. TSA further stated that as the agency has expanded its CSR program, become more familiar with the stakeholder community’s security practices, and conducted much more detailed analyses of vulnerability and mitigation tools, it has improved its ability to conduct more comprehensive risk assessments that address threat, vulnerability, and consequences. TSA further stated that while those elements were considered in the preparation of the initial Annex, the document itself did not adequately explain how they were incorporated into the resulting strategy, and that future Annex publications would better explain TSA’s use of all three risk elements. TSA agreed that the agency is in the best position to provide strategy guidance, coordination, and oversight in this area. TSA also agreed that implementation milestones and preparedness timeframes are appropriate for the Highway Modal Annex.
However, TSA cautioned that any limitations on the stakeholder community’s implementation strategies will be based on a lack of resources, and indicated that the National Strategy for Highway Bridge Security is intended to help responsible stakeholders find resources dedicated exclusively to addressing the security needs of their structures. TSA stated that it does not believe that direct regulation is appropriate for the stakeholder community accountable for highway structures because, based on its experience, TSA believes this to be an overwhelmingly responsible constituency that will be highly proactive given appropriate resources and guidance. However, until TSA provides the details of how it plans to address our recommendation that it incorporate available vulnerability and consequence information into the Highway Modal Annex and take other steps to strengthen the Annex, it remains unclear whether TSA can demonstrate that the Annex provides highway infrastructure stakeholders with available useful information to identify and prioritize potential infrastructure security measures, enhances future planning efforts, clarifies roles and responsibilities, and provides accountability. With regard to our third recommendation to develop a cost-effective mechanism to monitor the implementation of voluntary protective security measures on highway infrastructure assets identified as nationally critical, TSA agreed and stated that it is moving forward to identify a variety of mechanisms to monitor the voluntary security measures implemented with respect to critical highway structures. TSA stated that in fiscal year 2009, using funds made available specifically for this purpose for the first time since TSA was created, the agency will begin conducting individual vulnerability assessments on the nationally critical Tier 2 structures list.
According to TSA, each assessment will be accompanied by a TSA-recommended approach to risk mitigation, and TSA will track the status of those recommendations on a periodic basis. TSA stated that its security partners will be kept informed of the progress of this effort. In addition, TSA stated its intention to clearly identify any impediments to the implementation of voluntary security measures and to assist stakeholders in executing identified measures. Our intention in making this recommendation is for TSA to have the tools to allow it to more effectively monitor the level of overall security preparedness of critical assets, help identify potential security gaps, establish protection priorities, and determine what, if any, additional measures may be needed to enhance highway infrastructure security. Despite TSA’s stated plans, the agency has not indicated the frequency with which it plans to compile or analyze information on highway infrastructure operators’ security practices for critical assets, nor did TSA provide a time frame for completing the asset-specific vulnerability assessments or identify what mechanisms would be used to monitor the implementation of voluntary protective security measures on highway infrastructure assets identified as nationally critical. Taking such actions would be necessary to fully address the intent of this recommendation. In addition, TSA noted that GAO has misstated or misinterpreted a key fact involving TSA’s desire and intention to conduct individual vulnerability assessments on critical highway structures. TSA believes this misstatement significantly affects the findings of the report. TSA noted that the report indicates that TSA has either not decided whether to conduct such assessments or determined that they do not need to be done. Furthermore, TSA stated that it intends to conduct individual assessments on all bridge and tunnel properties that TSA has identified as critical, beginning in 2009.
However, TSA did not indicate its desire to conduct these assessments, nor did it provide any documentation to support these plans, during the course of this review. Rather, throughout this review, TSA officials repeatedly told us that the resources associated with conducting individual vulnerability assessments of critical assets made it impractical to conduct such assessments. For this reason, TSA officials stated that they would utilize a primarily non-asset-specific approach to conducting vulnerability assessments of the highway infrastructure sector, through the CSR program, and that the agency would rely on infrastructure owners and operators to conduct asset-level vulnerability assessments on highway assets. TSA officials did not make us aware of the agency’s plans to conduct individual vulnerability assessments of critical assets until the agency provided written comments on a draft of this report in January 2009. While we acknowledge TSA’s stated intention to conduct individual vulnerability assessments on all critical highway infrastructure assets, we do not believe that the agency’s recently reported plans to conduct these assessments affect the findings of this report because our discussion of TSA’s efforts related to highway infrastructure vulnerability assessments was not used as the basis of any of the report’s recommendations. However, we have revised this report to clarify TSA’s plans related to vulnerability assessments. DHS also provided technical comments and clarifications, which we have considered and incorporated where appropriate. As agreed with your office, unless you publicly announce the contents of this report, we plan no further distribution for 30 days from the report date. At that time, we will send copies of this report to the Secretary of Homeland Security, the Secretary of Transportation, the Assistant Secretary of the Transportation Security Administration, and appropriate congressional committees.
In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov/. If you have any further questions about this report, please contact me at (202) 512-3404 or [email protected]. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix VII. You asked us to assess the progress DHS has made in securing the nation's highway infrastructure. This report answers the following questions: (1) To what extent have federal entities assessed the risks to the nation's highway infrastructure and coordinated these efforts? (2) To what extent has DHS developed a risk-based strategy, consistent with applicable federal guidance and characteristics of an effective national strategy, to guide its highway infrastructure security efforts? (3) What actions have government and highway sector stakeholders taken to secure highway infrastructure, and to what extent has DHS monitored the implementation of asset-specific protective security measures? To determine the extent to which federal entities have assessed the risks to the nation's highway infrastructure and coordinated these efforts, we obtained and analyzed risk assessment data from DHS and DOT, comprising various threat, vulnerability, and consequence-related assessments for highway infrastructure assets. We did not assess the quality of the assessments completed. We sought to determine the reliability of these data by, among other things, discussing methods of inputting and maintaining data with agency officials. On the basis of these discussions and our review of the processes used to collect the data, we determined that the data were sufficiently reliable for the purposes of this report.
We interviewed DHS, DOT, and selected state transportation, homeland security, and law enforcement officials, associations representing highway infrastructure owners and operators, and members of the Highway GCC and the Highway SCC to discuss federal risk assessment efforts. The perspectives of the selected state transportation and homeland security officials cannot be generalized to the wider population of highway infrastructure owners and operators because we selected these states based on characteristics including location, as well as input identifying states whose security programs ranged from minimal to more robust; nevertheless, these officials provided us a broad overview of highway infrastructure asset security. We selected the associations that we spoke with based on input from TSA, FHWA, and industry stakeholders, who identified the major associations representing highway infrastructure owners and operators. To determine the extent to which TSA has used a risk management approach to guide decisions on securing highway infrastructure, we compared NIPP and TSSP requirements with TSA's efforts to implement such an approach. We focused on activities related to the strategic planning and risk assessment elements of the NIPP risk management framework because DHS is early in the process of implementing it. The views reported are those of only the individuals we interviewed and are not necessarily representative of the views of others in those organizations. We also reviewed federal coordination and collaboration activities related to stakeholder efforts to assess and strengthen highway infrastructure security and compared them to GAO's recommended coordination practices.
We also discussed with DHS, DOT, and selected state transportation, homeland security, and law enforcement officials, associations representing highway infrastructure operators, and members of the Highway GCC and the Highway SCC the federal coordination and collaboration activities related to stakeholder efforts to assess and strengthen highway infrastructure security, and compared these activities to the coordination requirements established in Homeland Security Presidential Directive-7, as well as GAO's recommended practices for effective collaboration. In addition, we compared TSA's actions regarding performance measurement with requirements in the Government Performance and Results Act and GAO's Standards for Internal Control in the Federal Government regarding the use of performance measurement. To obtain information on how threat information is shared and TSA's efforts to address threats, we met with officials from TSA's Highway Motor Carrier Division, TSA's OI, and HITRAC. Individuals from these offices provided documentation on DHS's and DOT's threat assessment efforts. In addition, we met with officials from DOT's Office of Intelligence regarding the sharing of threat information. To assess the extent to which DHS developed a risk-based strategy consistent with applicable federal guidance and characteristics of an effective national strategy to guide its highway infrastructure security efforts, we reviewed federal agency reports, guidelines, and infrastructure security studies sponsored by industry associations on using risk management, and interviewed DHS and DOT officials and state and industry association highway infrastructure representatives regarding their use of risk management for protecting highway infrastructure. We also analyzed TSA's Highway Modal Annex, the principal strategy for protecting the nation's highway infrastructure, to determine how it aligned with the requirements set out in Executive Order 13416, Strengthening Surface Transportation Security.
In addition, we assessed the extent to which the Highway Modal Annex contained the desirable characteristics of an effective national strategy that we have previously identified. To identify the actions taken by government and highway sector stakeholders to enhance the security of highway infrastructure and assess the extent to which TSA has monitored the implementation of protective security measures implemented by stakeholders, we interviewed DHS, DOT, DOD, and selected state transportation, homeland security, and law enforcement officials, all major associations representing highway infrastructure operators, and members of the Highway GCC and the Highway SCC. We also analyzed TSA, IP, and USCG vulnerability assessments of security practices at the state level and records of GCC and SCC meetings and stakeholder conferences. In addition, we selected 12 bridges and one tunnel to observe security measures implemented since September 11, 2001, and to discuss security-related issues with highway infrastructure owners and operators. We selected these assets based on characteristics including location, ownership, and criticality, as well as input from TSA, DOT, and AASHTO on locations representing assets where minimal to more robust security measures were implemented. Because of the limited number of assets in our sample, and because the selected assets did not constitute a representative sample, the results of our observation and analysis cannot be generalized to the universe of highway infrastructure assets. However, we believe that the observations obtained from these visits provide us with a broad overview of highway infrastructure asset security. We also reviewed federal guidance and applicable laws and regulations. In addition, we observed FHWA training programs and joint stakeholder conferences. We also reviewed DHS Science and Technology Directorate, TSA, DOT, AASHTO, and TRB documents to identify research and development efforts to improve highway infrastructure security.
We also compared TSA's actions to obtain data on actions taken by highway infrastructure stakeholders to enhance security and to monitor implementation of those actions with criteria in GAO's Standards for Internal Control in the Federal Government. We conducted this performance audit from May 2007 through January 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Although there are no laws that specifically address highway infrastructure security or require highway infrastructure owners and operators to take certain security measures, a number of laws that generally address critical infrastructure protection and transportation security have been enacted. Similarly, the President has issued directives, and federal agencies have developed strategies, designed to coordinate the federal effort to ensure the security of critical infrastructure and transportation assets. The table below lists statutes, executive orders, presidential directives, and strategies that address critical infrastructure protection and transportation security. Established the President's Commission on Critical Infrastructure Protection (CIP) to study the nation's vulnerabilities to both cyber and physical threats. Identified the need for the government and the private sector to work together to establish a strategy for protecting critical infrastructures from physical and cyber threats. Established CIP as a national goal and presented a strategy for cooperative efforts by government and the private sector to protect the physical and cyber-based systems essential to the minimum operations of the economy and the government.
Superseded by HSPD-7 (see details on HSPD-7 below). Established the National Infrastructure Simulation and Analysis Center (NISAC) to serve as a source of national competence to address critical infrastructure protection and continuity through support for activities related to counterterrorism, threat assessment, and risk mitigation. Established the Office of Homeland Security, within the Executive Office of the President, to develop and coordinate the implementation of a comprehensive national strategy to secure the United States from terrorist threats or attacks. Established the Homeland Security Council to advise and assist the President with all aspects of homeland security and to ensure the coordination of homeland security-related activities of executive departments and agencies and effective development and implementation of homeland security policies. Established the President's Critical Infrastructure Protection Board, which was to recommend policies and coordinate programs for protecting information systems for critical infrastructure. Created the Transportation Security Administration (TSA) and conferred upon TSA responsibility for security in all modes of transportation. Identified the protection of critical infrastructures and key assets as a critical mission area for homeland security. Specified eight major initiatives for CIP, one of which specifically calls for the development of the NIPP. Created the DHS and assigned it the following CIP responsibilities: (1) developing a comprehensive national plan for securing the key resources and critical infrastructures of the United States; (2) recommending measures to protect the key resources and critical infrastructures of the United States in coordination with other entities; and (3) disseminating, as appropriate, information to assist in the deterrence, prevention, and preemption of or response to terrorist attacks.
Also provided for protection of voluntarily submitted information regarding the security of critical infrastructure. Identifies a set of goals and objectives and outlines the guiding principles that will underpin efforts to secure the infrastructures and assets vital to the nation's public health and safety, national security, governance, economy, and public confidence. Amended Executive Order 13231 but generally maintained the same national policy statement regarding the protection against disruption of information systems for critical infrastructures. Designated the National Infrastructure Advisory Council to continue to provide the President with advice on the security of information systems for critical infrastructures supporting other sectors of the economy through the Secretary of Homeland Security. Superseded Presidential Decision Directive 63 and established that federal departments and agencies will identify and prioritize U.S. critical infrastructure and key resources and protect them from terrorist attack. Defined roles and responsibilities for the DHS and sector-specific agencies to work with sectors to coordinate CIP activities. Established a CIP Policy Coordinating Committee to advise the Homeland Security Council on interagency CIP issues. Directed DHS to coordinate the development of an all-hazards National Preparedness Goal that establishes measurable priorities, targets, standards for preparedness assessments and strategies, and a system for assessing the nation's overall level of preparedness. Required the Secretary of Homeland Security to develop and implement a National Strategy for Transportation Security (NSTS) and modal security plans.
Required the NSTS to include an identification and evaluation of the transportation assets that must be protected from attack or disruption, the development of risk-based priorities for addressing security needs associated with such assets, means of defending such assets, a strategic plan that delineates the roles and missions of various stakeholders, a comprehensive delineation of response and recovery responsibilities, and a prioritization of research and development objectives. Expanded security as a separate factor that must be addressed by statewide and metropolitan transportation plans by requiring that plans provide for consideration of projects and strategies that, among other things, will increase the security of the transportation system for motorized and non-motorized users. Outlines the federal government's approach, in partnership with state, local, and tribal governments and private industry, to secure the U.S. transportation system from terrorist threats and attacks and to prepare the nation by increasing our capacity to respond if either occurs. Expanded the purpose of the NISAC to include support for activities related to a natural disaster, act of terrorism, or other man-made disaster. Specified that the support must include modeling, simulation, and analysis of the systems and assets comprising critical infrastructure, in order to enhance preparedness, protection, response, recovery, and mitigation activities. Required any federal agency with critical infrastructure responsibilities under HSPD-7 to establish a relationship, including an agreement regarding information sharing, between such agency and the NISAC. Provided the framework and set the direction for implementing a coordinated, national effort. It provides a roadmap for identifying Critical Infrastructure/Key Resource assets, assessing vulnerabilities, prioritizing assets, and implementing protection measures in each infrastructure sector.
Established procedures for federal, state, local, and tribal government agencies and contractors regarding the receipt, validation, handling, storage, marking, and use of critical infrastructure information voluntarily submitted to the DHS. Required the Secretary of Homeland Security to assess the security of each surface transportation mode and evaluate the effectiveness and efficiency of current surface transportation security initiatives. Imposed a deadline on the Secretary of Homeland Security to complete the Transportation Sector-Specific Plan (TSSP) and required the Secretary to develop modal annexes that address each surface transportation mode. Establishes the transportation sector's strategic approach and related security framework. Describes how the TSSP will be implemented in the Highway mode. Required the Secretary to establish and maintain a national database of each system or asset that the Secretary determines to be vital and the loss, interruption, incapacity, or destruction of which would have a negative or debilitating effect on economic security, public health, or safety, or that the Secretary otherwise determines to be appropriate for inclusion. Required the Under Secretary for Information Analysis and Infrastructure Protection, not later than 35 days after the last day of each fiscal year, including fiscal year 2007, to submit to the appropriate committees, for each sector identified in the NIPP, a report on the comprehensive assessments carried out by the Secretary of critical infrastructure and key resources, evaluating threat, vulnerability, and consequence.
Required the Secretary, not later than 6 months after the last day of each fiscal year, to submit to the appropriate committees a report that details the actions of the federal government to ensure the preparedness of industry to reduce interruption of critical infrastructure and key resource operations during an act of terrorism, natural catastrophe, or other similar national emergency. Specified that the transportation modal security plans required under 49 U.S.C. § 114(t) must include threats, vulnerabilities, and consequences for aviation, railroad, ferry, highway, maritime, pipeline, public transportation, over-the-road bus, and other transportation infrastructure assets. Required that the National Strategy for Transportation Security include a 3- and 10-year budget for federal transportation security programs that will achieve the priorities of the NSTS; methods for linking the individual transportation modal security plans and a plan for addressing intermodal transportation; and the transportation modal security plans. Required the Secretary, in addition to submitting an assessment of the progress made on implementing the NSTS, to submit an assessment of the progress made on implementing the transportation modal security plans. Required that the progress reports include an accounting of all grants for transportation security; funds requested in the President's budget for transportation security, by mode; personnel working on transportation security, by mode; and information on the turnover in the previous year among senior staff working on transportation security issues. Required the Secretary, at the end of each fiscal year, to submit to the appropriate committees an explanation of any federal transportation security activity that is inconsistent with the NSTS. Required that the NSTS include the Transportation Sector-Specific Plan (TSSP) required by HSPD-7.
Required the Secretary to establish a Transportation Security Information Sharing Plan, and specified the contents of the plan. Required the Secretary, not later than 150 days after enactment and annually thereafter, to submit to the appropriate committees a report containing the plan. Required the Secretary, to the greatest extent practicable, to provide public and private stakeholders with transportation security information in an unclassified format. Required the Secretary, in a semiannual report, to provide to the appropriate committees a report that includes the number of public and private stakeholders that were provided with each report, a description of measures that the Secretary has taken to ensure proper treatment and security for any classified information to be shared with stakeholders, and an explanation of the reason for the denial of information to any stakeholder that has previously received information. Required the Secretary to establish a National Transportation Security Center of Excellence to conduct research and education activities and to develop or provide professional security training. Provided for civil and administrative penalties for violations of transportation security regulations prescribed by the Secretary. Authorized the Secretary to develop Visible Intermodal Prevention and Response (VIPR) teams to augment the security of any mode of transportation in any location in the United States. Authorized to be appropriated such funds as may be necessary to carry out this section for fiscal years 2007 through 2011. Authorized the Secretary to train, employ, and utilize surface transportation inspectors. Required the Secretary to establish a program to provide appropriate information that the Department has gathered or developed on the performance, use, and testing of technologies that may be used to enhance surface transportation security to surface transportation entities.
Required the Inspector General of the DHS, not later than 90 days after enactment, to submit a report to the appropriate committees on the federal trucking industry security grant program for fiscal years 2004 and 2005 that addresses the grant announcement, application, receipt, review, award, monitoring, and closeout process and states the amount obligated or expended under the program for fiscal years 2004 and 2005 for certain purposes. Required the Inspector General of the DHS, not later than 1 year after enactment, to submit a report to the appropriate committees that analyzes the performance, efficiency, and effectiveness of the federal trucking industry security grant program and the need for the program, using all years of available data, and that makes recommendations regarding the future of the program. Exec. Order No. 13,010, 61 Fed. Reg. 37,347 (July 15, 1996). 42 U.S.C. § 5195c. The White House, Office of Homeland Security, National Strategy for Homeland Security. Pub. L. No. 107-296, §§ 201(d), 214, 116 Stat. 2135, 2145-47, 2152-55 (2002). The White House, The National Strategy for the Physical Protection of Critical Infrastructures and Key Assets. 49 U.S.C. § 114(s). Pub. L. No. 109-59, § 6001(a), 119 Stat. 1144, 1839-57 (codified at 23 U.S.C. §§ 134, 135). 6 U.S.C. § 321. 6 C.F.R. §§ 29.1-29.9.

Appendix III: Examples of Selected Protective Security Measures that Could be Implemented by Asset Owners and Operators

Restrict physical access to critical systems and structures: Install fencing and other physical barriers to prevent access to critical bridge elements such as decks, piers, towers, and cable anchors. Utilize a full-time security officer to control access to restricted areas. Utilize security badges or other identification devices to ensure access to restricted areas is properly controlled. Install locking devices on all access gates and utilize remote controlled gates where necessary. Eliminate parking under bridges or near critical structures.
Protect tunnel ventilation intakes with barriers and install and protect ventilation emergency shut-off systems. Utilize creative landscaping to increase standoff distance from critical areas. Surveillance and detection efforts: Provide inspections to identify potential explosive devices, as well as increased or suspicious potential criminal activity. Display signs warning that the property is secured and being monitored. Install CCTV systems where they cannot be easily damaged or avoided while providing coverage of critical areas (to monitor activity, detect suspicious actions, and identify suspects). Install enhanced lighting with emergency backup. Install motion sensors or other intrusion detection systems. Clear overgrown vegetation to improve lines of sight to critical areas. Security planning and coordination: Develop and implement a security plan that serves to identify critical systems and establishes procedures for their protection. Provide emergency telephones to report incidents or suspicious activity. Develop communication and incident-response protocols with applicable local, state, and federal law enforcement. Review locations of trashcans or other storage areas that could be used to conceal an explosive device and ensure they are not near critical areas. Provide pass-through gates in concrete median barriers to enable rerouting of traffic and access for emergency vehicles. Use an advanced warning system, including warning signs, lights, horns, and pop-up barricades, to restrict access after span failure (manually activated or activated by span failure detectors). Shield the lower portions of cables on cable-stayed bridges and suspension bridges with protective armor to protect against damage from blast and fragmentation. Increase the standoff distance and reduce access to critical elements with structural modifications (extending cable guide pipe length, moving guard rails, etc.). Reinforce welds and bolted connections to ensure plastic capacity.
Use energy-absorbing bolts to strengthen connections and reduce deformations. Provide system redundancy to ensure alternate load paths exist should a critical structural element fail or become heavily damaged as a result of a terrorist attack. This project is developing technologies to mitigate the explosive and damaging force from an IED. In fiscal year 2008, the project conducted tests and evaluation of prototype technologies to evaluate blast mitigation performance and performed proof-of-concept demonstrations. In fiscal year 2009, the project plans to begin to develop models to further determine the vulnerability of infrastructure, bridges, and tunnels to various explosive threats. This project is developing rapid mitigation and recovery technologies for critical infrastructure to limit damage and consequences and to more quickly resume normal operations. The project will investigate rapid response and recovery technologies in addition to conducting basic research for the most vital infrastructure assets, such as underwater tunnels, bridges, levees, and dams. This study seeks approaches to address critical vulnerabilities in U.S. transportation tunnels. Beginning in fiscal year 2007, this project surveyed concepts for tunnel protection, including studies on advanced materials for tunnel hardening and identification of an inflatable plug system, based on European technology, to limit the spread of fire. Further development of this system has continued in fiscal year 2008, with full completion and demonstration of a prototype inflatable plug currently scheduled for fiscal year 2010. The following reports represent a sample of products completed at the request of the AASHTO Special Committee on Transportation Security: American Association of State Highway and Transportation Officials. Protecting America's Roads, Bridges, and Tunnels: The Role of State DOTs in Homeland Security. Project 20-59 (16). Washington, D.C., 2005.
Blue Ribbon Panel on Bridge and Tunnel Security. Recommendations for Bridge and Tunnel Security. Project 20-59 (3). Washington, D.C.: Federal Highway Administration, September 2003. Transportation Research Board. A Self-Study Course on Terrorism-Related Risk Management of Highway Infrastructure. Project 20-59 (2). Washington, D.C., 2005. Transportation Research Board. Disruption Impact Estimating Tool-Transportation (DIETT): A Tool for Prioritizing High-Value Transportation Choke Points. Project 20-59 (9). Washington, D.C., 2005. Transportation Research Board. Guide to Making Transportation Tunnels Safe and Secure. Project 20-67. Washington, D.C., 2006. Transportation Research Board. Guidelines for Transportation Emergency Training Exercises. Project 20-59 (18). Washington, D.C., 2005. Transportation Research Board. National Needs Assessment for Ensuring Transportation Infrastructure Security. Project 20-59 (5). Washington, D.C., 2002. Transportation Research Board. Responding to Threats: A Field Personnel Manual. Project 20-59 (6). Washington, D.C., 2003. In addition to the contact named above, Steve Morris, Assistant Director, and Gary M. Malavenda, Analyst-in-Charge, managed this assignment. Jean Orland, Ryan Lambert, Susan Langley, and Dan Rodriguez made significant contributions to the work. Stan Kostyla and Chuck Bausell assisted with design, methodology, and data analysis. Linda Miller provided assistance in report preparation; Tracey King provided legal support; Nikki Clowers provided expertise on physical infrastructure issues; Sara Veale provided expertise on coordination and collaboration best practices; Elizabeth Curda provided expertise on performance management; and Pille Anvelt and Avrum Ashery developed the report's graphics.
The nation's highway transportation system is vast and open--vehicles and their operators can move freely and with almost no restrictions. Securing the U.S. highway infrastructure system is a responsibility shared by federal, state and local government, and the private sector. Within the Department of Homeland Security (DHS), the Transportation Security Administration (TSA) has primary responsibility for ensuring the security of the sector. GAO was asked to assess the progress DHS has made in securing the nation's highway infrastructure. This report addresses the extent to which federal entities have conducted and coordinated risk assessments; DHS has developed a risk-based strategy; and stakeholders, such as state and local transportation entities, have taken voluntary actions to secure highway infrastructure -- and the degree to which DHS has monitored such actions. To conduct this work, GAO reviewed risk assessment results and TSA's documented security strategy, and conducted interviews with highway stakeholders. Federal entities have several efforts underway to assess threat, vulnerability, and consequence--the three elements of risk--for highway infrastructure; however, these efforts have not been systematically coordinated among key federal partners and the results are not routinely shared. Several component agencies and offices within DHS and the Department of Transportation (DOT) are conducting individual risk assessment efforts of highway infrastructure vulnerabilities, and collectively have completed assessments of most of the critical highway assets identified in 2007. However, key DHS entities reported that they were not coordinating these activities or sharing the results. According to the National Infrastructure Protection Plan, TSA is responsible for coordinating risk assessment programs. 
Establishing mechanisms to enhance coordination of risk assessments among key federal partners could strengthen and validate assessments and leverage limited federal resources. DHS, through TSA, has developed and implemented a strategy to guide highway infrastructure security efforts, but the strategy is not informed by available risk assessments and lacks some key characteristics GAO has identified for effective national strategies. In May 2007, TSA issued the Highway Modal Annex, which is intended to serve as the principal strategy for implementing key programs for securing highway infrastructure. While its completion was an important first step to guide protection efforts, GAO identified a number of limitations that may influence its effectiveness. For example, the Annex is not fully based on available risk information, although DHS's Transportation Sector-Specific Plan and the National Infrastructure Protection Plan call for risk information to be used to guide all protection efforts. Lacking such information, DHS cannot provide reasonable assurance that its current strategy is effectively addressing security gaps, prioritizing investments based on risk, and targeting resources toward security measures that will have the greatest impact. GAO also identified a number of additional characteristics of effective national strategies that were missing or incomplete in the current Highway Modal Annex. Federal entities, along with other highway sector stakeholders, have taken a variety of actions to mitigate risks to highway infrastructure; however, DHS, through TSA, lacks a mechanism to determine the extent to which voluntary security measures have been employed to protect critical assets. Specifically, highway stakeholders have developed publications and training, conducted research and development activities, and implemented specific voluntary protective measures for infrastructure assets, such as fencing and cameras.
However, TSA does not have a mechanism to monitor protective measures implemented for critical highway infrastructure assets, although TSA is tasked with evaluating the effectiveness and efficiency of federal initiatives to secure surface transportation modes. Without such a monitoring mechanism, TSA cannot determine the level of security preparedness of the nation's critical highway infrastructure.
The Federal Food, Drug, and Cosmetic Act, as amended, prohibits the “misbranding” of food, which includes, among other things, labeling that is false or misleading. In 1990, Congress amended the act to mandate that certain nutrition information be provided on packaged foods in a specified, standardized format—only recently have other countries, such as Canada, initiated mandatory nutrition labeling. The act, and FDA regulations implementing it, require food labels to include nutrient, ingredient, and other important content information that consumers can use to make healthy dietary choices, and to avoid allergens (such as peanuts) and other ingredients (such as sulfites) that can cause life-threatening reactions in people who are sensitive to them. For example, the act and FDA’s regulations, with some exceptions, require that food labels include the following: a Nutrition Facts panel that identifies the serving size, the number of servings per container, the number of calories per serving, and the amount of certain nutrients, such as fiber, vitamins, fat, and sodium; an ingredients list that identifies the product’s ingredients by their common or usual names, in order of predominance by weight; the required information in English; and a declaration of the source (e.g., tree nuts) of major allergens. Figure 1 depicts an example of a Nutrition Facts panel from FDA’s regulations illustrating nutrition information and visual display.

The act and FDA regulations also require that health claims—that is, claims characterizing the relationship of certain nutrients to a disease or a health-related condition—on food labels be authorized by FDA. For example, a main dish that contains 140 milligrams (mg) or less of sodium per 100 grams may be labeled with the claim that “diets low in sodium may reduce the risk of high blood pressure, a disease associated with many factors,” provided there are no nutrients in the food at levels that would disqualify it from making this claim.
In regulations, FDA has authorized the use of claims for 12 relationships between a nutrient and a disease or health-related condition. For purposes of compliance, with certain exceptions, a food is subject to enforcement action under FDA regulations if the number of calories or the amount of certain nutrients, such as fat and sugar, is more than 20 percent over the amount declared in the Nutrition Facts panel. The Institute of Medicine established the reference nutrient values that FDA used (along with the Dietary Guidelines for Americans) to establish the daily values for nutrients on the Nutrition Facts panel. In addition, for compliance and enforcement purposes, the amount of certain nutrients naturally occurring in the food must be at least equal to 80 percent of the value declared on the label; the amount of added nutrients in fortified or fabricated foods must be at least equal to the amount shown on the panel. According to FDA, these variations are allowed because, for naturally occurring nutrients, values cannot be precisely controlled and depend on weather and soil conditions, among other variables; in addition, values will vary because different laboratories use different methods and testing devices.

FDA’s procedures for handling a product complaint require staff to obtain sufficient information from the complainant to evaluate the complaint and determine if it requires follow-up. Also, the complaint is to be documented in the Field Accomplishments and Compliance Tracking System (FACTS). For a food-labeling-related complaint, the information documented in FACTS should include, among other things, any injury, illness, or adverse event that was reported as having occurred as a result of incorrect labeling, and any follow-up actions.
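Taken together, the compliance tolerances above amount to a simple numeric check: calories and nutrients such as fat and sugar may not exceed the declared amount by more than 20 percent, naturally occurring nutrients must reach at least 80 percent of the declared value, and added nutrients must meet the full declared amount. A minimal sketch of that check follows; the function name and three-way categorization are our illustrative simplification, not FDA's actual compliance procedure:

```python
# Illustrative sketch (not FDA's actual method) of the compliance
# tolerances described above: a measured value is out of range if
# calories or limited nutrients (e.g., fat, sugar) exceed the declared
# amount by more than 20 percent, if a naturally occurring nutrient
# falls below 80 percent of the declared value, or if an added nutrient
# falls below the full declared amount.

def within_allowable_range(declared, measured, kind):
    """kind: 'limited' (calories, fat, sugar), 'natural', or 'added'."""
    if kind == "limited":
        return measured <= declared * 1.20
    if kind == "natural":
        return measured >= declared * 0.80
    if kind == "added":
        return measured >= declared
    raise ValueError(f"unknown nutrient kind: {kind}")

# Example: a label declaring 10 g of fat when 12.5 g is measured exceeds
# the 20 percent tolerance and would be subject to enforcement action.
print(within_allowable_range(10.0, 12.5, "limited"))  # False
print(within_allowable_range(60.0, 50.0, "natural"))  # True (50 >= 48)
```

In practice FDA's rules include exceptions and composite-sample procedures that this sketch omits.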
Complaints of significant illness or injury must receive immediate and thorough follow-up, while follow-up on those complaints that do not involve injury or illness may be deferred until the next scheduled inspection of the responsible firm, which may be in a few weeks, months, or several years.

Similarly, FTC authorities prohibit unfair or deceptive acts or practices in or affecting commerce, including false or misleading advertising of food products. In some cases, FDA and FTC have certain overlapping jurisdiction for regulating food advertising, labeling, and promotion. In a 1971 memorandum of understanding, the agencies agreed that FTC would exercise primary responsibility for ensuring that food advertising is truthful and not misleading, and that FDA would have primary responsibility for ensuring that food labeling is truthful and not misleading.

FDA’s use of oversight and enforcement tools has not kept pace with the growing number of food firms. As a result, FDA has limited assurance that companies in the food industry are in compliance with food labeling requirements, such as those prohibiting false or misleading labeling. FDA’s testing of nutrition information has been limited and has found varying degrees of compliance. Actions in response to labeling violations, such as issuing warning letters, have generally decreased or remained steady. In addition, FDA has not analyzed data on labeling violations and follow-up activities to inform its managers or the public. Furthermore, CFSAN has continued to maintain a duplicate food recall system that FDA had agreed to eliminate in response to a recommendation we made in a 2004 report.

While the number of domestic food firms has increased, FDA has not increased the number of its inspections in response to this increase (see fig. 2).
Also, FDA does not have reliable data on the total number of labels reviewed because investigators do not have to enter this information into the FACTS database, which documents other inspection details. In the absence of reliable data on the number of labels reviewed, and assuming that investigators were reviewing three labels each time, as FDA officials told us was the common practice, the number of labels reviewed would have declined with the decline in the number of inspections.

FDA has conducted few inspections of foreign food firms, and that number has declined significantly—from 211 in 26 countries in 2001 to 95 in 11 countries in 2007—even as the United States has received hundreds of thousands of different imported food product entry lines from tens of thousands of foreign food firms in more than 150 countries. (See app. II for information on the number of domestic and foreign food firms inspected under FDA’s jurisdiction during fiscal years 2001 through 2007.) Table 1 shows the number of countries and foreign food firms inspected over this period. Appendix III lists the countries and the number of inspections FDA conducted in each country, from fiscal years 2001 through 2007. In addition, FDA reported inspecting about 1 percent of the different food product entry lines that came into the United States annually during fiscal years 2002 through 2007. However, unlike investigators who perform inspections at manufacturing firms, the investigators who review labels on imported foods are not able to see the manufacturing process, the ingredients stored on shelves, the product formulation, and other documents that provide key information that helps to identify labeling violations.

While FDA has tested some targeted nonrandom samples of food products to determine the accuracy of nutrition information on their labels, it has tested relatively few food products from some major exporting countries.
In addition, FDA has done no random sampling since the 1990s, when its random sampling found that the measured amounts of some nutrients varied considerably from the amounts identified on the Nutrition Facts panel. From fiscal years 2000 through 2006, FDA collected targeted samples of 868 domestic products and 783 imported products for tests of compliance with nutrition labeling regulations. FDA was unable to provide information on samples taken and test results for fiscal year 2007 because, according to an agency official, the person who analyzed those data had retired from FDA. According to FDA officials, investigators often selected samples because they noticed obvious labeling violations, such as a candy bar with a Nutrition Facts panel that did not identify any fat or sugar. As table 2 shows, about 21 percent and 28 percent, respectively, of the domestic and imported foods tested were in violation. The number of samples of imported food FDA has tested for accuracy of nutrition labeling does not relate to the volume of imports or the rate of violations in products from a given country, as table 3 shows. One type of food with a high percentage of violations was infant formula—4 of the 10 formula products sampled were in violation—because they lacked the vitamins, minerals, or other nutrients required by law.

While FDA has conducted targeted, nonrandom sampling of labels on imported and domestic food products suspected of having inaccurate information (beyond the allowable ranges) for nutrients listed on their labels, FDA has not conducted random sampling on nutrition labeling since the 1990s. In 1994 and again in 1996, FDA tested 300 randomly selected products to determine the extent to which nutrient information on the Nutrition Facts panel was within the allowable range. According to FDA’s analysis of these products, 87 percent (in 1994) and 91 percent (in 1996) of the nutrients were within the allowable range. However, compliance rates varied significantly for a few nutrients.
For example, in 1994 and 1996, respectively, 48 percent and 47 percent of the samples were not within the allowable range for vitamin A; 48 percent and 12 percent of the samples were not within the allowable range for vitamin C; and 32 percent and 31 percent of the samples were not within the allowable range for iron. These variances are important because consuming too much or too little of certain vitamins and iron may have adverse health consequences. FDA officials cited resource constraints and other priorities as reasons for not updating these studies and told us that FDA has no plans for future studies.

FDA has available several tools to ensure that food labeling complies with requirements: (1) issuing warning and untitled letters and holding regulatory meetings and (2) taking enforcement actions—seizures, injunctions, import refusals, and import alerts. However, we found that FDA’s use of these tools has generally declined or held steady. From fiscal years 2002 through 2007, FDA issued 463 warning letters to firms with serious violations that included food labeling violations—often with other food-safety-related violations—notifying them that enforcement actions might be forthcoming if corrections were not made. The number of warning letters issued annually that included food-labeling-related violations held relatively steady during the period. On the other hand, the number of letters issued for all FDA-regulated products (e.g., food, drugs, and medical devices) decreased by nearly half—from 806 letters in fiscal year 2002 to 434 in fiscal year 2007. However, as we conducted our study, FDA continued to find additional warning letters that had been issued for fiscal years 2002 through 2007. In addition, according to FDA, its Fiscal Year 2007 Enforcement Story reported 471 warning letters for 2007. Thus, the number of food-labeling-related warning letters, as well as total FDA warning letters, may be higher than we report.
Figure 3 shows the number of warning letters issued annually for fiscal years 2002 through 2007. The labeling-related warning letters addressed violations for different product types—including candy, baked goods, seafood, and juice drinks—that were identified through inspections or testing product samples. About 52 percent (241 of 463) of the letters were for dietary supplements. Of the 463 food-labeling-related warning letters, 326 cited specific violations of the misbranding provision of the Federal Food, Drug, and Cosmetic Act; the other 137 letters cited other statutory provisions and regulations. As shown in table 4, the 326 letters that cited the misbranding provision included references to 677 violations in 15 different categories. FDA officials explained that they try to focus their oversight efforts on the labeling violations of public health significance and on the types of products with widespread or persistent violations. For example, on October 17, 2005, FDA issued 29 warning letters to manufacturers of cherry juice and other fruit products for unapproved claims related to diseases, and 25 letters on October 12, 2006, to makers of dietary supplement products that had drug claims or unauthorized health claims.

FDA officials told us that warning letters are an important and very public tool for ensuring compliance with FDA regulations and alerting other companies of practices that are not acceptable. Furthermore, FDA, in accordance with Freedom of Information Act requirements, makes these letters available on its public Web site. However, we found several problems with FDA’s public dissemination of warning letters that call into question the accuracy of its numbers. For example, we tested the reliability of this database and found that it was missing over 220 warning letters. When we brought the missing letters to their attention, FDA officials told us they posted them.
Although FDA officials assured us that the database was complete and accurate, in February 2008 and later, we found duplicate letters in the database as well as additional letters that had been issued during fiscal years 2006 and 2007. Therefore, the number of warning letters posted on FDA’s Web site for fiscal years 2002 through 2007 may be different from the number shown in figure 3. In April 2008, FDA officials told us they were continuing to work on the database and to discuss potential process improvements to help ensure that all letters are posted.

In fiscal year 2001, FDA issued nearly twice as many warning letters for all violations as it did in fiscal year 2002. FDA officials attributed the decrease in warning letters, in part, to new policies that transferred the approval of warning letters from FDA centers and districts to the Office of Chief Counsel. FDA officials told us that the target turnaround time for issuing a warning letter—the elapsed time between the day officials identify the violation, either through an inspection, laboratory test, or illness outbreak investigation, and the day FDA issues a warning letter—is about 4 months. This is a nearly fourfold increase over the 30-workday target time we reported in February 2005. A longer lag time to issue a warning letter increases the number of days for which consumers may consume the misbranded food before FDA posts these serious problems on its Web site.

In addition, FDA estimated that it has sent one-third as many untitled letters—correspondence citing violations that FDA deems as not warranting a warning letter—as warning letters. We did not assess untitled letters because FDA did not centrally track the letters in a database, nor did it maintain copies centrally until fiscal year 2008. Regarding regulatory meetings, FDA could not tell us how many were held because these meetings are handled exclusively by the district offices and are not centrally tracked.
FDA does not receive any information on the extent to which districts are using these meetings and whether the different field offices are using the same criteria for these meetings. FDA has taken few enforcement actions—seizures, injunctions, and import refusals—for food labeling violations and has issued a number of labeling-related import alerts. FDA was able to provide us with data on seizures and injunctions for 10 years and on import refusals and import alerts for 6 years.

Seizures: In fiscal years 1998 through 2007, FDA initiated actions that resulted in court seizures of 21 products in domestic commerce for food-labeling-related violations. Of the 21 seizures, most were of imported products. Olive oil, dietary supplements, and mushrooms were the most frequently seized products.

Injunctions: According to FDA documents, the courts enjoined two companies in response to possible labeling violations for fiscal years 1998 through 2007. On February 3, 2006, FDA obtained a consent decree of permanent injunction against Natural Ovens Bakery, Inc., for allegedly introducing misbranded foods, including dietary supplements, and misbranded and unapproved drugs into interstate commerce and for causing foods to become misbranded. According to FDA documents, the injunction was obtained after a 20-year history of noncompliance with FDA regulations, and 3 years after an April 8, 2003, warning letter that FDA's Minneapolis District Office had issued in response to inspections conducted in December 2002, February 2002, and September 2001. The other was a consent decree of permanent injunction, entered in September 2003 against a dietary supplement manufacturer—Hi-Tech Pharmaceuticals, Inc.—for allegedly labeling dietary supplements with drug claims, which violated food labeling requirements and caused FDA to have to regulate the supplements as drugs and, specifically, as unapproved new drugs. FDA considered this injunction to be food-labeling-related.
Import refusals: FDA refused entry to 15,226 imported food product entry lines that had labeling violations from fiscal years 2002 through 2007. In fiscal year 2002, while FDA examined the fewest labels, it refused entry to the highest percentage of foods; conversely, in fiscal year 2005, FDA examined the greatest number of labels and refused entry to the lowest percentage of foods over the 6-year period. In addition, over this period, 14,851 products that had labeling violations were released “with comments”—meaning that FDA allowed the shipment with a labeling violation to enter the United States with notice to the importer that subsequent shipments could be refused entry if the violation was not corrected. Releases with comments are intended to cover deficiencies that FDA regards as minor and not significant to health. If FDA finds additional imports of one of these products with the same violation 60 or more days after the earlier shipment is released with comments, FDA may consider detention, according to FDA officials. (See table 5.)

For import refusals, the most frequent labeling violations cited were the lack of required nutrition information (25 percent); the failure to list the common or usual name of each ingredient (18 percent); the failure to accurately state the product’s weight, measure, or numerical count (13 percent); and the failure to provide the label in English (12 percent). (See table 6.) Of the nine countries with the greatest value of agricultural, fish, and seafood imports to the United States in fiscal year 2006, Canada was the largest—with a total value of $15.6 billion; Mexico was second with $9.8 billion, followed by China with $4.2 billion.
As shown in table 7, during fiscal years 2002 through 2007, Canada also had the most food labels reviewed (45,377) and the lowest rate of import refusals (2.6 percent) where a labeling violation was cited, while Australia had the fewest label reviews (697) and the highest rate of import refusals (14.3 percent) where a labeling violation was cited.

Import alerts: As of January 28, 2008, FDA gave us information on active import alerts for 64 food products that officials characterized as labeling violations. For example, FDA issued import alerts for several different types of biscuits imported from India that did not use the common or usual name for ingredients. Once a product is on the import alert list, FDA does not remove it until the firm appears to have corrected the violation, according to FDA officials. Twenty of the 64 products on import alert were added during fiscal year 2007, and 1 of the remaining 44 alerts had been in effect since April 2000. In technical comments on a draft of this report, FDA indicated that 64 alerts seemed too low and that it may not have provided us with all import alerts for labeling violations. However, FDA did not provide additional information or documentation on those alerts.

FDA does not centrally track or analyze data on potentially serious labeling violations or firms’ actions to correct those violations. We repeatedly requested any routine reports on labeling compliance that FDA managers used to help them carry out their program oversight responsibilities. However, according to officials, they do not generate such routine reports due, in part, to resource limitations and to limitations in FDA information systems. For example, over the past decade, FDA has never analyzed the results of the laboratory tests on the accuracy of labeling information (e.g., the Nutrition Facts panel and declared allergens) on domestic and imported foods.
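The country-level rates cited for table 7 are simple ratios of refused entry lines to labels reviewed. A minimal sketch follows; the refused counts are back-computed from the report's percentages for illustration and are not actual FDA figures:

```python
# Illustrative computation of import-refusal rates like those in table 7.
# Refused counts below are back-computed from the reported percentages,
# not taken from FDA data.

def refusal_rate(refused_lines, labels_reviewed):
    """Share of reviewed labels that led to a refusal citing a labeling violation."""
    return refused_lines / labels_reviewed

reviews = {
    "Canada": (1180, 45377),   # about 2.6 percent, the lowest rate
    "Australia": (100, 697),   # about 14.3 percent, the highest rate
}
for country, (refused, reviewed) in reviews.items():
    print(f"{country}: {refusal_rate(refused, reviewed):.1%}")
```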
An official said they had always wanted to develop computer programs that would identify trends, but did not have the staff to do so. Also, FDA does not routinely analyze and report on trends in labeling violations. As a result, FDA managers do not have important information to inform their decision making on setting priorities for overseeing compliance with labeling requirements and allocating resources for labeling program activities. Furthermore, FDA does not provide consumers and others with important information on its public Web site to help inform their food purchasing decisions. As we have previously noted, FDA has not kept its Web site’s postings of warning letters current and complete. In addition, although FDA maintains data on import refusals and warning letters, its Web site does not provide the public with summary information on, and trends in, serious labeling violations by, for example, product type, company, and country.

In addition, from fiscal years 2001 through 2007, FDA documented approximately 2,600 complaints from consumers on food labeling issues in FACTS—its compliance tracking system. These data included complaints that ingredients—such as allergens—in the food were not listed on the label and may harm consumers’ health. However, the data concerning complaints were not entered into FACTS in a way that would facilitate analysis. Specifically, standard terminology was not used, and information on complaint resolutions was captured in different data fields. As a result, FDA program managers cannot readily use these FACTS data to track the timely and appropriate resolution of consumer labeling complaints.

According to our analysis of FDA’s Recall Enterprise System (RES) database, 409 of the 1,295 food product recalls that firms carried out during fiscal years 2003 through 2007 listed food labeling violations, such as failing to list added chemical preservatives on labels, as a factor.
While food labeling was listed as a reason in each of the 409 recalls, it was not necessarily the only reason nor was it necessarily the most serious violation. In addition, almost 57 percent of the labeling-related recalls were for violations that FDA classifies as high risk—that is, posing a reasonable probability of causing serious adverse health consequences or death—such as labels that fail to identify certain allergens in the food, such as tree nuts, that are potentially deadly to individuals who are sensitive to them. However, CFSAN maintains an unofficial database of food recalls and reported that it was able to identify more labeling-related recalls than we did in using the official RES.

In the course of our work, we learned that CFSAN has continued to maintain this unofficial database for food recalls apart from the official RES. In October 2004, we first reported CFSAN’s use of this duplicative recall database and the discrepancies between the unofficial data and the official data. At the time, CFSAN program staff told us they used the unofficial database to generate reports for Congress because it contained the most accurate data. We pointed out that keeping the second database raised significant questions about the validity and reliability of the official system. We also pointed out FDA’s substantial investment in the RES and the duplication of resources spent maintaining two separate data systems. Although FDA agreed with our recommendation to eliminate the duplicative recall database, it has continued using resources to maintain the second system—resources that could be used on other CFSAN work.

When FDA provided the RES data electronically for our independent analysis, officials told us it was the official source for CFSAN recalls, including the food-labeling-related recalls. We developed criteria for selecting labeling-related recalls on the basis of various labeling terms and sections on food labeling in the Federal Food, Drug, and Cosmetic Act.
FDA agreed that our criteria for identifying labeling-related recalls were valid. In December 2007, FDA provided final fiscal year 2007 data to complete our analysis. Subsequently, in April 2008, as part of our quality assurance procedures, we provided FDA with our list of labeling-related recalls to review for completeness. CFSAN officials informed us in May 2008 that by using their unofficial database, they identified about 250 food-labeling-related recalls that were not in our list. Of the 250, 171 were in the official system data but were not captured by the criteria we used. Regarding the remaining 79 recalls, we were unable to locate them in the RES data provided to us. In technical comments on a draft of this report, FDA noted that the 79 recalls had coding differences. However, FDA did not provide us with the codes that corresponded to the RES data. We had originally thought that these 79 recalls were missing from the official database and, therefore, were not posted on the FDA public Web site—thus, we drafted a recommendation that FDA post all recalls in a timely manner. However, after FDA commented that the differences could be due to coding, we deleted this recommendation. It appears that the 409 labeling-related recalls we identified may be a minimum number and, thus, may understate the number of recalls with labeling violations. Because we did not receive the unofficial database, we did not independently analyze it or assess its validity and reliability.

FDA’s Science Board Advisory Committee report, the Commissioner’s May 2008 resource needs assessment, and the Food Protection Plan cite challenges to FDA’s efforts to carry out food safety and other food-related responsibilities, in part, because its resources have not kept pace with its increasing responsibilities—challenges that directly impact its oversight of labeling requirements.
In addition, FDA does not have certain authorities that it reports would allow it to better leverage resources and carry out its food-related missions. These authorities could help FDA administer and enforce the food labeling requirements. According to the Science Board report, the demands on FDA have soared, but resources have not increased in proportion to demand. In the May 2008 resource needs assessment, FDA’s Commissioner identified the immediate need for additional resources—for improvements in FDA’s science, information technology, and program capabilities—to ensure the safety of FDA-regulated imports and protect the food supply. Likewise, the Food Protection Plan describes FDA’s ever-expanding responsibilities—such as safeguarding the evolving food demands of consumers; overseeing the increasing volume, variety, and sources of imported food; and staying ahead of the emerging threats to food safety and security—and the skills, technologies, and initiatives that the agency is planning in order to meet these new challenges. However, as we have testified, it is unclear what the total costs will be to fully implement the plan; thus, we continue to have concerns about FDA’s lack of specificity on its resource needs.

Although FDA received increased funding for new bioterrorism-related responsibilities following September 11, 2001, staffing levels for CFSAN have declined since then and funding (in constant dollars) has stagnated. Between fiscal years 2003 and 2007, the number of FTE employees in CFSAN headquarters dropped about 20 percent, from 950 to 763, and inspection and enforcement staff decreased by about 19 percent, from 2,217 to 1,806 (see app. IV). While funding in nominal dollars increased from $406.8 million in 2003 to $457.1 million in 2007, when adjusted for inflation, funding in the 2 years is nearly the same—$465.7 million and $465.8 million, respectively—in constant 2008 dollars.
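The funding figures above can be restated as growth rates: nominal funding rose about 12 percent between fiscal years 2003 and 2007, while funding in constant 2008 dollars was essentially flat. A quick check using only the report's numbers:

```python
# Nominal vs. constant-dollar (real) growth in the food funding cited
# above, in millions of dollars. All four figures come from the report.
nominal_2003, nominal_2007 = 406.8, 457.1       # nominal dollars
constant_2003, constant_2007 = 465.7, 465.8     # constant 2008 dollars

nominal_growth = (nominal_2007 - nominal_2003) / nominal_2003
real_growth = (constant_2007 - constant_2003) / constant_2003

print(f"nominal growth: {nominal_growth:.1%}")  # about 12.4%
print(f"real growth: {real_growth:.1%}")        # about 0.0% -- stagnation
```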
At the same time, as we have previously noted, the number of FDA-regulated domestic food firms increased more than 10 percent—from about 58,270 in 2003 to about 65,520 in 2007. Also, the number of different imported food product entry lines has tripled in the past 10 years, and imports account for 15 percent of the food supply. Appendix IV provides detailed information on FDA funding and FTEs for each center. For fiscal years 1999 through 2007, the FTE staff years for the Office of Nutrition, Labeling, and Dietary Supplements reached their highest level in 2002 (88) and their lowest in 2007 (65), according to data provided by FDA finance and other officials. Within the office, funding and staffing for food labeling activities, as estimated by an FDA finance official, have remained fairly steady since fiscal year 2005, the first year for which FDA staff were able to separate resources for labeling-related activities from other Office of Nutrition, Labeling, and Dietary Supplements work (see table 8).

FDA’s Science Board reported on the growing disparity between FDA resources and responsibilities. Noting that the demands on FDA have soared, while resources have not increased proportionately, the committee concluded that the disparity has made it increasingly “impossible” for FDA to maintain its historic public health mission. In the May 2008 resource needs assessment, the FDA Commissioner identified the immediate need for additional staff to enable the agency to achieve its food-safety-related goals. Such staffing would also benefit the administration and enforcement of food labeling requirements. In addition, according to FDA officials, the agency generally does not address misleading food labeling because it lacks the resources to conduct the substantive, empirical research on consumer perceptions that it would need to legally demonstrate that a label is misleading, as the agency believes is required by court rulings, such as Pearson v. Shalala, which is discussed in appendix V.
The Food Protection Plan identified a number of legislative changes—new authorities that FDA recognized it needed—including, among others, the authority to charge user fees for certain reinspections, accredit third-party inspectors for certain reviews, and mandate recalls when voluntary recalls are not effective. FDA has these authorities for certain other products it regulates but not for food labeling activities or most food oversight efforts. In addition, FDA has never used its detention authority under the Bioterrorism Act of 2002 to detain potentially dangerous food because, according to the agency, its other authorities and regulatory tools have been adequate to date to protect public health.

Several FDA centers have the authority to collect user fees for particular activities. For example, FDA’s Center for Devices and Radiological Health has the authority to collect and retain user fees from firms for reviewing and approving premarket applications for medical devices. The center uses the fees to offset the costs of reviewing and approving these applications and to increase staffing levels. In its Fiscal Year 2009 Justification of Estimates for Appropriations Committees for FDA, HHS proposed a reinspection user fee on food industry firms that fail to meet important manufacturing and food safety requirements. This fee would cover the full cost of reinspections and the associated follow-up work. We have presented various ways to design user fees to encourage greater efficiency, equity, and revenue adequacy and to reduce the administrative burden on the agency and payers of the fees. For example, the extent to which a program is funded by user fees should generally be guided by who primarily benefits from the program. If a program primarily benefits the general public (e.g., national defense), it should be supported by general revenue, not user fees; if it primarily benefits identifiable users, such as customers of the U.S.
Postal Service, it should be funded by fees; and if a program benefits both the general public and users, it should be funded in part by fees and in part by general revenues. The guide may provide useful direction to FDA as it proceeds with its proposed reinspection user fee. (Funding data presented in app. IV also show user fees collected by some FDA centers.) Regarding the authority to accredit qualified third-party inspectors, which the Food Protection Plan states will allow FDA to allocate inspection resources more effectively, FDA plans to use these highly qualified parties to, among other things, carry out certain voluntary reviews in foreign food facilities, where few inspections and label reviews are currently done. As we testified in May 2008, FDA’s Center for Devices and Radiological Health has accredited third-party organizations to conduct voluntary inspections of foreign firms that manufacture medical devices, and these third parties completed six inspections in 4 years. We noted that an incentive for firms to participate included the opportunity to reduce the number of inspections conducted to meet FDA’s and other countries’ requirements. Disincentives include bearing the cost of the inspections and the potential consequences that could include regulatory action. We further noted that the small number of inspections raised questions about the practicality and effectiveness of using accredited third-party inspectors to quickly help FDA increase the number of foreign firms inspected. The Food Protection Plan does not describe how FDA expects to design and implement the proposed accredited third-party inspection program to inspect foreign food firms or how this proposal will help it leverage resources. In contrast, USDA uses third-party Agricultural Commodity Meat Graders—contracted for their expertise—to carry out certain reviews in its livestock and meat grading and certification programs. 
FDA’s Food Protection Plan also asserts that the agency needs mandatory recall authority for food. It has this authority for infant formula and medical devices that present a health hazard. Other agencies, such as the National Highway Traffic Safety Administration and the Consumer Product Safety Commission, use their recall authority to help protect consumers from products that can cause serious injuries, such as unsafe infant car seats. We have previously proposed that Congress consider giving FDA mandatory food recall authority. The Bioterrorism Act of 2002 gave FDA the authority to administratively detain any article of food found during an examination, inspection, or investigation, including for labeling and other violations, if it has credible evidence or information indicating that the article of food presents a threat of serious adverse health consequences or death. However, FDA has never used this authority. According to the agency, its other authorities and regulatory tools, such as its authority to refuse entry of imports under section 801 of the act, have been adequate to date to protect public health. In contrast, USDA has detention authority for meat and poultry products in interstate commerce that its FSIS uses to prevent shipments under its jurisdiction from entering U.S. commerce, if the agency has reason to believe that the food is adulterated or misbranded. USDA reported that, from July through September 2006, its import investigators detained 15 shipments—about 9,500 pounds—of imported meat products. FDA officials acknowledged that implementing the Food Protection Plan will require additional resources, and that FDA will need to partner with Congress to obtain the additional statutory authorities to transform the safety of the nation’s food supply. However, as we testified in May 2008, FDA’s congressional outreach strategy is general. 
When we asked FDA officials if they had a congressional outreach strategy, officials said that they had met with various committees to discuss the Food Protection Plan. When we asked if they had provided draft language to congressional committees on the various authorities, FDA officials explained that they had only provided technical assistance, such as commenting on draft bills, to congressional staff when asked. Key stakeholders—officials from health, medical, and consumer organizations in the United States and Europe—advocate a uniform front-of-package symbol to help consumers select healthy food and avoid misleading or confusing labeling. Some U.S. trading partners have implemented voluntary front-of-package nutrition symbols and several U.S. manufacturers and groceries are using front-of-package symbols. In addition, many stakeholders identified or petitioned FDA for other actions that they believe FDA should pursue to avoid misleading labeling and help consumers identify nutritious foods. Some stakeholders noted that taking such actions may require FDA to redirect resources. Consumers have reported understanding certain labeling terms, such as “sugar” and “vitamins,” and finding benchmarks (such as daily reference values) helpful in comparing products, but they generally found nutrition labeling confusing, especially certain technical and numerical information, according to a recent synthesis of nutrition studies. For example, consumers had difficulty in understanding the role that nutrients played in their diet, and the relationship between sugar and carbohydrates as well as the terms “cholesterol” and “fatty acids.” While a few studies suggest that many consumers look at Nutrition Facts panels when they buy food for the first time, some studies suggest that consumers may simply look at the information but not process it further. 
The National Academies’ Institute of Medicine, which is often called on to advise federal agencies on health issues, reported in 2006 that there is little evidence that the information on food labels has a significant impact overall on eating or food purchasing. The institute had previously recommended that FDA and others increase research on the nutrition label and pointed out that manufacturers’ use of nutrition symbols underscores the need to improve strategies for using the food label as an educational tool. In addition, in a November 2007 letter to FDA, the American Medical Association (AMA) stated that there is evidence that consumers have difficulty in making appropriate judgments about which foods are the healthiest. Several major health and consumer organizations in the United States, as well as in Canada and Europe, advocate mandatory, uniform front-of-package nutrition rating systems to help consumers select healthy foods. In the United States, the AMA and the American Heart Association advocate such a system, and the Institute of Medicine’s 2006 report recommended that food and beverage companies work with government, scientific, public health, and consumer groups to develop and implement an industrywide system. Furthermore, to help consumers choose more nutritious foods, the scientists with expertise in nutrition and public health who developed the 2005 Dietary Guidelines for Americans expressed concern that consumers did not have a scientifically valid system to show nutrient density on food labels, and recommended that HHS and USDA develop this system. In addition, the Center for Science in the Public Interest petitioned FDA in 2006 to develop a simple, uniform, science-based rating system that could be graphically represented on the front of food packages to give consumers consistent, reliable nutrition information. 
Although the European Union does not require nutrition labeling for all foods, it does require it on foods that have health or nutrition claims or that have voluntarily added vitamins or minerals, according to a European Union official. In addition, several countries, including the United Kingdom, the Netherlands, and Sweden, have implemented voluntary, front-of-package nutrition labeling systems, while Canada is proposing research on how such systems influence food purchases, among other things, and consulting stakeholders. The European Commission has proposed a mandatory, front-of-package labeling system. Figure 4 shows the front-of-package nutrition symbols for systems in the United Kingdom, the Netherlands, and Sweden, which help consumers in those countries identify healthy foods. Consumers and health organizations in many countries have a heightened interest in the benefits of choosing healthy foods, including several that have implemented (see fig. 4) or are considering front-of-package nutrition labeling systems. For example: The United Kingdom: The Food Standards Agency implemented a voluntary front-of-package traffic light symbol to help consumers distinguish between the healthiest choices (green light), less-healthy choices (amber light), and least healthy choices (red light) with respect to fat, saturated fat, salt, sugars, and usually calories, as well. Officials report that preliminary sales data suggest that this system is influencing consumers’ purchases toward healthier products. In addition, manufacturers are developing new products and reformulating less-healthy products so that their foods may move into the amber or green light category, according to U.K. officials. The United Kingdom’s National Heart Forum (an alliance of 50 heart health organizations) has endorsed the traffic light system. 
The Netherlands: The Netherlands uses a voluntary front-of-package “healthy choice” symbol, which was developed by the food industry and endorsed by the Ministry of Health. According to a Ministry official, standards for applying the symbol vary by food category, taking into account the characteristics of each category—for example, fiber is included in the criteria for bread products. A foundation was established—the Choices International Foundation—to introduce the symbol to other countries. The qualifying criteria for using the symbol will be reevaluated every 2 years by an independent scientific committee, according to the official. Sweden: The National Food Administration uses a voluntary front-of-package keyhole logo to identify the healthiest foods within particular food categories. Products that carry the symbol are lower in fats, sugars, and sodium and contain more fiber than other foods within the same category. According to agency officials, the introduction of the keyhole logo resulted in the development of healthier products and the continuous reformulation of existing products. Canada: The House of Commons’ Committee on Health’s 2007 report, Healthy Weights For Healthy Kids, recommended that the country’s health agency—Health Canada—phase in a mandatory, standard, simple, front-of-package labeling requirement for prepackaged food, starting with foods advertised primarily to children. In addition, the Chronic Disease Prevention Alliance of Canada supports this recommendation. As of April 2008, Health Canada commented that it is taking several steps, including consulting with stakeholders and proposing consumer research on, among other things, front-of-package symbols. European Union: The European Commission has proposed legislation that would require prepackaged food to display information on calories, fat, saturated fat, carbohydrates, sugars, and salt on package fronts, according to documents released by the commission. 
A commission official told us that member states would still be able to promote additional national front-of-package labeling systems if they comply with requirements of the proposed legislation. The European Union’s Commissioner for Health stated that food labels can have a huge influence on consumers’ purchasing decisions, and confusing, overloaded, or misleading labels can be a hindrance to consumers. The European Heart Network (an alliance of 30 heart health organizations in 26 countries) and the European Consumers’ Organization also support mandatory front-of-package labeling. In the United States, health and consumer associations have developed nutrition symbols to help consumers. For example, the American Heart Association developed the heart-check logo to help consumers identify heart-healthy foods. Currently, over 800 products from over 100 companies use the logo, and one major line of foods was developed with the heart-check criteria as a key driver, according to the association. While most companies reformulate products before applying for the logo certification, the association also works with companies on 20 to 40 products a year to help them meet its criteria. In addition, the Whole Grains Council, a nonprofit consumer group working to increase consumption of whole grains, developed the Whole Grain Stamp to identify products with at least a half serving of whole grains, with the grams of whole grain specified. A “100%” banner can be placed on the stamps when all of the grain is whole grain. The stamps have been used on over 1,700 products from 180 companies in the United States, Canada, and the United Kingdom. In addition, manufacturers have developed numerous symbols to market their foods to health-conscious consumers, and supermarkets have used symbols to help consumers identify healthier foods. 
At a September 2007 FDA public hearing on front-of-package and other nutrition symbols, several manufacturers and supermarket chains reported increased sales and reformulations associated with their use of nutrition symbols. For example, Kraft has reported that the more than 500 products carrying its Sensible Solution symbol accounted for a sizable portion of its overall revenue growth. Hannaford, a northeastern supermarket chain, reported that it improved the nutrient quality of its store brand products before introducing its symbol for nutrition quality that it calls Guiding Stars, which is based on mathematical formulas giving a weighted value to many nutrients. Hannaford also reported increased sales for products with stars. According to the Institute of Medicine, however, the consistency, accuracy, and effectiveness of the proprietary graphics currently in use have not been evaluated or empirically validated, and they may fall short of their potential as guides to more nutritious choices. Many stakeholders also share a concern about the proliferation of such graphics. FDA officials told us that the agency assigned an individual part time to focus on research on nutrition symbols. In comments, FDA told us it has completed one study. In addition, FDA plans to issue a summary of the 2007 public hearing and to identify gaps in the information that stakeholders provided at FDA’s request during or after the hearing. The Grocery Manufacturers/Food Products Association opposes mandatory front-of-package nutrition symbols and maintains that nutrition symbols should continue to be voluntary because the industry’s use of symbols to communicate nutrition information is truthful, not misleading, and consistent with FDA’s clear regulations for making representations about nutrition. 
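Hannaford describes Guiding Stars only as being based on mathematical formulas that assign weighted values to many nutrients; the actual formulas are proprietary. The following is a minimal, purely illustrative sketch of how such a weighted nutrient-scoring system could work in principle. Every weight, nutrient name, and star cutoff below is a hypothetical assumption for illustration, not Hannaford's methodology.

```python
# Illustrative sketch of a weighted front-of-package nutrient score.
# All weights, nutrient keys, and star cutoffs are hypothetical;
# the actual Guiding Stars formulas are proprietary and differ.

# Positive weights reward beneficial nutrients; negative weights
# penalize nutrients to limit. Amounts are taken per 100 calories.
WEIGHTS = {
    "fiber_g": 2.0,         # hypothetical reward for fiber
    "whole_grain_g": 1.0,   # hypothetical reward for whole grains
    "sat_fat_g": -2.5,      # hypothetical penalty for saturated fat
    "sodium_mg": -0.01,     # hypothetical penalty for sodium
    "added_sugar_g": -1.5,  # hypothetical penalty for added sugars
}

# Hypothetical score thresholds mapping a score to 0-3 stars.
STAR_CUTOFFS = [(6.0, 3), (3.0, 2), (1.0, 1)]

def nutrient_score(nutrients_per_100kcal):
    """Weighted sum of nutrient amounts; missing nutrients count as 0."""
    return sum(WEIGHTS[k] * nutrients_per_100kcal.get(k, 0.0)
               for k in WEIGHTS)

def stars(nutrients_per_100kcal):
    """Map a weighted score onto a 0-3 star rating."""
    score = nutrient_score(nutrients_per_100kcal)
    for cutoff, n_stars in STAR_CUTOFFS:
        if score >= cutoff:
            return n_stars
    return 0

# Example profiles: a high-fiber cereal vs. a salty, sugary snack.
cereal = {"fiber_g": 4.0, "whole_grain_g": 10.0, "sodium_mg": 120.0}
snack = {"sat_fat_g": 3.0, "sodium_mg": 400.0, "added_sugar_g": 8.0}
```

Under these assumed weights, the cereal's rewards for fiber and whole grains outweigh its sodium penalty and earn the top rating, while the snack's penalties drive its score below every cutoff. The Institute of Medicine's point in the passage above is precisely that such proprietary weightings have not been empirically validated, so different weight choices can produce different ratings for the same product.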
According to the association, in recent years, many food companies have reformulated thousands of food products to improve their nutrient profiles, and many manufacturers are using symbols and related graphic designs on labels to supplement the Nutrition Facts panel. In addition, the Keystone Center, an industry-funded nonprofit organization, has held discussions to determine whether it should develop a voluntary front-of-package system. In 2007, the center convened a group of experts from industry, government, consumer, and academic organizations to study the various systems used in the United States. As of July 2008, this group had not released information on the status of its effort. According to FDA officials, FDA acts as an observer in this group. However, FDA has not yet collaborated with the relevant federal agencies and stakeholders with nutrition expertise to evaluate labeling approaches and options. Several medical, health, and consumer association stakeholders suggested FDA actions that they believe would mitigate misleading and confusing labeling. While some stakeholders noted that these actions may require FDA to redirect resources, they also believe such actions would help consumers identify healthy foods. Eliminate qualified health claims: Stakeholders, such as the AMA, have suggested that FDA eliminate the use of qualified health claims on food labels because consumers cannot distinguish among the four levels of scientific support that FDA uses—significant scientific agreement, scientific evidence that is not conclusive, limited scientific evidence that is not conclusive, and very little scientific and preliminary evidence. According to the stakeholders, these claims confuse or mislead consumers and may encourage the consumption of foods with little or no health benefits. This view was supported by findings from 2005 and 2007 FDA studies. 
In commenting on a draft of this report, FDA questioned whether it had the authority to eliminate the use of such claims. See appendix V for more information on FDA’s administration of health claims. Establish criteria for characterizing the amount of whole grains in food: The use of the term “whole grain” increased in popularity after the 2005 Dietary Guidelines underscored the importance of these foods in the American diet. Some studies suggest that consumers, as well as dieticians and other nutrition experts, cannot accurately identify which foods are primarily whole grain. In 2004, General Mills, Inc., petitioned FDA to establish criteria for the phrases “excellent source of whole grains,” “good source of whole grains,” and “made with whole grains” to help prevent false or misleading labeling of grain products. FDA denied the petition, but it acknowledged the need for action and stated that claims such as “good source” have been used only with regard to nutrients—not foods—and that FDA needs to consider how to classify different kinds of statements and whether public comments are needed. In 2006, FDA developed draft guidance that identified what foods it considered “whole grain.” FDA officials stated that they expect to continue work on this issue when they can hire additional staff. Prohibit foods that contain substantial amounts of saturated fat from being labeled as “trans fat free”: FDA has not objected to products being labeled as “trans fat free” that have less than 0.5 grams of trans fat per serving, and does not restrict the amount of saturated fat in “trans fat free” foods. However, as stakeholders pointed out, saturated fat, like trans fat, raises low density lipoprotein (LDL or “bad cholesterol”) levels in the blood, increasing the risk of heart disease. 
Initially, FDA proposed limiting “trans fat free” labeling to foods with less than 0.5 grams of saturated fat, but FDA later stated that insufficient scientific information existed to support whether 0.5 was the appropriate level. FDA is evaluating available research to determine how to best address the issue. Require the labels of foods commonly consumed in one sitting to show total calories, fat, and other nutrition information: Several health and consumer stakeholders believe consumers may be misled by Nutrition Facts panels for foods, such as large sodas, candy bars, muffins, and other foods, that are normally consumed in one sitting, but are labeled as two or more servings. In 2005, the Institute of Medicine recommended that FDA revise requirements so that foods typically consumed in one sitting prominently display the total calorie content of the product as well as the standard per-serving format. Industry-sponsored research found that the participants in four focus groups generally favored the listing of nutrients for the whole container, although some wanted nutrients listed for both the full container and per serving. In April 2005, FDA published an advance notice of proposed rulemaking requesting comments on this issue. In 2008, FDA noted that it needed to review the comments submitted in response to the 2005 notice, and to coordinate this area with its plans to revise the daily intake reference values (used to establish the daily values for the Nutrition Facts panel) described in a 2007 advance notice of proposed rulemaking. The Grocery Manufacturers/Food Products Association opposes requiring nutrition information for the entire contents of the package on the food label, noting that nutrition information for the entire package would give consumers “permission” or “encouragement” to eat the entire package. 
Clarify the definition of “natural” as it applies to food: The Sugar Association has petitioned, with the support of the Center for Science in the Public Interest and others, that FDA define the term “natural” on the basis of USDA’s definition, as articulated in its Food Standards and Labeling Policy Book. USDA policy defines “natural” to permit only minimal processing, including roasting, drying, and fermenting, to preserve or make food edible. Under this USDA policy, foods that go through certain processes, such as chemical bleaching, that fundamentally alter the raw product, are not considered “natural.” Both groups assert that FDA allows manufacturers to label products as “100% natural” even if they contain highly processed ingredients, citing partially hydrogenated oils and high fructose corn syrup. However, the Corn Refiners Association believes that USDA and FDA should have different definitions of “natural” because, among other things, the two agencies regulate fundamentally different products—USDA-regulated meat and poultry products are understood to be less processed than FDA-regulated foods. FDA acknowledged in 1993 that clarifying the definition of “natural” would abate some of the complaints that the term’s use is misleading. More recently, FDA noted that it lacks resources to undertake a rulemaking to revisit the definition. With its current approach to oversight and enforcement, FDA cannot be assured that food firms are complying with labeling requirements. In light of the resource constraints and many responsibilities that FDA has reported, it is especially important that FDA start by making better use of the tools and data it has available. However, FDA’s use of warning letters and enforcement actions has at best held steady, despite increased responsibilities. FDA is not using the information that it has to inform managers’ decisions on setting priorities and allocating resources. 
FDA does not maintain in an accessible format, or analyze in routine reports, information it has on such areas as labeling violations discovered during inspections, the results of tests on the accuracy of labels, warning letters, recalls, and import refusals. Moreover, although information on whether and how labeling violations are addressed is critical for effectively overseeing the labeling program, FDA does not (1) centrally maintain information on regulatory meetings and (2) know whether field offices are applying the same criteria for meetings and whether meetings are effective. While FDA posts information for the public on its Web site—such as warning letters, import refusals, and import alerts—it does not ensure that the information is complete and posted promptly. As a result, the public may not have the information needed about products in violation of the law to inform their purchase decisions. Furthermore, CFSAN has continued to expend resources maintaining a duplicative data system for food-related recalls, which it agreed to eliminate in 2004. We reiterate our prior recommendation that FDA should eliminate this system. Going forward, to better administer and enforce labeling requirements, FDA has begun to pursue several authorities that are available to other centers within FDA and other regulatory agencies. In particular, CFSAN does not have the authority to charge user fees, accredit third-party inspectors, or require recalls for most food. As a result, CFSAN is not as well positioned as other programs that have these authorities to carry out its responsibilities. FDA’s Food Protection Plan recognized the need for additional resources and new authorities to ensure the safety of the nation’s food supply. 
However, as FDA proceeds in seeking new authorities, it will need to ensure that any it chooses to pursue are designed and implemented efficiently and appropriately and, in particular, that any user fees it develops are well-designed and based on best practices and sound criteria, such as those specified in GAO’s Federal User Fees: A Design Guide. In addition, any FDA program for accrediting third parties would likely benefit from lessons learned in another FDA-accredited third-party program. Moreover, as we have previously testified, while FDA’s plan is a good first step, it does not contain a clear description of resources and strategies. Congress will need those details to assess the likelihood of the plan’s success. Finally, the many label information issues that stakeholders believe confuse consumers compete for FDA’s attention and resources. Nonetheless, FDA has information on the approaches that U.S. industry and other countries are taking to give consumers simplified nutrition information at a glance with front-of-package symbols. However, given FDA’s competing priorities and its minimal progress in addressing misleading labeling thus far, collaboration with other federal entities and stakeholders could afford an opportunity for FDA to better leverage resources to pursue front-of-package labeling or other initiatives for minimizing consumer confusion. 
We recommend that the Commissioner, FDA, take the following seven actions: Ensure that labeling office managers have the information they need to oversee compliance with food labeling statutes and regulations by maintaining, in a searchable format, data on food labeling violations, including the type of violation and information about corrective actions taken or, if no action was taken, the reason why; analyzing violation data in routine management reports; and tracking regulatory meetings related to food labeling violations and analyzing whether regulatory meetings are an effective use of resources. Ensure that the public has timely access to information on food labeling violations that may have serious health consequences by requiring all of the centers and offices to post on FDA’s public Web site, within a specified time frame, key information, such as all warning letters; statistics on serious enforcement actions (e.g., import refusals) by country, type of food, and the problem found (e.g., undeclared allergen); and information (e.g., product identification and exposure symptoms) on violations that FDA classifies as serious. 
Better leverage resources to carry out food safety and other regulatory responsibilities, including administering and enforcing labeling requirements, by providing Congress with specific, detailed information on the new statutory authorities identified in the Food Protection Plan, such as the authority to charge user fees, accredit third-party inspectors, and mandate food recalls, with specific information on how these authorities would help achieve its mission; posting on FDA’s public Web site periodic updates of the status of implementation of the Food Protection Plan, including goals achieved and time frames for completing the remaining work; and collaborating with other federal agencies and stakeholders experienced in nutrition and health issues, to evaluate labeling approaches and options for developing a simplified, empirically valid system that conveys overall nutritional quality to mitigate labels that are misleading to consumers. We provided a draft of this report to HHS for review and comment. In written comments, FDA stated that the report raised some important issues regarding its regulation of food labeling. FDA did not dispute the report’s data, analyses, or specific findings. It commented, however, that the report inappropriately references food labeling as part of its food safety mission, although it acknowledges that there may be some aspects of food labeling that can affect the safe use of food. That notwithstanding, FDA directs investigators to review at least three labels during food safety inspections. Moreover, food labeling responsibilities are part of FDA’s statutory mission, and the Federal Food, Drug, and Cosmetic Act and FDA’s regulations set out FDA’s labeling responsibilities. FDA also stated that within its overall public health mission, it has a multitude of competing priorities. 
We acknowledged FDA’s competing priorities in the report’s conclusions and framed the recommendations so as to help manage these competing priorities by better leveraging resources and using available tools and data for risk-based decisions. Regarding our first three recommendations for ensuring that managers have the information they need to oversee compliance with food labeling statutes and regulations—by (1) maintaining data on labeling violations and the corrective actions taken, in a searchable format, (2) analyzing that data in routine management reports, and (3) tracking regulatory meetings on labeling violations to assess whether they are an effective use of resources—FDA agreed that being able to track any and all information that would allow investigators to better do their jobs would be useful to the agency. However, FDA stated that data collection requires time and effort and it is important to make sure that data entry does not become so burdensome that it takes away from other investigative work. FDA did not commit to taking any actions in response to these recommendations. We maintain that FDA cannot make risk-based decisions, such as allocating resources efficiently and effectively, without careful analysis of this type of data on its regulatory programs—FDA’s systems already maintain substantial data on food labeling and related violations. Analyzing these data for routine reports could help inform labeling managers’ decisions and help them target labeling resources. We stand by these recommendations. 
With respect to our recommendation for ensuring the public has timely access to information on labeling violations that may have serious health consequences—that FDA require centers and offices to post key information (e.g., warning letters or import refusals) on FDA’s public Web site and specify time frames for doing so—FDA commented that it already posts and maintains much of this information, and that it would keep the information as up to date as possible, given resource and time limitations. However, as we discuss in this report, FDA’s target time for issuing warning letters and posting them is 4 months after violations are found. Providing information that is complete and timely can help the public avoid potentially dangerous food and make healthy food purchase decisions. The draft we sent to FDA for comment recommended that FDA post all recalls to its public Web site in a timely manner. We eliminated recalls from this recommendation because, in technical comments, FDA told us that the recalls in CFSAN’s unofficial database that we thought were missing from RES were the result of coding differences. We stand by this recommendation as amended. Our final three recommendations are aimed at better leveraging resources. Two are aimed at helping FDA keep the Food Protection Plan on track by (1) providing specific, detailed information to Congress on how the new authorities in the Food Protection Plan will help FDA achieve its mission and (2) posting periodic updates on the status and time frames for implementing the plan on FDA’s public Web site. FDA stated that the plan was designed to address food safety and defense concerns, although some of the actions presented in it may have some bearing on food labeling issues. It was not our intent to suggest that the plan’s primary focus was on food labeling; we have clarified this in the report. 
Nonetheless, in this report and in recent testimonies, we have expressed our concerns that FDA has not given Congress sufficient, detailed information on how it will implement the plan and use the new authorities—information Congress needs to support the initiatives. Furthermore, updates can reassure the public of FDA’s progress. FDA did not explicitly address what action, if any, it would take in response to these two recommendations. With respect to our last recommendation—that FDA collaborate with other federal agencies and stakeholders on evaluating options for developing a simplified, empirically valid system for conveying overall nutritional quality to help consumers—FDA agreed with the need to evaluate the communication effects of nutrition symbols and presented a research agenda. Because the agenda appears to be ambitious given FDA’s limited resources, our recommendation will continue to encourage FDA to collaborate with other federal agencies and stakeholders who may be able to contribute resources, as it evaluates options to develop a simple, valid system to communicate nutritional quality. FDA’s written comments and our detailed evaluation appear in appendix VI. FDA also provided technical comments, which we incorporated throughout the report, as appropriate. As agreed with your office, unless you publicly announce the contents of the report earlier, we plan no further distribution of it until 30 days from the date of this report. At that time, we will send copies of the report to the appropriate congressional committees, the Secretary of Health and Human Services, the Commissioner of the Food and Drug Administration, and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or members of your staff have any questions about this report, please contact me at (202) 512-3841 or [email protected]. 
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VII. This report examines (1) the Food and Drug Administration’s (FDA) efforts to ensure that domestic and imported foods comply with food labeling requirements, including those prohibiting false or misleading labeling; (2) the challenges FDA faces in its efforts to administer and enforce food labeling requirements; and (3) the actions that stakeholders from health, medical, and consumer organizations believe are needed to mitigate the effects of food labeling practices they consider misleading and to help consumers identify healthy food. For the purposes of this report, our definition of “food” includes conventional food, dietary supplements, infant formula, and medical food, but not animal feed, which the Federal Food, Drug, and Cosmetic Act includes in its definition of food. We did not determine whether any particular food labeling was false or misleading. We also did not evaluate how efficiently FDA used its resources or the impact of changing priorities; nor did we compare FDA resource trends with other federal agencies’ resource trends. Regarding data for labeling-related oversight, we analyzed the food firms inspected for 7 fiscal years (2001 through 2007); nutrient labeling samples for 7 fiscal years (2000 through 2006); warning letters and enforcement actions related to imports for 6 fiscal years (2002 through 2007); and seizures and injunctions for 10 fiscal years (1998 through 2007)—the periods for which reliable and comparable FDA data were available. Funding and staffing data for FDA, the Center for Food Safety and Applied Nutrition (CFSAN), and the Office of Regulatory Affairs (ORA) were available for 10 fiscal years (1998 through 2007). 
For the Office of Nutrition, Labeling, and Dietary Supplements, which began maintaining comparable data in 1999, we report funding and staffing for 9 fiscal years (1999 through 2007). Unless otherwise stated, data are presented by federal fiscal year. To determine FDA’s efforts to ensure that domestic and imported foods comply with food labeling statutes and regulations, including those related to false or misleading labeling, we analyzed FDA’s and CFSAN’s plans and reports, guidance and regulations related to food labeling, and policies and actions taken in response to petitions and complaints over the last 6 years. We also analyzed data from the Field Accomplishments and Compliance Tracking System (FACTS) and Operational and Administrative System for Import Support (OASIS) on domestic, foreign, and import inspections conducted by FDA, along with domestic inspections conducted by states under contract with FDA. To determine the number of warning letters issued by FDA, we worked with FDA's Freedom of Information Office and ORA to address several problems we found during the course of our review regarding the online database of warning letters. After addressing those problems, we then searched that database for warning letters that were related to food labeling and characterized each letter according to the product and the violations cited. We also searched FDA's Recall Enterprise System (RES) for recalls identified with food labeling violations as one of the reasons for the recall. Regarding violations of Nutrition Facts panel regulations, we analyzed data from FACTS for domestic and imported food, and also analyzed studies conducted on the accuracy of nutrient labeling. We analyzed data from this system on consumer complaints to determine the extent to which they were tracked. Finally, we also analyzed data from OASIS on food labeling violations for imported food and collected information on seizures and injunctions focused on food labeling violations. 
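The warning-letter analysis described above — searching the database for labeling-related letters and characterizing each by the violations cited — amounts to keyword-based filtering and tallying of records. A minimal sketch of that kind of analysis, using hypothetical letter texts and an illustrative violation taxonomy (the categories and keywords here are assumptions, not FDA's actual schema):

```python
# Sketch of characterizing warning letters by violation category.
# The category keywords and sample letters are hypothetical; FDA's
# actual database fields and violation taxonomy differ.
from collections import Counter

CATEGORIES = {
    "nutrition_facts": ["nutrition facts", "serving size"],
    "health_claims": ["health claim"],
    "allergens": ["allergen", "undeclared"],
}

def categorize(letter_text):
    """Return the set of violation categories whose keywords appear."""
    text = letter_text.lower()
    return {cat for cat, words in CATEGORIES.items()
            if any(w in text for w in words)}

def tally(letters):
    """Count letters per category (one letter may cite several)."""
    counts = Counter()
    for letter in letters:
        for cat in categorize(letter):
            counts[cat] += 1
    return counts

letters = [
    "Your product's Nutrition Facts panel misstates serving size.",
    "The label bears an unauthorized health claim.",
    "Undeclared allergen (milk) and an incorrect Nutrition Facts panel.",
]
print(tally(letters))
# → Counter({'nutrition_facts': 2, 'health_claims': 1, 'allergens': 1})
```

In practice such counts would feed the kind of routine management reports the recommendations call for; the sketch only illustrates the characterization step.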
To identify challenges, we analyzed funding and staffing data for FDA, CFSAN, ORA, and the Office of Nutrition Labeling and Dietary Supplements and reviewed FDA oversight and enforcement authorities, and court rulings regarding FDA labeling. For comparison, we examined some of the same information for the U.S. Department of Agriculture’s Food Safety and Inspection Service and the Federal Trade Commission, which also oversee and enforce requirements related to food labeling, such as those prohibiting false or misleading information about food. We assessed the reliability of the data from FACTS and OASIS that we used in this report and found them to be sufficiently reliable for these purposes. To assess the reliability of these data, we (1) performed electronic testing for obvious errors in accuracy and completeness, (2) reviewed related documentation, and (3) worked closely with agency officials to identify any data problems. In addition, we assessed the reliability of the data from the RES. FDA recently informed us that CFSAN has continued to use an unofficial database that it agreed to eliminate in 2004, which contains additional information on recalls that would potentially fit our criteria for analysis. Despite any limitations of the RES, we believe these data to be sufficiently reliable to indicate a minimum number of recalls for the time period we reported. To determine stakeholders’ views, we analyzed petitions, public responses to petitions, and ideas presented during FDA’s November 2007 public labeling meetings. 
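The "electronic testing for obvious errors in accuracy and completeness" mentioned above is essentially automated screening of records for missing fields and out-of-range values. A minimal sketch under assumed field names and date ranges (none of which reflect the actual FACTS or OASIS schemas):

```python
# Sketch of electronic reliability testing on inspection records.
# Field names, the valid date window, and the sample data are all
# hypothetical, not the actual FACTS/OASIS layout.
import datetime

REQUIRED = ("firm_id", "inspection_date", "country")

def check_record(rec):
    """Return a list of problems found in one record."""
    problems = [f"missing {f}" for f in REQUIRED if not rec.get(f)]
    date = rec.get("inspection_date")
    if date and not (datetime.date(2000, 10, 1) <= date
                     <= datetime.date(2007, 9, 30)):
        problems.append("date outside study period")
    return problems

def screen(records):
    """Map record index -> problems, keeping only flawed records."""
    return {i: p for i, rec in enumerate(records)
            if (p := check_record(rec))}

records = [
    {"firm_id": "F1", "inspection_date": datetime.date(2005, 3, 2),
     "country": "US"},
    {"firm_id": "F2", "inspection_date": datetime.date(1995, 1, 1),
     "country": "US"},
    {"firm_id": "", "inspection_date": datetime.date(2006, 7, 9),
     "country": "MX"},
]
print(screen(records))
# → {1: ['date outside study period'], 2: ['missing firm_id']}
```

Flagged records would then be followed up with agency officials, as the third reliability step describes.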
We discussed these and other suggestions with health and medical associations, including the American Cancer Society, American Diabetes Association, American Heart Association, American Dietetic Association, American Medical Association, and National Academies’ Institute of Medicine; the Center for Science in the Public Interest; the Grocery Manufacturers/Food Products Association; the Association of Food and Drug Officials; and selected states (California, Connecticut, Florida, New York, Texas, and Wisconsin) that the Association of Food and Drug Officials and other groups identified as being active in food labeling issues. In addition, we contacted officials of health or related departments in Canada, the United Kingdom, Sweden, the Netherlands, and the European Commission to collect information on their use or plans for use of nutrition symbols. We did not independently verify the statements of foreign law. We also analyzed consumer studies conducted by FDA, industry, and others to identify whether the findings supported or failed to support stakeholders’ views. These studies were identified by health, consumer, and industry experts and through literature searches. For the data we included in our report, we obtained frequency counts, survey instruments, and other documents to review the wording of questions, sampling, mode of administration, research strategies, and the effects of sponsorship. We used only data that we judged to be reliable and valid. We conducted this performance audit from January 2007 through September 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. 
Nearly half of the domestic firms that are subject to FDA regulation are food firms—manufacturers, processors, and other food businesses. Table 9 presents the number and percentage of domestic food firms that are subject to FDA’s food regulations and the total number of domestic firms in all industries (e.g., pharmaceuticals and medical devices) that are subject to FDA regulation, for fiscal years 2001 through 2007. Regarding firms inspected under all FDA regulatory programs, food-related firms have accounted for between 15 percent and 30 percent of foreign firms inspected and between 45 percent and 56 percent of domestic firms inspected. Table 10 presents the number and percentage of foreign and domestic food-related firms inspected and the total number of FDA-regulated firms inspected, for fiscal years 2001 through 2007, by FDA and states under contract with FDA. (Appendix table of FDA funding by center omitted; its footnotes describe amounts that include rent, facilities, and related costs; the tobacco program, Office of the Commissioner, and related central offices; and other activities funded in part by user fees, including export and color certification.) The Nutrition Labeling and Education Act of 1990 (NLEA) amended the Federal Food, Drug, and Cosmetic Act to include provisions that govern the use of health claims on food labeling. 
For conventional foods, the NLEA requires that any claim that expressly or by implication characterizes the relationship of a nutrient to a disease or health-related condition must be authorized by the Secretary of Health and Human Services (delegated to FDA) through a regulation. Under the NLEA, FDA may authorize a health claim for a conventional food if it determines, based on the totality of publicly available scientific evidence, that there is “significant scientific agreement” among experts—qualified by scientific training and experience to evaluate such claims—that the claim is supported by such evidence. Although the NLEA also provided for the use of health claims in dietary supplement labeling, Congress did not require dietary supplement health claims to be subject to the same statutory procedures and standards as conventional food health claims. Instead, dietary supplement health claims were to be subject to procedures and standards established in regulations issued by the Secretary of Health and Human Services (delegated to FDA). In 1991, FDA published a proposed rule in the Federal Register, proposing the implementation of the statutory procedures and standards for health claims for conventional food, and proposing to adopt those same procedures and standards for dietary supplement health claims. However, before the rule could be finalized, Congress passed legislation that generally prohibited FDA from implementing the NLEA with respect to dietary supplements until December 15, 1993. Therefore, in January of 1993, when FDA adopted the final rules for health claims for conventional foods, it did not finalize rules for dietary supplement health claims. However, 1 year later, after the prohibition of implementation of NLEA for dietary supplements had expired, FDA adopted a rule that subjected dietary supplement health claims to the same general requirements that applied to conventional foods. 
Under those rules, any person wanting to include a health claim on a conventional food or dietary supplement label must petition FDA for authorization before including the claim on the label. If FDA determines, based on the totality of publicly available information, that there is significant scientific agreement in support of that claim, it will authorize its use by issuing it in regulation. FDA’s health claim regulations for dietary supplements were the subject of several lawsuits in the 1990s. In a case known as Pearson v. Shalala, the U.S. Court of Appeals for the District of Columbia Circuit held that the First Amendment does not permit FDA to prohibit a potentially misleading health claim on the label of a dietary supplement, unless FDA considers whether a disclaimer on the product’s label could negate the potentially misleading nature of that claim. Specifically, the court stated that although inherently or actually misleading information in food labeling or advertising may be prohibited, potentially misleading information cannot face an absolute prohibition. Instead, potentially misleading information may be regulated only if those regulations directly advance a substantial government interest, and offer a reasonable fit between the government’s goals and the means chosen to accomplish those goals. The court found a substantial interest in protecting the public health and preventing consumer fraud. However, it found that FDA’s regulation requiring health claims to be supported by significant scientific agreement did not directly advance the interest in public health, and, even though the regulations directly advanced the interest in preventing consumer fraud, the fit between the goals of the regulations and the means employed—an outright ban without the possibility of a disclaimer—was not reasonable. 
Following the decision in Pearson, FDA announced its plan to respond, stating that it would deny, without prejudice, all petitions for the use of dietary supplement health claims that did not meet the significant scientific agreement standard while the agency conducted and completed a rulemaking to reconsider the procedures and standards governing such claims. Then, according to FDA, once a rule was finalized, the agency would revisit the petitions it had denied. However, in 2000, citing concerns over additional First Amendment challenges, FDA announced plans to modify that policy. FDA stated that it would continue to approve dietary supplement health claims that met the significant scientific agreement standard, but it would exercise its enforcement discretion and not take action against dietary supplement health claims that failed to meet the standard under certain circumstances. Specifically, upon the submission of a valid petition for preapproval of a dietary supplement health claim, if FDA did not find significant scientific agreement, but, in evaluating the weight of the evidence, did find that the scientific evidence in support of the claim outweighed the scientific evidence against it, and consumer health and safety were not threatened, the agency would inform the petitioner of conditions under which the agency would refrain from taking enforcement action against the health claim. If the scientific evidence against the health claim outweighed the scientific evidence in support of it, FDA would deny any use of the health claim. Then, in 2002, the agency announced the availability of guidance, updating its approach to implementing the Pearson decision. In large part, the procedures remained the same; however, FDA included health claims for conventional foods under the procedures, even though the Pearson case directly addressed only dietary supplements. 
FDA stated that it believed that such a move would precipitate greater communication in food labeling and thereby enhance public health. In addition, FDA stated that including health claims for conventional foods in its enforcement discretion policy would help avoid further constitutional challenges. Subsequently, in 2003, FDA announced the availability of two new guidance documents describing interim procedures that, among other things, addressed a then recent U.S. District Court for the District of Columbia decision that found the weight of the evidence standard that FDA first articulated in guidance in 2000 was inappropriate. According to the district court in that case, FDA should evaluate qualified health claims based on the presence of “credible evidence,” not the weight of the evidence. The 2003 guidance documents set forth new procedures for qualified health claims for conventional foods and dietary supplements. Specifically, qualified health claim petitions would be evaluated using an evidence-based ranking system that would rate the strength of the publicly available scientific evidence. A claim would be denied if there was no credible evidence to support it. Otherwise, based on the competent and reliable scientific evidence in support, a claim would be assigned to one of four ranked levels—the first level being “significant scientific agreement among qualified experts” and the remaining three levels being for claims supported by some lower level of credible evidence. Each of the three categories not ranked as supported by significant scientific agreement would correspond to one of three standardized qualifying statements (i.e., disclaimers). So long as the qualified health claim bore the appropriate language, met other applicable health claim regulations, and adhered to criteria established in FDA’s letter of enforcement discretion in response to the petition, FDA would exercise its enforcement discretion and refrain from acting against the health claim. 
In November of 2003, FDA published an Advance Notice of Proposed Rulemaking, recognizing the need to establish transparent, long-term procedures that have the effect of law. In that announcement, FDA presented several regulatory alternatives: (1) incorporate the interim procedures and evidence-based ranking system we have previously discussed into regulation; (2) subject health claims to notice-and-comment rulemaking, as before Pearson, but reinterpret the “significant scientific agreement” standard to refer to the evidence supporting the claim being made, instead of the underlying substance-disease relationship; or (3) treat qualified health claims as outside the NLEA and regulate them on a postmarket basis (i.e., pursue the product as misbranded if the health claim renders the label false or misleading because the claim lacks substantiation). FDA does not plan to work on this proposed rulemaking this year. In May of 2006, FDA issued guidance concerning FDA’s implementation of the qualified health claims process. In that guidance, FDA reaffirmed the 2003 interim procedures and stated that “FDA is currently considering various options regarding the development of proposed regulations related to qualified health claims,” and “[i]n the meantime, the agency plans to review qualified health claim petitions on a case-by-case basis.” The following are GAO’s comments on the Department of Health and Human Services’ (HHS) letter dated August 19, 2008. 1. FDA commented that the report did not place food labeling in the appropriate context, given FDA’s overall public health mission and competing priorities. We believe the food labeling responsibilities are part of that mission. 
The Federal Food, Drug, and Cosmetic Act specifically describes FDA’s mission to include protecting the public health by, among other things, ensuring that “foods are safe, wholesome, sanitary, and properly labeled.” FDA also commented that the report failed to account for all the varied initiatives that FDA and HHS have undertaken to fight obesity and ensure that foods are labeled in a manner that fosters consumer education and healthy choices. The subject of this report is food labeling, not obesity. With respect to labeling initiatives to help consumers make healthy food choices, the report identifies several areas where stakeholders believe that FDA falls short. 2. Although FDA said that it does not consider food labeling part of its food safety mission, it does include reviewing labels as a required step in a food safety inspection. Also, overseeing industry compliance with labeling requirements is part of FDA’s food oversight responsibilities, and labeling laws help consumers ensure that the food they buy is safe for them to eat. That said, since FDA made this distinction, we revised the wording in some places in the final report. 3. FDA took issue with the report’s frequent references to the Food Protection Plan. FDA stated that the plan was developed to address food safety and defense, although it may have some bearing on food labeling issues. It was not our intent to suggest that the plan’s primary focus was on food labeling, and we have clarified this in the report. The report discusses the plan’s potential to help FDA carry out its food regulatory responsibilities and discusses certain provisions that, if implemented, may be useful tools in monitoring and enforcing the food labeling requirements. 4. FDA correctly noted that the report does not evaluate how efficiently FDA used its resources or the impact of its changing priorities, although we did examine resources for food labeling. 
For example, the report provides 10 years of budget data on FDA, with detailed data for each center, including (1) total staffing and funding, (2) the portion of Office of Regulatory Affairs’ staffing and funding for inspections and other oversight, and (3) staffing and funding supported by user fees. However, because FDA was not able to provide risk-based priority plans or annual work projections for all labeling activities, we could not determine how efficiently labeling resources were used or the impact of changing priorities on labeling. 5. FDA contended that most misleading food labeling violations do not present a high risk to public health. However, FDA has not conducted the research to identify which food labels are misleading and therefore has little or no basis for determining the health impacts of misleading labeling violations. 6. FDA commented that it does not believe that tracking and analyzing data and providing routine reports on food labeling violations is the best use of its resources, given competing priorities. We maintain that risk-based decisions, such as allocating resources effectively, must include careful analysis of this type of data on regulatory programs. Moreover, FDA already collects most of these data so resource investment to generate the reports should be minimal and worth the benefits of ensuring that managers’ decisions are well-informed and risk-based. As FDA rolls out several initiatives for improving its information technology systems, which it states are under way, HHS may want to provide FDA managers with training on using the systems as management tools. 7. FDA said it agreed that being able to track any and all information that would allow its investigators to better do their jobs would be useful to the agency, but that data collection requires time and effort. FDA continued, it is important to make sure that data entry does not become so burdensome that it takes away from other investigative work. 
However, as we previously noted, FDA already collects most of these data. With a small resource investment, analyzing these data in reports can help managers make more informed decisions. 8. FDA implied that it may not have the resources to keep data on the public Web site up to date. However, providing consumers with information that is timely and complete can help them avoid potentially dangerous food and make healthy food purchases. 9. FDA commented that the report suggests in several places that the only context in which FDA detains food is under the Bioterrorism Act, and that FDA has another type of detention authority that applies to imported articles. However, the report does discuss FDA’s other detention authority under section 801(a) of the Federal Food, Drug, and Cosmetic Act. The report refers to these actions as “import refusals,” which is the term that FDA currently uses for these enforcement actions. We added a footnote in the text to note this. 10. FDA’s statement—that its survey indicated that 70 percent of adults said they look at the Nutrition Facts panel the first time they purchase a food—is misleading. In that survey, 44 percent of respondents told FDA that they “often” read the panel the first time they purchase a food, and 25 percent “sometimes” read the panel at that time, while 31 percent “rarely” or “never” read the panel. 11. FDA commented that (1) court decisions, beginning with Pearson v. Shalala, hold that the First Amendment precludes FDA from prohibiting the use of qualified health claims unless FDA can show that the claim is inherently misleading, or if the claim is only potentially misleading, that the use of a disclaimer would not remedy the claim’s potential to mislead, and (2) that absent consumer research or other evidence that satisfies the criteria set by the court in Pearson v. Shalala, FDA does not have the authority to eliminate qualified health claims as a class of claims. 
We added language to the report to acknowledge FDA’s position. 12. FDA commented that, contrary to our report, FDA’s 2005 and 2007 qualified health claims experiments did not find that qualified health claims might encourage the consumption of foods with little or no health benefits. Our report states that, according to the stakeholders we consulted, “… these claims confuse or mislead consumers and may encourage consumption of foods with little or no health benefits.” It then states that “[t]his view was supported by findings from 2005 and 2007 FDA studies.” This statement is consistent with FDA’s findings. According to its public Web site, those studies on qualified health claims found that “qualifying statements … were not understood by consumers” and “even when … understood as intended, qualifying statements had unexpected effects on consumers’ judgments about the health benefits and overall healthfulness ….” In addition to the contact named above, Erin Lansburgh, Assistant Director; Beverly Peterson, Analyst-in-Charge; Kevin S. Bray; Abby Ershow; Bart Fischer; Jennifer Harman; Natalie Herzog; Luann Moy; Allison O’Neill; Minette Richardson; Carol Herrnstadt Shulman; and Marcia Whitehead made key contributions to this report.
Two thirds of U.S. adults are overweight, and childhood obesity and diabetes are on the rise. To reverse these health problems, experts are urging Americans to eat healthier. Food labels contain information to help consumers who want to make healthy food choices. The Food and Drug Administration (FDA) oversees federal labeling rules for 80 percent of foods. GAO was asked to examine (1) FDA's efforts to ensure that domestic and imported foods comply with labeling rules, (2) the challenges FDA faces in these efforts, and (3) the views of key stakeholders on FDA actions needed to mitigate misleading labeling. GAO analyzed FDA data, reports, and requirements on food labeling oversight and compliance and interviewed agency and key stakeholder group officials. FDA's oversight and enforcement efforts have not kept pace with the growing number of food firms. As a result, FDA has little assurance that companies comply with food labeling laws and regulations for, among other things, preventing false or misleading labeling. Specifically: (1) FDA does not have reliable data on the number of labels reviewed; the number of inspections, which include label reviews, has declined. For example, of the tens of thousands of foreign food firms in over 150 countries, just 96 were inspected by FDA in 11 countries in fiscal year 2007--down from 211 inspections in 26 countries in 2001. (2) FDA's testing for the accuracy of nutrition information on labels in 2000 through 2006 was limited. FDA could not provide data for 2007. (3) Although the number of food firms in FDA's jurisdiction has increased, the number of warning letters FDA issued to firms that cited food labeling violations has held fairly steady. (4) FDA does not track the complete and timely correction of labeling violations or analyze these and other labeling oversight data in routine reports to inform managers' decisions, or ensure the complete and timely posting of information on its Web site to inform the public. 
(5) In addition to its official recalls database, FDA's Center for Food Safety and Applied Nutrition has continued to waste resources on a second recall database that FDA had agreed to eliminate in 2004, as GAO had recommended. FDA has reported that limited resources and authorities challenge its efforts to carry out its food safety responsibilities--these challenges also impact efforts to oversee food labeling laws. FDA's Food Protection Plan cites the need for authority to, among other things, collect a reinspection user fee, accredit third-party inspectors, and require recalls when voluntary recalls are not effective. Stakeholders from health, medical, and consumer groups identified actions they believe will mitigate misleading labeling and help consumers identify healthy food. Several stakeholders support a simplified, uniform front-of-package symbol system to convey nutritional quality to consumers. The United Kingdom, Sweden, and the Netherlands have developed voluntary nutrition symbols, while the European Commission has proposed requiring front-of-package labeling of key nutrients.
In 1993, DOE and the Russian government began working together to secure sites housing weapons-usable nuclear material and, in 1995, DOE established the MPC&A program, which is now administered by NNSA. DOE’s Office of International Material Protection and Cooperation, within NNSA, consists of five offices whose collective efforts contribute to enhancing the security of nuclear material and warheads in countries of concern and to improving the ability to detect illicit smuggling of those materials (see fig. 1). Four of these offices implement DOE’s MPC&A program, which, among other things, provides security upgrades at nuclear sites in Russia and other countries, and the fifth office, the Office of the Second Line of Defense, works to improve detection of illegal nuclear trafficking activities at border crossings and seaports. The Office of Nuclear Warhead Protection works with the Russian Ministry of Defense—including the 12th Main Directorate (the Russian Defense Ministry’s organization for nuclear munitions), the Strategic Rocket Forces, and the Navy—to install security upgrades at nuclear warhead storage sites. The Office of Nuclear Warhead Protection also oversees DOE’s security upgrades work at naval nuclear fuel sites. The Office of Weapons Material Protection upgrades MPC&A systems at sites within the Rosatom nuclear weapons complex and also oversees DOE efforts to sustain U.S.-funded security upgrades at nuclear sites within the former Soviet Union that are not in Russia, such as facilities in Ukraine and Uzbekistan. The Office of Material Consolidation and Civilian Sites works to install MPC&A upgrades at nonmilitary nuclear facilities throughout Russia and oversees efforts to consolidate nuclear material into fewer buildings and to convert excess weapons-usable nuclear material into less attractive forms. 
The Office of Material Consolidation and Civilian Sites also manages DOE’s efforts to provide nuclear security assistance to countries outside of the former Soviet Union. The Office of National Infrastructure and Sustainability manages a variety of crosscutting programs, including transportation and protective forces assistance, and oversaw the development of guidelines for DOE’s efforts to help ensure that Russia can sustain the operation of U.S.-funded security systems at its nuclear sites after U.S. assistance ends. DOD has also assisted Russia in securing nuclear warhead storage sites, both temporary sites, such as rail transfer points, and permanent sites containing storage bunkers. In 1995, DOD began assisting the Russian Ministry of Defense with enhancing transportation security for nuclear warheads and security at nuclear warhead sites. DOD’s efforts to help Russia secure its nuclear warhead storage sites and to improve the security of warheads in transit are implemented by the Defense Threat Reduction Agency. Oversight and policy guidance for this work is provided by DOD’s Office of the Undersecretary of Defense for Policy. Additional information on the history of U.S. efforts to help Russia and other countries secure nuclear material and warheads can be found in appendix II. DOE spent about $1.3 billion between fiscal year 1993 and fiscal year 2006 to provide security upgrades and other related assistance to facilities that house weapons-usable nuclear material in Russia and other countries and reports that it has “secured” 175 buildings containing about 300 metric tons of weapons-usable nuclear material in Russia and the former Soviet Union. 
The number of buildings that DOE reports as secured, however, does not reflect that additional upgrades remain to be completed at some buildings, because DOE considers a building to be “secure” after it has received only limited MPC&A upgrades (rapid upgrades), even when additional comprehensive upgrades have yet to be completed. Further, in response to terrorist actions and rising threat levels in Russia, DOE is examining the impact of an increased design basis threat it uses to measure the adequacy of security upgrades provided to Russian nuclear facilities and providing additional assistance to protective forces at Russian nuclear sites. Finally, DOE and Rosatom have developed a Joint Action Plan that includes 20 civilian and nuclear weapons complex sites housing buildings with weapons-usable nuclear material. While the plan details the remaining scope of work to be accomplished by 2008, it does not include two key sites involved in the manufacture of Russian nuclear warheads that contain many buildings with hundreds of metric tons of weapons-usable nuclear material where DOE has been denied access. From fiscal year 1993 to fiscal year 2006, DOE spent about $1.3 billion to enhance security at buildings that house weapons-usable nuclear materials in foreign countries. The majority of these buildings are located in Russia and fall into three categories: Rosatom weapons complex sites, civilian sites, and naval fuel sites. DOE has also helped to secure buildings with weapons-usable nuclear material in nine other countries. Figure 2 shows a breakdown of DOE’s spending on MPC&A efforts. As figure 2 shows, DOE spent about $684.7 million to provide security upgrades to civilian, naval fuel, and Rosatom weapons complex sites with weapons-usable nuclear material in Russia and an additional $131.5 million to provide security upgrades to sites located outside of Russia. 
DOE also spent about $493.9 million on additional and related MPC&A efforts in Russia, such as assistance for transportation security, providing equipment for protective forces at nuclear facilities, and efforts to consolidate nuclear material into fewer buildings and sites. According to DOE officials, these efforts are important to increasing the overall security of nuclear materials in Russia and other countries, and they support DOE’s goal of enhancing the security of vulnerable stockpiles of weapons-usable nuclear material. For example, because DOE believes that nuclear materials are most vulnerable while they are in transit, the department has provided Russia with specialized secure trucks, armored escort vehicles, and secure containers—called overpacks—to improve the security of nuclear material transported within and between nuclear sites in Russia. Further, DOE’s assistance to protective forces at Russian nuclear sites, which includes such items as bulletproof vests, helmets, and response vehicles, helps ensure that guards at those sites are properly equipped and trained so that they can quickly respond to alarms. Additional information on other DOE efforts to improve security at sites with weapons-usable nuclear materials can be found in appendix IV. At the end of fiscal year 2006, DOE reported that it had “secured” 175 buildings containing about 300 metric tons of weapons-usable nuclear material in Russia and the former Soviet Union, but 51 of those buildings did not have completed MPC&A upgrades. These 51 buildings are located at sites in the Rosatom weapons complex. In its program metrics, DOE defines a building as “secure” after it has received only limited MPC&A upgrades (called rapid upgrades), even when additional comprehensive upgrades, which would further improve security, have yet to be completed. 
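As a quick arithmetic check (a sketch only; the category labels paraphrase the spending figures reported above and are not DOE's official budget lines), the components sum to the roughly $1.3 billion total:

```python
# Tally of DOE's reported MPC&A spending, fiscal years 1993-2006, in
# millions of dollars. Category labels paraphrase the figures above.
spending = {
    "security upgrades at Russian civilian, naval fuel, and Rosatom sites": 684.7,
    "security upgrades at sites outside Russia": 131.5,
    "related efforts (transport security, protective forces, consolidation)": 493.9,
}

total_millions = sum(spending.values())
print(f"${total_millions / 1000:.2f} billion")  # prints "$1.31 billion"
```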
The buildings with weapons-usable nuclear material where DOE is working to improve security fall into four categories: Rosatom weapons complex, civilian, naval fuel, and sites outside of Russia. As table 1 shows, all planned upgrades have been completed at naval fuel sites and sites outside of Russia. The vast majority of remaining buildings that have not yet received security upgrades are in the Rosatom weapons complex, where DOE has historically had access difficulties, including being denied access to key sites and buildings housing weapons-usable nuclear material. While DOE officials told us that rapid upgrades offer a limited measure of risk reduction against some threats, they also noted that rapid upgrades fall short of meeting all of DOE’s risk reduction goals for buildings with weapons-usable nuclear material. For example, rapid upgrades generally include only limited measures designed to address the insider threat of theft, such as establishing a two-person rule and providing certain types of tamper indication devices that would set off alarms at guard stations in the case of an unauthorized attempt to access nuclear materials. According to NNSA, which implements the MPC&A program at DOE, the greatest threat DOE faces in its effort to help Russia secure nuclear materials is the threat of insider theft. However, the majority of measures to address the insider threat at Russian nuclear material sites, such as computerized nuclear material inventory databases and barcoding of nuclear material containers, are provided in the comprehensive upgrades phase. In response to terrorist actions and rising threat levels in Russia, DOE recently analyzed the implications of an increased design basis threat it uses to measure the adequacy of security upgrades provided to Russian nuclear facilities. 
The design basis threat is defined as the attributes and characteristics of potential adversaries (a group or groups of armed attackers) against which a facility’s physical protection systems are designed and evaluated. According to DOE, the design basis threat is critical to determining an MPC&A system’s effectiveness. In 2005, DOE began examining the impact of increasing the number of adversaries against which Russian sites with U.S.-funded security upgrades should be able to defend themselves. DOE is currently reassessing the effectiveness of the security upgrades it has provided through the MPC&A program and has increased its emphasis on providing assistance to the protective forces at Russian nuclear material sites. Specifically, DOE is currently working with a number of sites to relocate guard forces closer to the target nuclear material to improve their response times to an incident. For example, at all four of the nuclear material sites we visited in Russia, Russian officials told us that they were working with DOE to relocate guard forces closer to buildings that contain weapons-usable nuclear material at their sites. However, DOE is limited in the scope of assistance it can provide to protective forces at nuclear facilities in Russia and other countries. For example, DOE is neither allowed to provide weapons or ammunition to these forces, nor is it allowed to pay the salaries of protective forces at these sites. According to DOE officials, the department has provided assistance to the protective forces at all nuclear material sites where the department has access and agreement to work, including helmets, winter uniforms, radios, and other equipment intended to improve their effectiveness in responding to alarms and their survivability against potential adversaries. Historically, DOE has had difficulty obtaining access to some sensitive sites in Russia, especially within the Rosatom weapons complex. 
For example, we reported in 2003 that DOE’s lack of access to many buildings that store weapons-usable nuclear material in the Rosatom weapons complex was the greatest challenge to improving nuclear material security in Russia. DOE requires access to these buildings to validate Russian security system designs and to confirm the installation of equipment as intended. DOE signed an access agreement with the Russian Ministry of Atomic Energy (now called Rosatom) in September 2001 that described administrative procedures to facilitate access, such as specifying which DOE personnel are allowed to make site visits and the number and duration of those visits. We reported in 2003 that this access agreement had done little to increase DOE’s ability to complete its work at many key sites in the Rosatom weapons complex. Since that time, DOE has worked with Rosatom through a Joint Acceleration Working Group and other mechanisms to develop alternative access procedures, such as the use of remote video monitoring, that have allowed work to progress at some sensitive buildings and sites that had previously been inaccessible to DOE project teams. In June 2005, DOE and Rosatom signed a Joint Action Plan detailing the remaining scope of work to be completed by the 2008 deadline. Rosatom and DOE are using this plan to guide cooperative activities and to develop a multiyear budget for DOE’s MPC&A program. DOE officials told us that they have been granted access to almost all of the sites and buildings covered in the plan and that all security upgrades should be completed, as scheduled, by the end of 2008. DOE plans to spend about $98 million to complete its planned security upgrades at 210 buildings containing weapons-usable nuclear material in Russia and other countries by the end of calendar year 2008. The DOE–Rosatom Joint Action Plan covers 20 Russian civilian and nuclear weapons complex sites. 
However, the Joint Action Plan does not include two key sites in the Rosatom weapons complex where Russian nuclear weapons are assembled and disassembled. Because of the nuclear weapons manufacturing work conducted at these sites, DOE believes these two sites contain many buildings with hundreds of metric tons of weapons-usable nuclear material. According to DOE officials, the department has offered numerous alternative access proposals to try to obtain access to install security upgrades at these two sites. For example, in November 2004, DOE provided senior Russian officials with access to some of the most sensitive sites in the U.S. nuclear weapons complex, including the Pantex nuclear weapons plant in Texas, which is the only U.S. nuclear weapons assembly and disassembly facility. However, Rosatom has refused to grant DOE officials reciprocal access to analogous Russian sites. Because of the sensitive nature of the work conducted at these sites, Rosatom has denied DOE’s requests for access, rejected DOE offers to provide assistance without access, and informed DOE that it is not interested in pursuing MPC&A cooperation at these sites. DOE officials expressed very little optimism that Rosatom would allow DOE to help improve security at these facilities in the near future. Through the end of fiscal year 2006, DOE and DOD spent about $920 million to help Russia improve security at 62 nuclear warhead sites. The agencies plan to help Russia secure a total of 97 nuclear warhead sites by the end of 2008. Coordination between DOE and DOD has improved since 2003, when we reported that the agencies had inconsistent policies toward providing security assistance to Russian nuclear warhead sites. In addition, DOE and DOD are currently taking similar approaches to managing large contracts to provide security upgrades at Russian nuclear warhead sites. 
DOD has used EVM to identify cost and schedule variances for its contracts to install security upgrades at Russian warhead sites at early stages so they can be addressed in a timely manner. DOE has not used EVM on its fixed-price contracts to install security upgrades at Russian nuclear warhead sites, but, during the course of our review, the department augmented its contract performance management system to include additional reporting mechanisms to identify and address schedule variances, which DOE officials believe constitute a comparable alternative to an EVM system. DOE believes the benefits of EVM techniques do not justify the additional costs to implement them on fixed-price contracts. Through the end of fiscal year 2006, DOE had spent about $374 million to improve security at 50 Russian nuclear warhead sites and plans to install security upgrades at 23 additional sites by the end of 2008. Additionally, DOD spent approximately $546 million to help Russia secure 12 warhead sites and to provide security for nuclear warheads in transit. DOD plans to complete security upgrades at 12 additional sites by the end of 2008. Figure 3 shows a breakdown of U.S. funding to improve security of Russian nuclear warheads through the end of fiscal year 2006. DOE plans to provide security upgrades at 23 additional sites, and DOD plans to provide upgrades at 12 additional sites by the end of 2008. DOE and DOD gained authorization and access to work at 15 of these sites as a result of an agreement reached at the summit between President Bush and Russian President Putin in Bratislava, Slovakia, in February 2005. After this summit, Russia offered access to 15 additional nuclear warhead sites; DOE has agreed to install upgrades at 7 of these sites, and DOD will help secure the remaining 8. Table 2 provides an overview of DOE and DOD’s progress in improving security at Russian nuclear warhead sites. 
Despite the agencies’ optimism that all sites within this scope will be secured by the end of 2008, they face challenges in meeting this goal. For example, DOE and DOD officials stated that work in Russia involves extensive bureaucracy, changing requirements to meet Russian demands and, at times, difficult relationships and coordination with Russian subcontractors. DOD officials told us that there have been performance issues with a certain Russian subcontractor, but finding alternatives is difficult because there are only a limited number of Russian subcontractors qualified for this type of work and cleared by the Russian MOD to work at nuclear weapons sites. Additionally, the harsh environmental conditions at some remote sites have caused delays in the installation of security upgrades. Specifically, DOD officials stated that adverse weather conditions delayed the installation of security upgrades at four Russian warhead sites by about 1 month. In addition, DOD spent over $125 million through the end of fiscal year 2006 to improve the security of nuclear warheads during transportation by rail to consolidation and dismantlement sites. According to DOD officials, security experts consider nuclear warheads to be highly vulnerable to theft during transport. DOD has attempted to address this threat by providing the Russian MOD with security enhancements for railcars, hardened shipping containers for nuclear warheads to protect against small arms fire and other threats, and payment of railway tariffs associated with transporting nuclear warheads to consolidation and dismantlement sites. Since 1995, DOD has supported maintenance on 200 specialized, secure railcars for transporting nuclear weapons and provided 15 armored railcars for guard forces protecting shipments of nuclear weapons. DOD is in the process of procuring up to 100 additional nuclear warhead transport railcars for use by the Russian MOD. 
DOE and DOD have mechanisms for sharing information and avoiding duplication of effort. Coordination between the agencies has improved since 2003, when we reported that the agencies did not have consistent policies toward providing security assistance to Russian nuclear warhead sites. We recommended in 2003 that the departments work together to develop a standardized approach to improving security at Russian nuclear warhead sites. Since our 2003 report, DOD and DOE have expanded their efforts to share information about their work at Russian nuclear warhead sites. Specifically, the departments coordinate their efforts through an interagency working group, which reports to the National Security Council. According to DOE and DOD officials, this group was instrumental in coordinating the U.S. response to proposals for security upgrades at additional Russian nuclear warhead sites stemming from the summit between Presidents Bush and Putin at Bratislava, Slovakia, in 2005. In addition, DOE and DOD participate in joint coordinating groups that include key representatives from DOE, DOD, and the various branches of the Russian MOD. All of these groups meet regularly to discuss ongoing work at Russian nuclear warhead sites and resolve problems or issues that arise in this effort. Furthermore, DOE and DOD have jointly developed common designs for security upgrades at similar Russian warhead sites to ensure a level of consistency in the assistance provided to these sites. DOD officials stated that having a standardized design gives DOE and DOD leverage with the Russian MOD to deny requests for items that are not in either agency’s site design plan. Further, DOE and DOD seek to present a united image to Russian officials by writing letters jointly on common issues and answering Russian site proposals together. 
In their efforts to provide security upgrades at Russian nuclear warhead sites, DOE and DOD are taking similar approaches to managing large contracts. Generally, OMB requires federal agencies to use EVM or an alternative performance management system on major acquisition contracts to identify cost and schedule variances at early stages so they can be addressed in a timely manner. DOD has used EVM to evaluate its contracts to install security upgrades at Russian warhead sites. DOE does not require its contractors to implement EVM to evaluate its contracts to install security upgrades at Russian warhead sites, but, during the course of our review, the department augmented its contract performance management system to include additional reporting mechanisms for identifying and addressing schedule variances, which DOE officials believe represent a comparable alternative to an EVM system. DOD officials stated that EVM is one of many tools that provide empirical data to validate testimonial information about the status of security upgrades provided in its contractors’ monthly and quarterly reports. Additionally, EVM enhances program management capabilities by providing an early warning system for deviations from plans and by quantifying technical and schedule problems in terms of cost. This provides DOD with an objective basis for considering corrective action. DOD officials told us that their use of EVM allowed them to identify schedule variances due to poor contractor performance at one Russian nuclear warhead site where the department is installing security upgrades. DOD officials stated that this early detection allowed them to reassign the work to a different Russian subcontractor and formulate a plan to make up for the lost time and work in order to meet their scheduled completion date and critical path milestones. Similarly, DOE recently proposed requirements that its large contracts for security upgrades at nuclear warhead sites be managed with a system similar to EVM. 
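The variances EVM surfaces are standard. As a generic illustration (not DOD's actual contract-management tooling, and with hypothetical contract figures), the schedule and cost variances can be computed from three inputs:

```python
# Standard earned value management (EVM) calculations -- a generic sketch,
# not the tooling DOD actually uses on its warhead-site contracts.

def evm_metrics(planned_value, earned_value, actual_cost):
    """Return EVM variances and performance indices.

    planned_value (PV): budgeted cost of work scheduled to date
    earned_value  (EV): budgeted cost of work actually completed
    actual_cost   (AC): cost actually incurred to date
    """
    return {
        "schedule_variance": earned_value - planned_value,  # negative: behind schedule
        "cost_variance": earned_value - actual_cost,        # negative: over cost
        "spi": earned_value / planned_value,                # schedule performance index
        "cpi": earned_value / actual_cost,                  # cost performance index
    }

# Hypothetical contract: $10M of upgrades scheduled, $8M completed, $9M spent.
m = evm_metrics(planned_value=10.0, earned_value=8.0, actual_cost=9.0)
print(m["schedule_variance"], m["cost_variance"])  # prints: -2.0 -1.0
```

A negative schedule variance flagged early, as in this example, is the kind of signal that let DOD reassign work to a different subcontractor before its completion date slipped.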
In September 2006, DOE initiated security upgrades at four large nuclear warhead storage sites in Russia. Until January 2007, DOE managed these fixed-price contracts according to the NNSA Programmatic Guidelines, which do not require the use of EVM or an alternative system to assess contract performance for cost and schedule variances. In part as a result of our inquiry into its contracting practices, DOE altered its oversight mechanisms for these contracts in January 2007 and will now require monthly reports and other measures to more accurately ascertain the progress of contracted items, including the identification of schedule variances due to inclement weather and other unforeseen events and, subsequently, the development of recovery plans. According to DOE officials, these new reporting mechanisms represent a comparable alternative to an EVM system and will give DOE project managers additional opportunities to identify potential schedule slippages and enable appropriate management intervention to take place in a timely manner. DOE has developed sustainability guidelines to help Russia prepare to take financial responsibility for maintaining U.S.-funded security upgrades at nuclear material and warhead sites without DOE assistance by 2013 as the Congress mandated. DOE and Rosatom are developing a joint sustainability plan that will provide an agreed-upon framework to guide DOE’s sustainability efforts at nuclear material sites in Russia. However, DOE’s ability to ensure that U.S.-funded security upgrades at nuclear material sites are being sustained may be hampered by access difficulties, funding concerns, and other issues. Finally, access difficulties at some Russian nuclear warhead sites may also prohibit DOE and DOD from ensuring that U.S.-funded security upgrades are being properly sustained. 
In May 2004, DOE issued interim guidelines (referred to as Sustainability Guidelines) to direct its efforts to assist Russia in developing sustainable MPC&A systems at Russian nuclear material and warhead sites by 2013 as the Congress mandated. In December 2006, DOE issued a final version of its Sustainability Guidelines for the MPC&A program. These guidelines require DOE program managers to develop assessments of each site’s existing capabilities to sustain MPC&A systems and to identify requirements that should be met before a site transitions from DOE support to full Russian responsibility. According to DOE, these assessments will be used to develop site-specific sustainability plans that detail the remaining cooperative activities required to address each of the seven elements of sustainability. The guidelines also require DOE project teams to develop site-specific transition plans, which would detail how sustainability activities will be funded as the sites move toward transition to full Russian responsibility by 2013. DOE’s Sustainability Guidelines set forth seven key elements of a sustainable MPC&A program at sites receiving MPC&A upgrades, such as the development of site operating procedures, which form the foundation for all of DOE’s sustainability activities at nuclear material and warhead sites in Russia and other countries where DOE has provided security upgrades. DOE uses a variety of sustainability indicators for each of the seven elements to determine the degree to which the individual elements are being addressed at Russian sites. Table 3 shows the seven elements of sustainability outlined in DOE’s Sustainability Guidelines and some of the indicators DOE uses to assess the degree to which each element of sustainability is being met at a given Russian site. 
According to DOE, the Sustainability Guidelines provide general criteria for DOE project teams to follow when working with their Russian counterparts in developing sustainability programs for sites where DOE has installed MPC&A systems. DOE officials noted that some sites may not require assistance to address issues in each of the seven categories. For example, many sites that store naval nuclear fuel are administered by the Russian Navy, which has its own human resource management system and would not require DOE assistance to address the human resource management and site training sustainability element. In addition, DOE and Rosatom are currently developing a joint sustainability plan that is intended to govern sustainability activities at the sites under Rosatom’s control where DOE has installed MPC&A systems. DOE officials told us that this joint sustainability plan may be completed in March 2007. DOE officials believe that this plan will be an important step in gaining Rosatom’s buy-in to the concepts of sustainability and will lead to a specific path forward and detailed plan for funding sustainability activities for DOE, while transitioning to full Russian responsibility in 2013. According to DOE officials, the plan will be based largely on DOE’s Sustainability Guidelines and will include the seven key elements of sustainability outlined in those guidelines. DOE anticipates spending about $437.8 million to provide sustainability support to sites in Russia and other countries between fiscal year 2007 and fiscal year 2013. While DOE’s Sustainability Guidelines provide a framework for the department’s approach to sustainability implementation, the guidelines do not call for a tracking system to assist MPC&A management in assessing the progress being made toward DOE’s goal of providing Russia a sustainable MPC&A system by 2013. 
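As a purely hypothetical illustration of what such element-by-element tracking could look like (this is not DOE's MIMS schema, and element names beyond the two the guidelines name above are placeholders):

```python
# Hypothetical sketch of per-site sustainability tracking -- not DOE's actual
# MIMS schema. Element names other than the first two are placeholders.

SEVEN_ELEMENTS = 7  # DOE's Sustainability Guidelines define seven elements


def site_progress(element_status):
    """Fraction of the seven sustainability elements a site has addressed.

    element_status maps an element name to True (indicators met) or False
    (work remaining); elements a site does not require (e.g., naval fuel
    sites with their own human resource systems) can simply be marked True.
    """
    return sum(element_status.values()) / SEVEN_ELEMENTS


status = {
    "site operating procedures": True,                    # named in the guidelines
    "human resource management and site training": True,  # named in the guidelines
    "element 3": False, "element 4": False,
    "element 5": True, "element 6": False, "element 7": False,
}
print(f"{site_progress(status):.0%} of elements addressed")  # prints "43% ..."
```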
Currently, DOE’s Metrics Information Management System (MIMS) contains data detailing the department’s progress in implementing the MPC&A program by tracking the number of buildings and sites where DOE has installed security upgrades, among other things. DOE also uses MIMS to track some measures of progress in its sustainability efforts, such as the development of site-specific plans that document how MPC&A site management will plan, budget, direct, monitor, and evaluate all MPC&A systems. DOE managers use MIMS as a tool in their oversight of the MPC&A program. However, DOE officials acknowledged that the current MIMS data do not provide an accurate picture of the department’s progress toward its goal of preparing Russia to take full responsibility for funding the maintenance and sustainability of U.S.-funded upgrades by 2013. Expanding MIMS to include tracking for all sustainability elements could give DOE managers an improved tool for monitoring the MPC&A program’s progress toward that goal. Further, DOE officials told us that improved tracking of sustainability implementation would be useful to allow the department to provide more accurate information to the Congress on DOE’s progress in its sustainability efforts. Several challenges could affect DOE’s ability to prepare Russia to sustain security upgrades on its own at sites that house weapons-usable nuclear material, including (1) access difficulties at some sites, (2) the limited financial ability of some Russian sites to maintain DOE-funded MPC&A equipment, (3) the lack of certification of some DOE-funded MPC&A equipment, and (4) delays in installing the MPC&A Operations Monitoring (MOM) system at Rosatom facilities. 
According to DOE officials, Russia has denied DOE access at some sites after the completion of security upgrades, making it difficult for the department to ensure that funds intended for sustainability of U.S.-funded upgrades are being properly spent. For example, at one facility where DOE completed upgrades in 1998, DOE officials were denied access from 1999 through 2002. DOE officials told us that after commissioning the MPC&A system at this facility, the department had not developed specific plans for sustaining the U.S.-funded security equipment. Upon returning to the facility in September 2002, DOE officials found that the U.S.-funded security upgrades were in a severe state of disrepair. As a result, DOE has had to spend about $800,000 to correct problems resulting from the site’s inability to properly maintain the U.S.-funded security upgrades. According to DOE officials, these security upgrade replacement efforts are scheduled to be completed in fiscal year 2007. Despite improvements in the Russian economy, some sites may not be financially able to maintain DOE-funded security upgrades. The Russian economy has improved since the collapse of the Soviet Union in 1991 and the financial troubles of the late 1990s. In September 2006, the Deputy Head of Rosatom stated that Russia is no longer in need of U.S. assistance and that it is easier and more convenient for Russia to pay for its own domestic nuclear security projects. However, during our visit to Russia, officials at three of the four civilian nuclear research institutes we visited told us that they are concerned about their sites’ financial ability to maintain U.S.-funded security upgrades after U.S. assistance ends. Some of these sites do not receive regular funds from the Russian government to support the operation and maintenance of their MPC&A systems. 
As a result, Russian site officials told us that, after DOE financial support ends in 2013, they will likely face difficult choices about how to pay for maintenance of the security upgrades DOE has provided. Some U.S.-funded MPC&A equipment is not certified for use at Russian facilities, which means that the Russian government may not pay for its maintenance. Certification is a mandatory Russian regulatory requirement designed to ensure the functionality, safety, and security of specific equipment, products, and technology used in Russian nuclear sites. Certification of U.S.-funded MPC&A equipment must be obtained before it can be legally used at Russian nuclear sites. DOE has historically maintained that certification is a Russian responsibility, and current DOE policy generally precludes funding for certification of equipment. Despite repeated attempts to persuade Russia to fund equipment certification, DOE is paying for some equipment to be certified on a case-by-case basis. According to DOE officials, some sites have equipment or MPC&A systems that are not fully certified for use. For example, at eight sites that house weapons-usable nuclear material, DOE-funded equipment used to make accurate measurements of the type and quantity of nuclear material stored at these sites has not been certified for use. Unless this equipment receives certification in the near future, DOE may be forced to pay for maintenance longer than it intends. Rosatom and DOE also have established a Joint Certification Working Group that is developing a joint plan to certify key equipment items. DOE developed the Equipment Certification and Vendor Support project in 1998 to provide DOE project managers with accurate information on the Russian certification process. DOE spent $23.6 million on this project through the end of fiscal year 2006. There have been delays installing the MOM system at some Rosatom facilities. 
In February 2001, we recommended that DOE develop a system, in cooperation with Russia, to monitor, on a long-term basis, the security systems installed at the Russian sites to ensure that they continue to detect, delay, and respond to attempts to steal nuclear material. In response to this recommendation, DOE developed the MOM system, consisting of off-the-shelf video cameras and other equipment designed to allow Russian officials to ensure that MPC&A systems are properly staffed, personnel are vigilant, and key security procedures are enforced. DOE officials told us in 2002 they anticipated that the MOM system would be an integral part of DOE’s sustainability assistance to Russian sites. However, through the end of fiscal year 2006, only five sites with weapons-usable nuclear material where DOE installed security upgrades had the MOM system. While DOE also plans to install equipment at two additional sites in fiscal year 2007, none of the seven sites where DOE has installed or plans to install MOM systems is controlled by Rosatom. Rosatom has been unwilling to allow DOE to install MOM systems at sites under its control. However, DOE did not anticipate Rosatom’s resistance to the MOM system, and in 2002 the department pre-purchased MOM equipment for use at Rosatom facilities. As a result, DOE has had to pay for storage and upkeep of 367 MOM cameras and other equipment since 2002. DOE officials told us that if Rosatom decides not to allow MOM equipment at its sites, the excess equipment may be used by other DOE programs, such as the Second Line of Defense program, which works with Russia to combat nuclear smuggling by installing radiation detection equipment at key border crossings. Through fiscal year 2006, DOE had spent a total of $20.5 million on the MOM project, including about $270,000 to pay for storage and upkeep of unused MOM equipment that has been in storage since 2002. 
DOE and DOD plan to provide Russia with assistance to sustain security upgrades at nuclear warhead sites, but access difficulties may prevent the agencies from carrying out their plans. Specifically, neither department has reached an agreement with the Russian MOD on access procedures for sustainability visits to 44 permanent warhead storage sites where the agencies are installing security upgrades. Site access is needed to ensure that U.S. funds are being used to help Russia maintain security upgrades at these sites. If DOE and DOD cannot reach an agreement with the Russian MOD on access procedures for sustainability activities at these 44 sites, or develop acceptable alternatives to physical access, the agencies will be unable to determine whether U.S.-funded security upgrades are being properly sustained and may not be able to spend funds allotted for these efforts. DOE and DOD have formed an informal working group to more effectively coordinate their efforts on sustainability of security upgrades at Russian nuclear warhead sites. DOE and DOD have agreed in principle that the seven elements of sustainability outlined in DOE’s Sustainability Guidelines will be applied to the agencies’ efforts to help the Russian MOD sustain security upgrades at nuclear warhead sites. DOE and DOD’s joint plan to address sustainability at Russian nuclear warhead sites uses a three-phased approach: (1) addressing processes and procedural issues, (2) establishing regional training and maintenance centers, and (3) providing site-level assistance, such as warranties and spare parts. First, DOE is assisting the Russian MOD with the development of regulations, operating procedures, and an independent inspections process to help ensure that security systems continue to operate as intended.
Similarly, DOD has supported the development of a personnel reliability program for the 12th Main Directorate of the MOD, and DOE is planning to support a similar program for the Russian Navy and Strategic Rocket Forces. Second, DOE and DOD have funded the construction of regional training and maintenance centers. For example, DOE recently completed construction of the Kola Technical Center, near Murmansk, Russia, which serves as the centralized training and maintenance facility for all Russian MOD sites in the Murmansk region, both naval nuclear fuel sites and nuclear warhead storage sites. The Kola Technical Center was commissioned in fall 2005, and Russian MOD officials told us that the facility will help them prepare to assume full financial responsibility for maintenance and sustainability when U.S. assistance ends. Finally, at the site level, once DOE and DOD come to agreement with the Russian MOD on verification of sustainability assistance, they will assist in sustaining the upgraded security systems with a focus on training and developing the Russian MOD’s capability to maintain the modernized systems. Initially, DOE and DOD will rely on contractor support for repair of failed security systems while the Russian MOD’s capability is being developed, gradually transitioning to full Russian system support. Although DOE and DOD are working closely to provide sustainability assistance at Russian nuclear warhead storage sites, differences exist in the length of time DOE and DOD intend to fund sustainability activities at these sites. Specifically, DOE intends to fund sustainability until 2013, while DOD plans to halt funding in 2011. This mismatch could create difficulties for the Russian MOD, which will have to assume funding for sustainability 2 years earlier at sites where DOD installed security upgrades.
In addition, DOD plans no further support with respect to sustainability for warhead transportation upgrades it has provided to the Russian MOD, because, according to DOD officials, the Russian MOD has not requested assistance for this activity. DOE and DOD have made significant progress in helping Russia and other countries improve security at vulnerable sites housing weapons-usable nuclear material and nuclear warheads. Since our 2003 report, DOE has worked with Russia to resolve many of the access difficulties that we reported, especially at sites within the Rosatom weapons complex. However, in our view, DOE’s current metric for reporting progress on the number of buildings secured by its MPC&A program provides the Congress with a potentially misleading assessment of the security at these facilities. Specifically, DOE should not report to the Congress that buildings with weapons-usable nuclear material in Russia and other countries are “secure” until all DOE risk reduction goals have been achieved, and all planned upgrades at those buildings are completed. Currently, DOE considers buildings to be “secured” after only limited MPC&A upgrades (rapid upgrades) are installed, even when additional comprehensive upgrades are planned. Rapid upgrades do not include the majority of measures DOE uses to address the threat of insider theft at Russian nuclear sites, which DOE considers to be one of its most pressing concerns. DOE provides most upgrades designed to address the insider threat during the comprehensive upgrades phase. 
Further, DOE officials told us that comprehensive upgrades are necessary to achieve all risk reduction goals at buildings with nuclear material, calling into question DOE’s decision to report buildings without such upgrades completed as “secure.” As DOE nears the completion of its security upgrade work in its MPC&A program, the sustainability of U.S.-funded nuclear security upgrades in Russia and other countries has become increasingly important for ensuring that the substantial investment of U.S. funds over the past 15 years is not wasted. DOE and Rosatom have been cooperating to develop a joint sustainability plan for the majority of sites where DOE has installed MPC&A upgrades. We believe this is a critical step in gaining agreement on what remains to be done before DOE transfers full responsibility for sustainability of MPC&A upgrades to Russia in 2013. While DOE uses its Metrics Information Management System to track some measures of progress in its sustainability efforts, DOE officials acknowledged that the current MIMS data do not provide an accurate picture of the department’s progress toward its goal of preparing Russia to take full responsibility for funding the maintenance and sustainability of U.S.-funded upgrades by 2013. Creating a new management information system for sustainability or expanding MIMS to include tracking for all sustainability elements could give DOE managers an improved tool for monitoring the MPC&A program’s progress on sustainability and would aid the department in providing the Congress with a more accurate assessment of the progress made toward DOE’s goal of providing Russia with a sustainable MPC&A system by 2013. To increase the effectiveness of U.S. 
efforts to secure nuclear material and warheads in Russia and other countries, we recommend that the Secretary of Energy, working with the Administrator of NNSA, take the following two actions: revise the metrics used to measure progress in the MPC&A program to better reflect the level of completion of security upgrades at buildings reported as “secure;” and develop a sustainability management system or modify the Metrics Information Management System to more clearly track DOE’s progress in developing a sustainable MPC&A system across all sites where it has installed MPC&A upgrades, including evaluations of progress for each of the seven key elements of sustainability outlined in DOE’s Sustainability Guidelines. DOE generally agreed with our findings and recommendations. DOD had no written comments on our report. DOE and DOD also provided technical comments, which we incorporated, as appropriate. In its comments, DOE provided additional information about the metric it uses to track progress in the MPC&A program, its reasons for not using EVM on fixed-price contracts, and its efforts to work with Rosatom on sustainability issues. DOE agreed that the current metric it uses to track progress in the MPC&A program may be confusing. DOE wrote that it is changing the metric to one that more accurately identifies the level of completion for upgrades. Similarly, DOE officials told us in January 2007 that they were taking steps to modify the progress metric. However, in February 2007, DOE issued its Fiscal Year 2008 Budget Request, which did not include modifications to resolve the confusion that DOE agrees is present in its progress metric.
As a result, DOE’s most recent budget justification continues to present the Congress with an unclear picture of the progress made in improving security at buildings with weapons-usable nuclear material in Russia and other countries because DOE’s progress metric does not recognize that additional upgrades remain to be completed at some buildings that the department lists as being “secure.” As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Secretaries of Energy and Defense; the Administrator, National Nuclear Security Administration; the Director, Office of Management and Budget; and interested congressional committees. We also will make copies available to others upon request. In addition, this report will be made available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. The GAO contact and staff acknowledgments are listed in appendix VI. We performed our review of U.S. efforts to assist Russia and other countries in securing nuclear materials and warheads at the Departments of Energy (DOE), Defense (DOD), and State (State); the National Nuclear Security Administration (NNSA) in Washington, D.C.; the Defense Threat Reduction Agency in Fort Belvoir, Virginia; Oak Ridge National Laboratory in Oak Ridge, Tennessee; Los Alamos National Laboratory in Los Alamos, New Mexico; and Sandia National Laboratories in Albuquerque, New Mexico. We visited Russia to discuss the implementation of U.S. nuclear material and warhead security assistance programs with Russian officials. We also spoke with officials from the U.S.
embassy in Moscow, DOE’s Moscow office, and DOD’s Defense Threat Reduction Office in Moscow. While in Russia we met with officials from the Federal Agency for Atomic Energy of the Russian Federation (Rosatom), Rostekhnadzor (the Russian nuclear regulatory authority), and the Ministry of Defense (MOD), including representatives from the 12th Main Directorate, Navy, and Strategic Rocket Forces. We requested visits to the Institute of Nuclear Materials, Institute of Physics and Power Engineering, Interdepartmental Special Training Center, Russian Methodological Training Center, and All-Russian Scientific Research Institute of Technical Physics (also known as Chelyabinsk-70 and Snezhinsk), but Rosatom denied us access to all facilities under its control, including these. In fact, we were denied access to some Russian sites GAO officials had visited during past reviews of U.S. nonproliferation programs. Rosatom officials told us that because our names were not on the list of 185 individuals provided by DOE for access under the terms of a 2001 access arrangement, we would not be allowed to visit any Rosatom facilities. Rosatom officials did not deny our request for access until we had already arrived in Russia to begin our fieldwork for this review. In addition, the Russian MOD denied our request to visit a naval nuclear fuel facility, Site 49, and a naval nuclear warhead facility near Murmansk, Russia, due to military exercises scheduled near these sites during the time of our visit. We were able to meet our audit objectives by visiting four sites (civilian, educational, and research institutes that are not under Rosatom’s control) where DOE had provided security upgrades through NNSA’s Materials Protection, Control, and Accounting (MPC&A) program: Karpov Institute for Physical Chemistry, Kurchatov Institute, Joint Institute for Nuclear Research, and Moscow State Engineering and Physics Institute.
During our visits to these sites, we discussed the implementation of the MPC&A program, sustainability of U.S.-funded MPC&A upgrades, and the future of DOE cooperation with Russian officials. In addition, we visited a training facility near Murmansk, Russia, built with DOE funds to provide training to Russian MOD personnel in the Murmansk region. To assess the progress DOE has made in helping Russia and other countries secure nuclear material, we had discussions with officials from NNSA’s MPC&A program, DOE’s contractors at Oak Ridge, Los Alamos, and Sandia National Laboratories, and experts from nongovernmental organizations that specialize in nuclear nonproliferation. We reviewed various program documents, including the MPC&A Programmatic Guidelines, MPC&A Program Management Document, project work plans, and the DOE- Rosatom Joint Action Plan. We also analyzed financial information detailing program expenditures, projected costs and schedule estimates, and contract data for expenditures of the MPC&A program through the end of fiscal year 2006. To assess the reliability of these data, we questioned key database officials about data entry access, internal control procedures, and the accuracy and completeness of the data, following up with further questions, as necessary. Although any caveats and limitations to the data were noted in the documentation of our work, we determined that the data we received were sufficiently reliable for the purposes of this report. To assess the progress DOE and DOD have made in assisting Russia with securing nuclear warheads, we reviewed documents and had discussions with officials from NNSA’s MPC&A program, DOE’s contractors at Oak Ridge and Sandia National Laboratories, DOD’s Office of the Undersecretary of Defense for Policy, and the Defense Threat Reduction Agency. We spoke with officials from the Russian MOD and visited a training facility near Murmansk, Russia, built with DOE funds to provide training to Russian MOD personnel. 
We analyzed financial information detailing program expenditures, projected costs and schedule estimates, and contract data from both DOE and DOD through the end of fiscal year 2006. To assess the reliability of these data, we questioned key database officials about data entry access, internal control procedures, and the accuracy and completeness of the data, following up with further questions, as necessary. Although any caveats and limitations to the data were noted in the documentation of our work, we determined that these data were also sufficiently reliable for the purposes of this report. In addition, we reviewed guidance on government contracting, including the Office of Management and Budget (OMB) Circular No. A-11, DOD Earned Value Management (EVM) Implementation Guide, and DOE Order 413.3A. After reviewing this guidance, we requested copies of DOE and DOD’s ongoing contracts valued over $20 million for work to help Russia and other countries secure nuclear material and warheads. To determine how DOE’s large contracts were being managed, we reviewed contract documents and identified a requirement for quarterly reporting in the contracts. We contacted the Contracting Officers identified in the contracts to request information on how the contracts are managed with respect to applicable criteria required by OMB and DOE directives. Additionally, we reviewed DOD’s large contracts for installing security upgrades at Russian nuclear warhead sites and examined documentation from DOD’s contractors, Bechtel National, Inc., and Raytheon Technical Services. After analyzing these contracts and other related documentation, we determined that both of DOD’s contracts reflected an EVM system. DOD provided us with certification documentation for Bechtel and Raytheon’s EVM systems, a requirement called for by federal guidance for all EVM systems.
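The earned value metrics behind these certified EVM systems reduce to two standard variance formulas defined in the DOD EVM Implementation Guide: schedule variance (SV = EV − PV) and cost variance (CV = EV − AC). The sketch below illustrates the arithmetic; the dollar figures are hypothetical and are not taken from the Bechtel or Raytheon contracts.

```python
# Illustrative earned value management (EVM) variance calculations.
#   EV = budgeted value of work actually performed (earned value)
#   PV = budgeted value of work scheduled to date (planned value)
#   AC = actual cost of the work performed
# All dollar figures below are hypothetical examples, not contract data.

def schedule_variance(ev: float, pv: float) -> float:
    """SV = EV - PV; a negative value means work is behind schedule."""
    return ev - pv

def cost_variance(ev: float, ac: float) -> float:
    """CV = EV - AC; a negative value means work cost more than budgeted."""
    return ev - ac

ev, pv, ac = 52.0, 65.0, 54.0  # millions of dollars (hypothetical)
print(f"SV = {schedule_variance(ev, pv):+.1f}M")  # -13.0M: behind schedule
print(f"CV = {cost_variance(ev, ac):+.1f}M")      # -2.0M: over budget
```

A negative schedule variance of $13 million, as in this hypothetical example, would indicate that roughly $13 million of planned work had not been accomplished on schedule, which is how GAO read the shortfall in Raytheon's cost performance reports.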
Since the scope of work within the Bechtel contract was at or near completion, we evaluated only the contract performance management for Raytheon, in order to determine how DOD was executing and managing its large contracts for security upgrades at Russian warhead sites. DOD provided Raytheon’s cost performance reports, which GAO contracting experts assessed for cost and schedule variances in contracted work. After reviewing Raytheon’s cost performance reports, we determined that shortfalls in scheduled work were resulting in a schedule variance equivalent to around $13 million. To assess the efforts undertaken by DOE and DOD to ensure the sustainability and continued use of U.S.-funded security upgrades, we had discussions with officials from NNSA’s MPC&A program; DOE’s contractors at Oak Ridge, Los Alamos, and Sandia National Laboratories; DOD’s Office of the Undersecretary of Defense for Policy; and the Defense Threat Reduction Agency. We analyzed program documents, including DOE’s May 2004 interim Sustainability Guidelines, DOE’s December 2006 final Sustainability Guidelines, DOE-DOD Joint Sustainability Task Force documents, DOE-Rosatom Joint Sustainability Working Group documents, and project work plans. We interviewed program officials responsible for the development of DOE’s Sustainability Guidelines and program managers responsible for implementing them. We also discussed the sustainability of U.S.-funded upgrades with Russian officials at sites we visited. We performed our review from April 2006 to February 2007 in accordance with generally accepted government auditing standards.
The Congress passed the Soviet Threat Reduction Act of 1991 (Pub. L. No. 102-228), popularly referred to as the Nunn-Lugar Act. The Soviet Union dissolved.
DOD began its Cooperative Threat Reduction program to assist Russia and other former Soviet republics in securing and dismantling weapons of mass destruction and their delivery systems.
DOE began working with Russian nuclear facilities to improve security of nuclear materials.
DOE began its MPC&A program.
DOD and Rosatom signed an agreement to build a Fissile Material Storage Facility to store nuclear material from dismantled Russian nuclear warheads.
DOD began to work with the Russian Ministry of Defense to enhance transportation security for nuclear warheads and security at nuclear warhead storage sites.
December-DOE issued the first MPC&A Programmatic Guidelines.
DOE expanded the scope of its efforts with the Russian Navy from protecting naval reactor fuel to helping secure nuclear warheads.
December-DOD completed construction of the Fissile Material Storage Facility.
September-DOE revised its MPC&A Programmatic Guidelines.
June-DOE and the Federal Agency for Atomic Energy of Russia signed a Joint Action Plan.
May-DOE issued interim Sustainability Guidelines (Guidelines for Sustaining Effective Operations of Material Protection, Control, and Accounting Systems in the Russian Federation).
December-DOE revised its MPC&A Programmatic Guidelines.
February 24-At their summit meeting in Bratislava, U.S. President Bush and Russian President Putin issued a joint statement on continued cooperation to prevent nuclear terrorism.
July-Rosatom informed DOD that it had begun loading the Fissile Material Storage Facility.
October-DOE began MPC&A cooperation with the China Atomic Energy Authority by engaging in a Joint Technology Demonstration on integrated nuclear material management at a facility in Beijing.
July 17-Presidents Bush and Putin reaffirmed their commitment to completing security upgrades at nuclear material and warhead sites in Russia by the end of 2008.
October 24-DOE announced completion of security enhancements at all 50 Russian Navy nuclear sites (both nuclear material and warhead sites) where upgrades were planned.
December-DOE issued final Sustainability Guidelines.
December 31-DOE and DOD plan to complete all security upgrades at nuclear material and warhead sites in Russia by this date.
January 1-Date by which the Congress requires DOE to have developed a “fully sustainable” MPC&A system with Russia.
March 8-Soviet Nuclear Weapons: Priorities and Costs Associated with U.S. Dismantlement Assistance, GAO/NSIAD-93-154.
March 8-Nuclear Nonproliferation: Status of U.S.
Efforts to Improve Nuclear Materials Controls in Newly Independent States, GAO/NSIAD/RCED-96-89.
April 13-Weapons of Mass Destruction: Effort to Reduce Russian Arsenals May Cost More, Achieve Less Than Planned, GAO/NSIAD-99-76.
February 28-Nuclear Nonproliferation: Security of Russia’s Nuclear Material Improving; Further Enhancements Needed, GAO-01-312.
March 6-Nuclear Nonproliferation: Limited Progress in Improving Nuclear Material Security in Russia and the Newly Independent States, GAO/RCED/NSIAD-00-82.
March 24-Weapons of Mass Destruction: Additional Russian Cooperation Needed to Facilitate U.S. Efforts to Improve Security at Russian Sites, GAO-03-482.
From fiscal year 1993 through fiscal year 2006, DOE spent a total of $131.5 million on efforts to help countries outside of Russia secure facilities with nuclear material (see fig. 4). Responsibility for managing DOE’s MPC&A efforts in countries outside of Russia has shifted among a number of offices within DOE and NNSA. Responsibility for sustainability of upgrades at sites in the former Soviet Union now rests with the Office of Weapons Material Protection within the Office of International Materials Protection and Cooperation in NNSA. The Office of Materials Consolidation and Civilian Sites within the Office of International Materials Protection and Cooperation in NNSA is responsible for implementing MPC&A efforts outside of the former Soviet Union, such as DOE’s efforts in China and India. DOE provided security upgrades to two buildings at one facility—the Sosny Scientific and Technical Center (now known as the Joint Institute of Power and Nuclear Research-Sosny)—in Belarus. DOE began work at this site in April 1994, and the initial phase of MPC&A upgrades was completed in December 1997. After this, DOE was unable to conduct additional work in the country due to sanctions the United States had placed on Belarus.
However, in May 2003, the Department of State modified its position and allowed a team from DOE to visit Sosny solely to review the status of the MPC&A systems provided with U.S. funds. The DOE team visited the site in June 2003 and noted several security deficiencies that required immediate improvement. Shortly thereafter, DOE received approval from the Department of State to return to Belarus to perform a comprehensive vulnerability assessment at the Sosny site. According to DOE officials, the Department of State’s Nonproliferation and Disarmament Fund allocated $250,000 for design work and $1.6 million for further upgrades in 2003 and 2005, respectively. Since there is currently no government-to-government agreement between the United States and Belarus, the project is being administered via the International Scientific and Technical Center’s Partners Program. However, no funding has been spent yet because the Belarusian government suspended the project due to concerns over sharing information with a foreign entity. In the fall of 2006, Belarus indicated that it was again ready to move forward with the project. DOE sent a team to Sosny in December 2006 and was able to re-establish relations, as well as develop a statement of work for the design of a communications system for the site and a project work plan for material control and accounting. Additional trips are planned for February and April 2007. DOE hopes to complete a second phase of MPC&A upgrades at the site in fiscal year 2008. In total, DOE spent about $3.6 million through the end of fiscal year 2006 to provide MPC&A assistance to Belarus. DOE has a cooperative engagement program with China on issues related to nuclear material security. The purpose of the engagement is to increase awareness of our respective approaches to nuclear security issues, as well as MPC&A methodologies and applicable technologies, and to work cooperatively to improve security in these areas when and where appropriate.
DOE is pursuing this objective through dialogue and technical collaboration with the China Atomic Energy Authority in China’s civilian nuclear sector and is attempting initial engagements with the China Academy of Engineering Physics in China’s defense nuclear sector. DOE is pursuing bilateral cooperation with the Chinese civilian nuclear sector under the Statement of Intent signed with the China Atomic Energy Authority in January 2004 and the DOE-China Atomic Energy Authority Peaceful Uses of Nuclear Technology Agreement. In February 2004, DOE and the China Atomic Energy Authority agreed to conduct a Joint Technology Demonstration on integrated nuclear material management in Beijing. The purpose of this demonstration project was to promote the adoption of modern security practices and technologies at civilian nuclear facilities by demonstrating established physical protection, nuclear material control and accounting, and international safeguards technologies that provide a first line of defense against nuclear material theft, diversion, and sabotage. The Joint Technology Demonstration took place in Beijing in October 2005. Following the completion of the technology demonstration project, DOE is currently discussing ideas for future bilateral work with the China Atomic Energy Authority and the Chinese Institute of Atomic Energy. Through fiscal year 2006, DOE had spent about $4.7 million on MPC&A cooperation with China. DOE provided security upgrades at one facility in Georgia, the Andronikashvili Institute of Nuclear Physics in Tbilisi. Work began at this site in January 1996 and was completed in May 1996, at a cost of about $0.2 million. All fresh and spent nuclear fuel was transferred from the facility to a secure nuclear site in Scotland in April 1998 under a multinational effort known as Operation Auburn Endeavor. DOE’s MPC&A program currently has no ongoing work in Georgia. DOE’s cooperative security engagement program with India is in its initial stages. 
DOE is investigating near-term opportunities to engage India on issues related to nuclear material security with the intent of initiating a cooperative program with India on nuclear security best practices. Potential issues for discussion include the theoretical framework for developing and implementing a design basis threat; the methodology for designing effective physical protection systems; a vulnerability assessment methodology; regulatory infrastructure for material control and accounting, and physical protection; and general nuclear security culture. DOE spent about $100,000 on MPC&A cooperation with India through the end of fiscal year 2006. DOE provided security upgrades at one facility in Latvia, the Latvian Academy of Sciences Nuclear Research Center (also known as the Latvian Institute of Nuclear Physics at Salaspils). Work began at this site in July 1994 and was completed in February 1996. Since fiscal year 1994, DOE has spent about $900,000 to install and maintain security upgrades at this facility. In May 2005, 2.5 kilograms of fresh highly enriched uranium (HEU) fuel were removed from the Salaspils reactor and returned to Russia. According to the Federal Agency for Atomic Energy of the Russian Federation (Rosatom), the HEU fuel will be downblended into low-enriched uranium nuclear fuel for use in civilian nuclear power plants. DOE’s MPC&A program currently has no ongoing work in Latvia. DOE provided security upgrades at one facility in Lithuania, the Ignalina Nuclear Power Plant. Work began at this site in October 1995 and was completed in August 1996. Since fiscal year 1996, DOE has spent about $900,000 to install and maintain security upgrades at this facility. DOE counted one building at this facility as secure in its progress metric for the MPC&A program that tracks the number of buildings with weapons-usable nuclear material secured, even though the facility never possessed such material.
During the course of our review, we brought this to the attention of DOE management, and they agreed to remove the facility from the progress report in DOE’s fiscal year 2008 budget justification document. DOE’s MPC&A program currently has no ongoing work in Lithuania. DOE provided security upgrades to four sites in Kazakhstan: the Institute of Atomic Energy-Kurchatov, the Institute of Nuclear Physics at Alatau, the BN-350 breeder reactor at Aktau, and the Ulba Metallurgical Plant. In total, DOE spent about $45.3 million from fiscal year 1994 through fiscal year 2006 to provide MPC&A assistance to Kazakhstan. The Institute of Atomic Energy-Kurchatov, formerly called Semipalatinsk-21, is a branch of the Kazakhstan National Nuclear Center. Two nuclear research reactors are located at the site. DOE began providing both physical security and material control and accounting upgrades to the site in October 1994, and the site was commissioned in September 1997. The perimeter security system at the site was commissioned in July 1998. DOE plans to continue to assist the Institute of Atomic Energy-Kurchatov with spare parts, extended warranties, and training to sustain its MPC&A systems in fiscal year 2007. The Institute of Nuclear Physics is a branch of the Kazakhstan National Nuclear Center located in the town of Alatau. The site operates a 10-megawatt research reactor used to manufacture radioisotopes as a radiation source for industrial and medical use, among other activities. DOE began work at the site in September 1995 and completed upgrades in October 1998. DOE plans to continue to assist the Institute of Nuclear Physics at Alatau with extended warranties and training to sustain its MPC&A systems in fiscal year 2007. DOE provided upgrades to two buildings at the BN-350 reactor site at Aktau. MPC&A upgrade work began in September 1994 and was completed in December 1998.
In May 2002, HEU fuel was transferred from the BN-350 breeder reactor in Aktau to the Ulba Metallurgical Plant with the assistance of a nongovernmental organization involved in nonproliferation efforts, the Nuclear Threat Initiative. The HEU fuel will be downblended into low-enriched uranium nuclear fuel for use in civilian nuclear reactors. The Ulba Metallurgical Plant contains a low-enriched uranium fuel fabrication facility, among other resources. The fuel fabrication facility produces nuclear fuel pellets with a capacity of 1,000 metric tons per year. Security upgrades work began in September 1994 and was completed in September 1997. DOE plans to continue to assist the Ulba Metallurgical Plant with extended warranties and spare parts to sustain its MPC&A systems in fiscal year 2007. In addition, on November 21, 1994, 581 kilograms of HEU was transferred from the Ulba Metallurgical Plant to the United States in a highly secret project code-named “Sapphire.” The project was carried out with cooperation among the Kazakhstani government, DOE, and DOD. The large stockpile of HEU, reportedly left over from the Soviet Union’s secret Alfa submarine program, had been stored at the Ulba Metallurgical Plant in unsecured and unsafeguarded facilities without electronic means of accounting. Experts estimate the nuclear material was sufficient to make 20-25 nuclear bombs. The HEU was downblended into low-enriched uranium for use in civilian nuclear power plants in the late 1990s. DOE provided MPC&A assistance to four sites in Ukraine: Kharkiv Institute of Physics and Technology, Kiev Institute of Nuclear Research, Sevastopol National Institute of Nuclear Energy and Industry, and South Ukraine Nuclear Power Plant. In total, DOE spent about $37.7 million from fiscal year 1993 through fiscal year 2006 to provide MPC&A assistance to Ukraine, including installation of security upgrades, maintenance of installed MPC&A systems, and training for site personnel.
The Kharkiv Institute of Physics and Technology conducts nuclear fuel cycle research and has important experimental physics facilities including a number of electron and ion accelerators. DOE provided upgrades to one building at this site. Security upgrades work began in May 1995 and was completed in January 1999. DOE plans to continue to assist the Kharkiv Institute of Physics and Technology with extended warranties and training to sustain its MPC&A systems in fiscal year 2007. The Kiev Institute of Nuclear Research was established in 1970 and is operated by the Ukrainian Academy of Sciences. The institute’s primary function is to perform research in low- and medium-energy nuclear physics. Security upgrades work began at one building at this site in December 1993 and was completed in October 1997. DOE plans to continue to assist the Kiev Institute of Nuclear Research with extended warranties and training to sustain its MPC&A systems in fiscal year 2007. The Sevastopol National Institute of Nuclear Energy and Industry’s mission is to support Ukraine’s nuclear power industry by training nuclear power plant personnel. The facility operates a 200-kilowatt, light-water cooled, research reactor. Security upgrades work began at one building at this facility in May 1996 and was completed in January 1999. DOE plans to continue to assist the Sevastopol National Institute of Nuclear Energy and Industry with extended warranties and training to sustain its MPC&A systems in fiscal year 2007. In addition to these facilities, DOE provided MPC&A upgrades to a fourth site that does not possess weapons-usable nuclear material, the South Ukraine Nuclear Power Plant. DOE began security upgrades work at this site in August 1994 and completed its upgrades work in January 1999. DOE counted this facility as secured in its progress metric for the MPC&A program, even though the facility never possessed such material. 
During the course of our review, we brought this to the attention of DOE management, which agreed to remove the facility from the progress report in DOE's fiscal year 2008 budget justification document. According to DOE officials, no further MPC&A assistance is planned at this site. In Uzbekistan, DOE's project goal is to continue to enhance the capabilities and commitment needed to operate and maintain security improvements at two institutes: the Institute of Nuclear Physics in Tashkent and the Foton facility. In total, DOE spent about $4.4 million from fiscal year 1995 through fiscal year 2006 to provide MPC&A assistance to Uzbekistan. Founded in 1956 as part of the Uzbekistan Academy of Sciences, the Institute of Nuclear Physics operates a 10-megawatt research reactor. Often described as the largest facility of its kind in Central Asia, the site has an ambitious program to become the primary nuclear research and isotope production facility for the region. The facility maintains fresh and irradiated nuclear fuel storage facilities to support continued reactor operations. Security upgrades at the site began in June 1995 and were provided by a joint team from the United States, Australia, Sweden, and the United Kingdom. Australia and Sweden agreed to provide assistance in the area of material control and accounting, while the United States and United Kingdom agreed to provide physical protection upgrades. Upgrades were provided in two phases. Phase I upgrades were completed in August 1996. After the attacks of September 11, 2001, DOE began to work with the facility to develop a plan to further improve its security system. Additional upgrades focused on the facility perimeter and included the installation of new fencing and exterior intrusion detection sensors. In addition, the Department of State provided about $0.6 million in fiscal year 2002 through its Nonproliferation and Disarmament Fund to supply cameras and lighting for the facility's perimeter.
All Phase II upgrades were completed in September 2002. A commissioning ceremony was held in October 2002. In 2006, DOE announced the removal of 63 kilograms of HEU in the form of spent nuclear fuel from the facility. The HEU spent fuel was returned to Russia through DOE's Global Threat Reduction Initiative. DOE plans to continue to assist the Institute of Nuclear Physics with extended warranties and training to sustain its MPC&A systems in fiscal year 2007. The Foton facility has a small research reactor containing less than 5 kilograms of HEU. MPC&A upgrades at the site began in January 2005 and were completed in May 2005. Physical security upgrades at the Foton facility focused on the research reactor building and included such things as intrusion detection sensors, improved access controls, and a central alarm station. DOE plans to continue to assist the Foton facility with extended warranties to sustain its MPC&A systems in fiscal year 2007. In addition to DOE's efforts to provide security upgrades at sites with weapons-usable nuclear material and warheads in Russia and other countries, the department implements other crosscutting efforts in support of its MPC&A program, such as assistance for transportation security, equipment for protective forces at nuclear facilities, and efforts to consolidate nuclear material into fewer buildings and sites. According to DOE officials, these efforts support DOE's goal of improving security of vulnerable stockpiles of weapons-usable nuclear material by contributing to the overall security systems at nuclear materials sites in Russia and other countries. As table 4 shows, through the end of fiscal year 2006, DOE spent about $493.9 million on these efforts. DOE's Material Consolidation and Conversion project supports the transfer of HEU from Russian sites where it is no longer needed to secure locations within Russia, where the material is eventually converted to low-enriched uranium.
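The downblending that the conversion effort relies on follows a simple uranium mass balance. As a minimal sketch only: the enrichment levels and quantities below are hypothetical illustrations, not figures from this report or from any DOE project.

```python
# Uranium downblending mass balance (illustrative sketch; all figures are
# hypothetical, not drawn from the report). Blending m_heu kg of HEU at
# enrichment x_heu with blendstock at enrichment x_blend yields LEU at a
# target enrichment x_target. Conserving total mass and U-235 mass gives:
#   m_blend = m_heu * (x_heu - x_target) / (x_target - x_blend)

def blendstock_required(m_heu, x_heu, x_target, x_blend):
    """Kilograms of blendstock needed to dilute m_heu kg of HEU."""
    if not (x_blend < x_target < x_heu):
        raise ValueError("target enrichment must lie between blendstock and HEU")
    return m_heu * (x_heu - x_target) / (x_target - x_blend)

def leu_produced(m_heu, x_heu, x_target, x_blend):
    """Total LEU product mass (HEU plus blendstock)."""
    return m_heu + blendstock_required(m_heu, x_heu, x_target, x_blend)

# Hypothetical example: 100 kg of 90-percent HEU diluted with natural
# uranium (0.711 percent U-235) down to 4.95-percent reactor fuel.
m_blend = blendstock_required(100.0, 0.90, 0.0495, 0.00711)
total_leu = leu_produced(100.0, 0.90, 0.0495, 0.00711)
```

The arithmetic shows why downblending multiplies the material's bulk: on these assumed numbers, 100 kg of weapons-grade HEU becomes roughly 2 metric tons of reactor fuel, which is no longer directly usable in a weapon.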
According to DOE, consolidation and conversion efforts significantly reduce the requirements and costs of securing material. For example, in 2006, DOE announced the completion of a 2-year cooperative effort to remove HEU from the Krylov Shipbuilding Research Institute, a Russian research facility located near St. Petersburg. DOE teams worked with their Russian counterparts to validate the inventory of nuclear material and confirm that it was securely packaged for transport. DOE paid for the HEU to be shipped to another facility in Russia where it will be converted (downblended) to low-enriched uranium, which will eliminate it as a proliferation concern. Through the Material Consolidation and Conversion project, DOE has also supported the secure storage and conversion of Russian-origin HEU that has been returned to Russia from countries such as Bulgaria, the Czech Republic, Latvia, Serbia, and Uzbekistan. DOE reported in July 2006 that more than 8,000 kilograms of HEU had been downblended into low-enriched uranium under the project. Through the end of fiscal year 2006, DOE had spent about $128.8 million on the project. In the aftermath of the September 11, 2001, terrorist attacks, DOE increased funding for its efforts to secure nuclear material during transit. By providing upgraded security for transport and guard railcars, specialized secure trucks and escort vehicles, and secure containers—called overpacks—DOE seeks to reduce the risks of theft and sabotage of nuclear material transported within and between nuclear facilities in Russia. The goal of the Secure Transportation project is to reduce the risk of theft or diversion of material or warheads during transportation operations in Russia by improving security for railcars and trucks, Russian nuclear material and warhead transport infrastructure, and communications interface with response forces.
Through fiscal year 2006, DOE had spent about $88.1 million to improve the transportation security of nuclear material in Russia by providing 76 cargo trucks and 86 escort vehicles, as well as 66 cargo railcars, 25 guard railcars, and 283 security overpacks. This included 54 refurbished cargo railcars, 25 newly manufactured guard railcars, 12 newly manufactured cargo railcars, and approximately 78 cargo trucks and 89 escort trucks to support both on-site and off-site nuclear material shipments. DOE provides a variety of training and technical support to both the Russian Navy and Rosatom to help these entities operate and maintain U.S.-funded security upgrades and MPC&A systems. One of the primary accomplishments of the project was the construction of the Kola Technical Center near Murmansk. The facility was designed and constructed by DOE to be a central training and maintenance center to support naval nuclear fuel and warhead sites in the Murmansk region. DOE completed construction of the Kola Technical Center in June 2005 at a cost of $24 million. We visited the facility during our trip to Russia. Russian officials told us that the Kola Technical Center is critical to help the Russian MOD transition to full financial responsibility for sustainability after U.S. funding ends. In addition, DOE provides support to Rosatom's regional training facilities through the Rosatom Training and Technical Support Infrastructure project. These facilities, such as the Interdepartmental Special Training Center and the Russian Methodological and Training Center, seek to train specialists and guard forces to safeguard materials at Russian nuclear sites. Additionally, these centers seek to assist Rosatom by providing effective and sustainable training and technical support infrastructures. To date, DOE has spent $42.5 million on the establishment of these training and technical support centers.
The Russian Federation Inspection Implementation project seeks to enhance nuclear material inspections by establishing a sustainable infrastructure with sufficient resources to enforce MPC&A regulations through federal and industry oversight. Under this project, DOE provides inspection support to Rostekhnadzor, Rosatom, and other Russian ministries and agencies. The project enhances MPC&A nuclear material inspections at the ministerial, agency, and site levels by providing comprehensive training, inspection, and technical assistance, as well as sufficient information technology to aid inspectors in conducting systematic inspections. For example, DOE assists Russian organizations in developing a systematic inspection approach that ensures the MPC&A objectives are met and assists organizations in defining the inspection program by benchmarking proposed inspection methodologies against U.S. and other inspection approaches. Through fiscal year 2006, DOE has sponsored 83 inspections by Rosatom and Rostekhnadzor, and 980 Russian personnel have attended inspection courses. DOE's goal for the project is to maintain a cadre of about 125 trained inspectors. DOE had spent about $43.1 million on this project through the end of fiscal year 2006. The objective of the Protective Force Assistance project is to ensure that a sufficient number of organized, equipped, and trained response forces are present and able to protect against threats to highly desirable nuclear material at Russian and Ukrainian sites and during transit. The project includes efforts in Russia and Ukraine, although the bulk of the effort and money is spent in Russia. As of fiscal year 2006, DOE had spent about $26.7 million to purchase a variety of equipment, such as bulletproof vests, helmets, response vehicles, and cold-weather uniforms, for use by the forces that protect sites that store weapons-usable nuclear material in Russia.
As of fiscal year 2006, DOE had spent about $3.4 million to purchase the same type of equipment for Ukrainian sites. The Federal Information System (FIS) is a computerized management information system designed to track the location and movement of nuclear material between organizations throughout Russia. The FIS provides information on the quantity of nuclear material located at facilities that report to Rosatom. The system is centralized and automated to ensure that information can be received, tracked, and monitored by Rosatom. The development of the FIS is important to the MPC&A program because, prior to its development, Russian nuclear facilities generally used paper-based systems to track nuclear material inventories. The FIS will allow the Russian government to maintain an accurate and complete inventory of its weapons-usable nuclear material. As of fiscal year 2006, DOE reported that 21 organizations and facilities throughout Russia report to the FIS. Through the end of fiscal year 2006, DOE had spent about $29.1 million to develop the FIS. The purpose of the Regulatory Development project is to assist Russian regulatory and operating agencies and services in developing a sustainable MPC&A regulatory system for civilian nuclear materials site security and to provide assistance to regulatory agencies in Ukraine and Kazakhstan. The regulatory framework establishes legal requirements for MPC&A activities for relevant ministries, agencies, services, operating organizations, and facilities. DOE works with Rosatom, Rostekhnadzor (Russia's civilian nuclear regulatory authority), and other agencies to develop consistent MPC&A requirements across ministries, operating organizations, and facilities. In doing so, DOE aims to create incentives for effective MPC&A procedures and sanctions for noncompliance with regulations in order to foster a strong MPC&A culture and help sustain U.S.-funded security upgrades.
Through the end of fiscal year 2006, the project had achieved enactment of 67 regulations, which is 35 percent of the total planned. In addition, DOE has worked with the Russian MOD to develop a comprehensive regulatory base that ensures MPC&A practices are implemented consistently throughout all branches and services of the Russian MOD. DOE spent about $27 million through fiscal year 2006 on its regulatory development projects. The MPC&A Education project supports efforts in Russia to train existing and future MPC&A experts. The project consists of two educational degree programs at the Moscow Engineering Physics Institute and one degree program at Tomsk Polytechnic University. The first degree program is the MPC&A Graduate Program, available only at the Moscow Engineering Physics Institute. In addition, DOE worked with both the Moscow Engineering Physics Institute and Tomsk Polytechnic University to develop an undergraduate engineering program focusing on the more technical, hands-on aspects of nuclear security. For each of these degree programs, DOE works with the two universities to develop curriculum; identify and acquire training aids; develop and publish textbooks; and strengthen instructor skills. In addition, DOE works with the Monterey Institute of International Studies to support the instruction of nontechnical nonproliferation courses at universities and high schools located outside of Moscow. Through the end of fiscal year 2006, DOE had spent about $13.4 million on the project. The Material Control and Accounting Measurements project provides support to Russia for developing a national system of reference materials (standards), nuclear material measurement methods, instruments, and infrastructure to support the accurate measurement and accounting of weapons-usable nuclear material at Russian facilities.
Reference materials, measurement methods, and instruments are needed to accurately measure the quantity and isotopic composition of nuclear material during inventories and transfers for input into accountability databases. Accurate material control and accounting measurements are key components of any MPC&A system. Through fiscal year 2006, DOE had spent about $10.8 million under this project and has purchased and distributed transportable equipment that allows for the testing of uranium and plutonium. The MPC&A Security Culture project supports the overall MPC&A goal of helping Russia enhance its capabilities and strengthen its commitment to operating and maintaining improved nuclear security. The project does so by fostering the development of training centers and by developing an outreach strategy to enhance partner countries' awareness and understanding of the benefits of MPC&A, that is, an MPC&A security "culture." The main objective of this project is to establish an infrastructure that emphasizes the importance of MPC&A and to increase the commitment throughout Russia to operate and maintain MPC&A systems with minimal U.S. support by reinforcing the attitudes and beliefs required to instill a strong MPC&A culture. Accomplishments under this project include training 1,800 staff in security culture and initiating a pilot security culture coordinator project at nine sites. Through the end of fiscal year 2006, DOE had spent about $1.3 million on the MPC&A Security Culture project. In addition to its efforts to improve the security culture at Russian nuclear sites, DOE recently conducted a series of workshops for Russian officials on MPC&A best practices at U.S. nuclear sites. The workshops included presentations by U.S. MPC&A experts. In conducting this workshop series, DOE intends to further enhance the security culture at Russian sites by educating Russian site officials on the methods used at U.S.
facilities, so that these best practices can be applied at Russian sites. The MPC&A Taxation and Customs project began in 1999 to meet a congressional mandate that U.S. nuclear safety and security programs not pay taxes in Russia. The MPC&A program must obtain a certified tax exemption when providing technical equipment and services. The Taxation and Customs project helps DOE project teams understand taxation and customs issues and ensures compliance with Russian laws. The project stays abreast of Russian taxation and customs legislation, as well as guidance on bureaucracy and requirements for tax exemption, by holding workshops for Russian sites; tracking the tax-exemption process; and maintaining a taxation Web site for DOE project teams. Through the end of fiscal year 2006, DOE had spent about $0.5 million on the project. In addition to the individual named above, R. Stockton Butler, Jeffery Hartnett, Lisa Henson, and Jim Shafer made significant contributions to this report. Other assistance was provided by John Delicath, Jennifer Echard, Brandon Haller, Gregory Marchand, Keith Rhodes, and Karen Richey.

Nuclear Nonproliferation: Better Management Controls Needed for Some DOE Projects in Russia and Other Countries. GAO-05-828. Washington, D.C.: August 29, 2005.

Cooperative Threat Reduction: DOD Has Improved Its Management and Internal Controls, but Challenges Remain. GAO-05-329. Washington, D.C.: June 30, 2005.

Weapons of Mass Destruction: Nonproliferation Programs Need Better Integration. GAO-05-157. Washington, D.C.: January 28, 2005.

Weapons of Mass Destruction: Additional Russian Cooperation Needed to Facilitate U.S. Efforts to Improve Security at Russian Sites. GAO-03-482. Washington, D.C.: March 24, 2003.

Nuclear Nonproliferation: Security of Russia's Nuclear Material Improving; Further Enhancements Needed. GAO-01-312. Washington, D.C.: February 28, 2001.
Nuclear Nonproliferation: Limited Progress in Improving Nuclear Material Security in Russia and the Newly Independent States. RCED/NSIAD-00-82. Washington, D.C.: March 6, 2000.

Weapons of Mass Destruction: Effort to Reduce Russian Arsenals May Cost More, Achieve Less Than Planned. NSIAD-99-76. Washington, D.C.: April 13, 1999.

Nuclear Nonproliferation: Status of U.S. Efforts to Improve Nuclear Materials Controls in Newly Independent States. NSIAD/RCED-96-89. Washington, D.C.: March 8, 1996.

Soviet Nuclear Weapons: Priorities and Costs Associated with U.S. Dismantlement Assistance. NSIAD-93-154. Washington, D.C.: March 8, 1993.
|
Safeguarding nuclear warheads and materials that can be used to make nuclear weapons is a primary national security concern of the United States. Since 1993, the Departments of Energy (DOE) and Defense (DOD) have worked to improve security at sites housing weapons-usable nuclear material and warheads in Russia and other countries. In 1995, DOE established the Materials Protection, Control, and Accounting (MPC&A) program to implement these efforts. GAO examined (1) the progress DOE has made in improving security at nuclear material sites in Russia and other countries, (2) the progress DOE and DOD have made in improving security at Russian nuclear warhead sites, and (3) the efforts DOE and DOD have undertaken to ensure the continued effective use of U.S.-funded security upgrades. To address these objectives, among other things, GAO analyzed agency documents, conducted interviews with key program officials, and visited four Russian nuclear sites. Through fiscal year 2006, DOE and DOD spent over $2.2 billion to provide security upgrades and other assistance at sites in Russia and other countries that house weapons-usable nuclear materials and warheads. With regard to securing nuclear material, DOE reports having "secured" 175 buildings and plans to improve security at 35 additional buildings by the end of 2008. However, DOE's reported total of buildings "secured" does not recognize that additional upgrades remain to be completed at some buildings, because DOE considers a building "secured" after it has received only limited MPC&A upgrades, even when additional comprehensive upgrades are planned. Further, DOE and Russia have developed a Joint Action Plan that covers 20 sites and details the remaining work to be accomplished by 2008. However, the plan does not include two sites containing many buildings with vast amounts of nuclear material where Russia has denied DOE access.
DOE and DOD report having improved security at 62 Russian warhead sites and plan to help secure 35 additional sites by the end of 2008. The departments have improved their coordination mechanisms since GAO's 2003 report, which found that the agencies had inconsistent policies for installing site security upgrades at Russian warhead sites. Additionally, DOE and DOD are using similar approaches to manage large security upgrade contracts at warhead sites. DOD has used earned value management (EVM), which at early stages can identify cost and schedule shortfalls. DOE has not used EVM on its fixed-price contracts but, during the course of GAO's review, augmented its contract oversight to increase reporting frequency, which DOE officials consider a comparable alternative to EVM. DOE has developed broad guidelines to direct its efforts to help ensure that Russia will be able to sustain (operate and maintain) U.S.-funded security systems at its nuclear material and warhead sites after U.S. assistance ends and is working with Russia to develop a joint sustainability plan. However, DOE lacks a management information system to track the progress made toward its goal of providing Russia with a sustainable MPC&A system by 2013. DOE's and DOD's abilities to ensure the sustainability of U.S.-funded security upgrades may be hampered by access difficulties, funding concerns, and other issues. Finally, DOE and DOD plan to provide Russia with assistance to sustain security upgrades at nuclear warhead sites but have not reached agreement with Russia on access procedures for sustainability visits to 44 sites. As a result, the agencies may be unable to determine whether U.S.-funded security upgrades are being properly sustained.
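The earned value management technique mentioned above rests on a small set of standard index calculations. The sketch below is illustrative only; the contract figures are hypothetical and are not drawn from any DOD or DOE contract discussed in this report.

```python
# Minimal earned value management (EVM) sketch (hypothetical figures).
# EVM compares three quantities at a status date:
#   PV (planned value) - budgeted cost of work scheduled to date
#   EV (earned value)  - budgeted cost of work actually performed
#   AC (actual cost)   - actual cost of the work performed
# Index values below 1.0 flag cost (CPI) or schedule (SPI) shortfalls early,
# which is how EVM can surface problems before a contract nears completion.

def evm_indices(pv, ev, ac):
    return {
        "CPI": ev / ac,   # cost performance index
        "SPI": ev / pv,   # schedule performance index
        "CV": ev - ac,    # cost variance (negative = over budget)
        "SV": ev - pv,    # schedule variance (negative = behind schedule)
    }

def estimate_at_completion(bac, cpi):
    """Projected total cost if the current cost efficiency continues."""
    return bac / cpi

# Hypothetical status: $10M total budget (BAC), $4M of work planned to date,
# $3M of work actually earned, at an actual cost of $3.6M.
status = evm_indices(pv=4.0, ev=3.0, ac=3.6)
eac = estimate_at_completion(bac=10.0, cpi=status["CPI"])
```

On these assumed numbers, CPI is about 0.83 and SPI is 0.75, so the notional contract is both over budget and behind schedule, and the projected cost at completion grows from $10 million to about $12 million.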
|
JCIDS; the Planning, Programming, Budgeting and Execution process; and the Defense Acquisition System broadly make up DOD’s overall defense acquisition management framework. JCIDS was implemented in 2003 to guide future defense programs from a joint capabilities perspective. JCIDS is one of the first steps in DOD’s acquisition processes; JCIDS participants work to identify and determine whether to validate the need for capabilities proposed by the services, the defense agencies, and the combatant commands. Once a requirement is validated, the services rely on the DOD’s Planning, Programming, Budgeting and Execution process, through which DOD allocates financial resources across the department—including the services—to identify funding for validated capability solutions. DOD then manages the development and procurement of proposed capabilities through the Defense Acquisition System. DOD implemented the JCIDS process in 2003 in an effort to assist the JROC by changing DOD’s requirements validation process from a service-specific perspective to a joint capabilities perspective. The JROC, which is chaired by the Vice Chairman of the Joint Chiefs of Staff, consists of a general or admiral from each of the military services and may include combatant commanders or deputy commanders when directed by the JROC Chairman. The JROC is charged with assisting the Chairman of the Joint Chiefs of Staff with a number of tasks, including (1) identifying, assessing, and approving joint military requirements to meet the national military strategy; (2) establishing and assigning priority levels for joint military requirements; and (3) reviewing the estimated level of resources required to fulfill each joint military requirement and ensuring that the resource level is consistent with the requirement’s priority, among others. 
The JROC also assists acquisition officials in identifying alternatives to any acquisition programs that meet joint military requirements for the purposes of certain statutory provisions, addressing matters such as cost growth. In 2009, the Weapon Systems Acquisition Reform Act expanded the role of the JROC by directing it to assist the Chairman of the Joint Chiefs of Staff in ensuring that trade-offs among cost, schedule, and performance objectives are considered for joint military requirements and establishing an objective for the overall period of time within which an initial operational capability should be delivered to meet each joint military requirement. The JROC reviews requirements for programs designated as JROC-interest based on their expected cost and complexity and, under guidance in effect through December 2011, also reviewed programs at the request of certain senior DOD officials. Within JCIDS, the JROC is supported in its duty to review and validate joint capability needs by the Joint Capabilities Board and six Functional Capabilities Boards. The Joint Capabilities Board is chaired by the Director of the Joint Staff's Directorate for Force Structure, Resources, and Assessment, and each Functional Capabilities Board is chaired by a general/flag officer or civilian equivalent. The Joint Capabilities Board reviews capability documents before they are passed on to the JROC for its review and also serves as the validation authority for certain programs that do not reach JROC-interest thresholds, although the JROC may review any JCIDS document or other issues requiring joint resolution. Functional Capabilities Boards are responsible for reviewing proposed requirements specific to joint capability areas, such as protection, logistics, or battlespace awareness.
In JCIDS, the JROC and its supporting organizations review requirements documents related to capability gaps and the major defense acquisition programs intended to fill those gaps prior to key acquisition milestones. These requirements documents—initial capabilities documents, capability development documents, and capability production documents for materiel solutions and change recommendations for nonmateriel solutions—are submitted into the JCIDS process by capability sponsors. The initial capabilities document identifies a specific capability gap, or set of gaps, and if a materiel solution is required, helps inform the initial stages of the acquisition process, which include an analysis of the alternative solutions to fulfilling the capability need and the selection of a preferred system concept. When the technology development phase of the acquisition process is complete, a program sponsor completes a capability development document that includes more detail on the desired capabilities of the proposed system and defines the system's key performance parameters or attributes against which the delivered increment of capability will be measured. Finally, the sponsor prepares a capability production document to describe the actual performance of the system that will deliver the required capability. Figure 1 depicts how JCIDS reviews align with the acquisition process. The House Armed Services Committee and a panel established by the committee have discussed long-standing challenges with the JCIDS process and the JROC's fulfillment of its statutory responsibilities. The House Armed Services Committee, in a report accompanying a bill for the National Defense Authorization Act for Fiscal Year 2008, described a legislative provision that would allow for joint decision making as opposed to service-centric budget considerations by incorporating clear priorities and budget guidance into the JROC process.
In 2009, the House Armed Services Committee established a panel on defense acquisition reform because of a sense that the acquisition system was not responsive enough for today’s needs, not rigorous enough in protecting taxpayers, and not disciplined enough in the acquisition of weapon systems for tomorrow’s wars. The panel received testimony that the Joint Staff lacked some of the analytical expertise necessary to ensure that the JCIDS process rigorously vets proposed requirements. Additionally, since 2008 we have reported on these challenges. We reported in 2008 that the JCIDS process was not effective in prioritizing capability gaps, and we noted that capability needs continued to be proposed and defined by the services with little involvement from the joint community. We recommended that the Secretary of Defense direct the Chairman of the Joint Chiefs of Staff to develop an analytic approach within JCIDS to better prioritize and balance the capability needs of the military services, combatant commands, and other defense components. DOD partially agreed with our recommendation but did not fully implement it, and prioritization remains service driven. More recently, in June 2011 we reported that the JROC did not always consider trade-offs among cost, schedule, and performance objectives; prioritize requirements; consider redundancies across proposed programs; or prioritize and analyze capability gaps in a consistent manner. We recommended that the JROC require higher-quality resource estimates from requirements sponsors to ensure that best practices are being followed, provide a sound basis to ensure that trade-offs are considered, prioritize requirements across proposed programs, and address potential redundancies during requirements reviews, among other steps. DOD partially agreed with our recommendations and commented that improvements to the quality of resource estimates would be addressed in upcoming changes to the JCIDS process. 
In May 2011, we also reported that combatant command officials raised concerns that JCIDS focuses more on long-term service-centric capability gaps than on combatant commands’ more immediate and largely joint gaps. JCIDS was designed as a deliberate process to meet longer-term joint needs. To address urgent needs, DOD established a separate process—the joint urgent operational needs process—in 2005. The joint urgent operational needs process was intended to respond to needs associated with combat operations in Afghanistan and Iraq and the War on Terror. The revised JCIDS guidance canceled separate guidance for joint urgent operational needs and incorporates and describes the joint urgent operational needs process. Urgent operational needs, as defined by the new JCIDS guidance, are capability requirements needed for ongoing or anticipated contingency operations that if left unfulfilled could potentially result in loss of life or critical mission failure. For this report, we focus on requirements that have not been identified as urgent and instead follow the deliberative JCIDS process. The Joint Staff is undertaking efforts to improve the ability to prioritize capability needs from a joint perspective through JCIDS and to align those needs with available budget resources. However, implementation processes for JCIDS’s new approach to managing requirements and considering affordability are still evolving and have not been fully developed and clearly documented. Determining priorities among joint requirements has been a responsibility of the JROC since Congress amended section 181 of Title 10 of the U.S. Code in 2008 to require the JROC to assist in establishing and assigning priority levels for joint military requirements and to help ensure that resource levels associated with those requirements are consistent with the level of priority. DOD officials acknowledge that JCIDS has been ineffective in helping the JROC carry out these responsibilities. 
We have previously reported that JCIDS’s ability to align resources to balance competing needs could be improved if it had an analytic approach that provided a means to review and validate proposals to ensure that the most important capability needs of the department are being addressed. We further said that such an approach should establish criteria and measures for identifying capability gaps and determining the relative importance of capability needs. Finally, the approach should result in measurable progress in allocating resources in order to eliminate redundancies, gain efficiencies, and achieve a balanced mix of executable programs. DOD officials told us that downward pressure on the defense budget has led the Joint Staff to change how the JCIDS process is used to strengthen its ability to support JROC members in making trade-off decisions among requirements and balancing risks across the force within expected resources. In fall 2011, according to officials, the incoming Vice Chairman of the Joint Chiefs of Staff, as the Chairman of the JROC, began to make changes in the JCIDS processes to focus on what capabilities currently exist and weigh the benefits of investing in new capabilities with their estimated costs early in the review process. The Joint Staff issued draft guidance in October 2011 and began implementation based on the draft guidance. The Joint Staff issued the final guidance in January 2012. Descriptions of these changes and their implementation follow: New capabilities will be considered as part of a “capability portfolio approach.” Under the portfolio approach, officials stated that JROC members are to ensure that proposed investments in capabilities address joint needs or they will not be validated to proceed to the acquisition process. 
In addition to validating capability proposals, according to Joint Staff officials, the JROC has begun examining how the services' existing programs can support joint operations to reduce duplication of capabilities. As of December 2011, according to Joint Staff officials, the last five closed meetings of the JROC began conversations about how to meet requirements by considering available capabilities and the costs and benefits of proposed programs. As a result, according to Joint Staff officials, at least five classified programs have been reviewed and altered by comparing redundant capabilities, reducing capacity, adjusting delivery schedules, or directing follow-on analysis before moving programs forward. To support JROC decision making, officials reported that Functional Capabilities Boards will be tasked with examining requirements, their associated capability gaps, and proposed solutions within their capability portfolios, and independently assessing how a proposed capability fits into its corresponding joint capability area. The Functional Capabilities Boards previously had responsibility for identifying, assessing, and prioritizing (if required) joint capability needs proposals within assigned joint capability areas but, according to officials, have not always carried out these responsibilities. Previously, Functional Capabilities Boards acted primarily as technical reviewers of requirements documents, and program sponsors briefed the JROC on the attributes of the program. However, the new guidance does not specify how the independent assessment is to be conducted, and it is too soon to tell how the Functional Capabilities Boards will respond to the new requirement. Officials reported that Functional Capabilities Boards are expected to develop methodologies on a case-by-case basis. DOD officials said that the analytic approach and uses of analytic information will evolve over time.
DOD has previously attempted to manage capabilities departmentwide through a portfolio approach, but has never fully implemented the approach. In 2006, DOD established an effort to manage resources across capability areas by establishing capability portfolio managers to enable DOD to develop and manage capabilities across the department rather than by military service or individual program, and by doing so, to improve the interoperability of future capabilities, minimize capability redundancies and gaps, and maximize capability effectiveness. However, as we reported in 2008, capability portfolio managers make recommendations on capability development issues within their portfolios but do not have independent decision-making authority. In 2011, Joint Staff officials told us that they were unaware of capability portfolio managers' active involvement in the JCIDS process.

Attendance at JROC meetings will be limited to key decision makers and stakeholders. Beginning in October 2011, the JROC Chairman began to limit attendance at JROC meetings to facilitate candid discussion among senior leaders about priorities for joint requirements and alternative solutions. Joint Staff officials told us that previously, meetings were open to a broad range of interested parties and service sponsors provided briefings on their proposals for new capabilities. Under the new approach being implemented, a representative of one of the Functional Capabilities Boards will provide a briefing on the proposal to the JROC. The representatives would then present the board's independent assessment of benefits, costs, and risk for the JROC to discuss and decide upon. According to JCIDS officials, JROC members are expected to make decisions from the perspective of the joint force and avoid taking a service-centric approach.

Affordability of proposals will be a primary factor in validation decisions.
JROC members have always been expected to consider the resource implications of validation decisions, but officials stated that until recently, these considerations have not been a focus because capabilities were not competing with each other for funding. According to officials, the Functional Capabilities Boards have been directed to take similar steps to ensure that capability proposals not only meet technical requirements but also represent the most efficient alternative for providing a capability within the joint capability area without creating duplication or overlap. According to JCIDS officials, the JROC is also reconsidering previous validation decisions and asking for changes to proposals to minimize costs. We reviewed recent Functional Capabilities Board briefings to the JROC, which provided information on how needs might be met with current capabilities and alternatives that might meet needs while minimizing costs. However, it is too soon to assess how the JROC will consider affordability of programs when making validation decisions. The Vice Chairman of the Joint Chiefs of Staff has sought some policy changes and, according to officials, provided other direction to implement changes in the JCIDS process, but the new approach to managing requirements and considering affordability is still evolving and has not been fully developed and clearly documented. According to Joint Staff officials, the dynamic fiscal environment and the evolutionary method being used to develop the new approach and implementation processes make it important that decision makers maintain flexibility in decision making. We believe that the new approach has promise in positioning the JROC to more accurately identify capability gaps and trade-offs, but it has not been fully developed to include steps to ensure that the approach is fully implemented, that the intent is fully communicated to all stakeholders involved, and that the results of the new approach will be measurable. 
We have previously reported that key practices for results-oriented management involve leadership from top officials as well as the involvement of stakeholders at all levels throughout a period of transition. We have also reported that in order to demonstrate a successful results-oriented framework, officials must include clearly defined measures to assess intended outcomes. As shown in table 1, key practices for supporting change should include, among other actions, obtaining and sustaining support from senior leadership to facilitate the transformation, establishing clear lines of communication between all affected parties, and demonstrating value and credibility of new processes through the use of metrics. According to Joint Staff officials, the Chairman of the JROC has been a driving force in positioning the JROC to take on the responsibility of aligning needs and balancing risk and resources, which fulfills one of the key steps for results-oriented management, but the approach is still new and officials have not completed all of the steps that facilitate institutional acceptance and implementation of the new approach. The Joint Staff has begun the process of change by articulating a clear rationale for change—that the JROC can more effectively represent the warfighters’ requirements and make strategic trade-off decisions as budgets stay flat or decrease by taking a more active role in shaping an affordable joint force. However, best practices for managing a results-oriented change state that goals and procedures should be communicated to stakeholders throughout the organization so that they understand how they should implement the new approach and how the organization will measure progress. 
The Joint Staff issued guidance that outlines new procedures intended to establish an approach for prioritizing capabilities from a joint perspective and to increase the timeliness of JCIDS reviews by categorizing proposals according to the level of urgency of the need and streamlining procedures for urgent needs. However, the guidance does not clearly outline criteria and measures for demonstrating progress toward meeting the goal of aligning needs with available resources or clearly communicate the goals and the analytic approach envisioned to support JROC decision making. Further, the guidance does not describe how the proposed change will affect the services, combatant commands, and other stakeholders. Finally, the JCIDS guidance does not establish criteria and measures for demonstrating progress toward the goal of creating a balanced portfolio of programs that takes into account needs, risks, and available resources, nor do other documents provided to us by DOD. Measures such as the proportion of requirements that address joint priority needs versus service-specific needs, the savings obtained through the elimination of redundant capabilities, and the comparison of estimated costs of a proposed program with the actual costs of operating the program over its life cycle could be helpful in assessing whether the process is balancing requirements with available resources or whether further adjustments to the process are needed. Considering new capabilities across the department in the context of joint capability areas can help DOD begin to identify priorities for future investment. However, unless the Joint Staff takes steps to define and institutionalize the new approach by adhering to the key principles of results-oriented management, it is not clear whether the current momentum in implementing an analytic approach through JCIDS will be sustained. 
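Measures of the kind suggested above would be straightforward to compute once the underlying data are tracked. The sketch below is a purely hypothetical illustration of two of them, the joint-versus-service-specific share of validated requirements and the comparison of estimated with actual life cycle costs; all program names, field names, and figures are invented and are not drawn from DOD data.

```python
# Hypothetical illustration of two progress measures suggested above:
# (1) proportion of validated requirements addressing joint (vs. service-
#     specific) needs, and (2) estimated vs. actual life cycle cost growth.
# All data and field names are invented for illustration.

requirements = [
    {"name": "Program A", "joint": True,  "est_cost": 4.0, "actual_cost": 5.2},
    {"name": "Program B", "joint": False, "est_cost": 1.5, "actual_cost": 1.4},
    {"name": "Program C", "joint": True,  "est_cost": 2.0, "actual_cost": 2.6},
]

# Share of validated requirements that address joint-priority needs.
joint_share = sum(r["joint"] for r in requirements) / len(requirements)
print(f"Joint-priority share of validated requirements: {joint_share:.0%}")

# Life cycle cost growth relative to the original estimate, per program.
for r in requirements:
    growth = (r["actual_cost"] - r["est_cost"]) / r["est_cost"]
    print(f'{r["name"]}: life cycle cost growth {growth:+.0%}')
```

Tracked over successive validation cycles, trends in figures like these could indicate whether the process is moving toward a balanced portfolio or whether further adjustments are needed.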
Even though sustainment costs make up a significant portion of the total ownership costs of a weapon system, the JROC has not always had complete information on such costs when validating documentation used in the decision to initiate program development. "Sustainment" as a category represents a range of activities intended to ensure the availability of weapon systems and to support their operations, including some logistics and personnel services. During the identification of capability gaps and consideration of selected alternatives, it is difficult for the sponsors to provide detailed information on program capabilities and cost estimates. As a major defense acquisition program moves toward the development stage, JCIDS requires that more complete and accurate sustainment information be presented in capability development documents. According to DOD officials, decision makers need more accurate cost information to assess whether the benefit of a proposed capability justifies the cost of sustaining the capability over its life. JCIDS guidance requires that sponsors of potential major defense acquisition programs include sustainment information in capability development documents, which detail proposed solutions to fulfill capability needs. The JROC has generally relied on sponsor-provided assessments of sustainment information in the capability development documents to make its validation decisions, but these documents have not always included all the information suggested in JCIDS guidance or sufficient detail to enable the JROC to assess the quality of the information. A DOD manual regarding the development of sustainment information suggests that when sustainment requirements and underlying assumptions are not clearly documented, subsequent decisions about the project may be based on incorrect assumptions.
Prior GAO work suggests that gaps in joint warfighting capabilities and proposals to fulfill the gaps should be clearly identified to decision-making bodies, such as the JROC, to inform deliberations. Further, information should be complete so those making the important decisions may do so as effectively as possible. JCIDS guidance requires sponsors of major defense acquisition programs to address sustainment based on four metrics—materiel availability, operational availability, reliability, and ownership cost (renamed operation and support cost in January 2012). The guidance includes a series of review criteria that provide additional information on each metric. For example, JCIDS guidance lists as review criteria for the materiel availability metric whether there is a clear definition and accounting for the intended service life of the program, an identification of planned downtime, and a comparison of downtime value with experiences of other analogous systems, among other criteria. Table 2 outlines examples of key review criteria within each of the four sustainment metrics under the guidance in effect through December 2011. Program sponsors provide initial information on the sustainment metrics for proposed capability solutions when they submit a capability development document, one of three capability documents the JROC considers in its review and potential validation of capability proposals. Officials from both the Joint Staff and the Office of the Under Secretary of Defense for Acquisition, Technology and Logistics review the capability development documents and, according to officials, verify that all required sustainment elements have been included before the documents are validated by the JROC.
Officials from the Office of the Under Secretary of Defense for Acquisition, Technology and Logistics said they work with program sponsors to ensure that life cycle sustainment planning and costs are as accurate as possible. Officials also provide their independent assessments of the quality of the cost estimates to the JROC. Officials from the Office of the Under Secretary of Defense for Acquisition, Technology and Logistics and the Joint Staff told us that they consider reported sustainment information important to a program's development and review all reported information. The JCIDS Manual notes that listed criteria, information, and activities cannot necessarily be applied to all systems. Sponsors have a degree of latitude in determining which items are applicable for their specific concept, technology, system, or combination of these. For example, a program sponsor for a major defense acquisition program is required to report a measure for operational availability, but would not necessarily have to report on the respective criteria, such as addressing downtime associated with failure, including recovery time or movement of maintenance teams to the work site. Because the guidance does not specifically require program sponsors to report on the individual criteria, they generally include some, but not all, of the individual criteria. Our analysis of six capability development documents found that all of the documents provided information on all of the required sustainment metrics. However, we found that the completeness of information reported for all of the metrics' key criteria varied. Specifically, none of the documents included complete information for each of the four sustainment metrics' review criteria elements. In addition, each of the documents had some common omissions; for example, none of the six capability documents we reviewed included information on all nine of the ownership cost metric's criteria elements.
Further, several of these documents included information on only one criteria element for a single metric, and none reported information on all of the elements for any of the metrics. Finally, when information on the metrics' key criteria was provided, the level of detail varied among the documents. For example, for some criteria, some documents provided a paragraph of supporting information and analysis whereas others provided single-sentence responses that did not provide as much detail for the supporting criteria. According to officials, for the capability documents we reviewed, there were cases in which information on some of the suggested criteria should have been included but was not. Joint Staff officials and officials from the Office of the Under Secretary of Defense for Acquisition, Technology and Logistics noted that while criteria for each of the required sustainment metrics may not be applicable to every program, it would be beneficial to the JROC if the services reported on the criteria for each metric outlined in the guidance or indicated a reason why a specific criterion was not applicable. Officials from the Office of the Under Secretary of Defense for Acquisition, Technology and Logistics told us that when they conduct their reviews of sponsor-reported sustainment information, not all of the supporting documentation they need for a thorough independent assessment is available in the capability development document. These officials said that they can generally find detailed documentation on sustainment planning and costs from sources outside of the JCIDS process, but that the information is not always readily available within the JCIDS database. Ultimately, review efficiency could be improved if all the information were available in the JCIDS database. Updated JCIDS guidance issued in January 2012 still does not clearly require program sponsors to report on the individual criteria for each of the sustainment metrics.
The Joint Staff is developing a new reporting tool intended to provide a standard format for reporting sustainment information. The tool will require program sponsors to at least minimally address each of the four sustainment metrics in order to submit the capability development document for review through JCIDS, according to officials. However, officials stated that this tool will not require that sponsors address each of the individual criteria elements within the four sustainment metrics. Without complete and detailed information on each of the individual criteria elements, the JROC may not be in the best position to weigh the costs and benefits of a proposal within a capability portfolio. The quality and completeness of the data that sponsors provide through the capability development document in the JCIDS process will become more important as the JROC increases its examination of the benefits of programs balanced against their associated costs. As we have previously reported, incomplete and inaccurate sustainment information has been a long-standing problem for DOD. In November 2009, a DOD team assessing weapon system acquisition reform reported that DOD lacked valid, measurable sustainment information to accurately assess how programmatic decisions affected life cycle costs and made recommendations to improve weapon system life cycle sustainment. Until the JROC requires program sponsors to report complete sustainment information, including both the overall metrics and the supporting criteria, the JROC may not always have the complete and detailed information it needs to make the most informed decisions. The prospect of declining budgets has amplified the need for DOD to prioritize among capability gaps and to use its resources to maximize the capabilities of the joint force.
The Chairman of the JROC has begun to take steps to better balance risks across the joint force by examining proposals for new capabilities within the context of existing joint capability areas and to consider affordability, including sustainment costs, as a factor in validating requirements proposals. The revised approach is new and evolving, but in order for it to achieve the intended results of prioritizing capability needs and aligning those needs with available resources, the Joint Staff needs to take steps to fully develop the approach and document it more explicitly. Specifically, DOD does not yet have a documented implementation plan with measures of success that support change. In addition, having good sustainment information is a key element needed to improve JCIDS’s success over the long term. Sustainment costs historically represent 70 percent of a system’s life cycle costs, but DOD has been making decisions with incomplete information on sustainment and does not require that sponsors address all of the criteria outlined in JCIDS guidance. Until the JROC has developed and fully documented an approach for prioritizing capability needs and aligning these needs with available resources and has complete sustainment information associated with the operation of new capabilities, it will not be in the best position to align resources with priorities or balance costs with benefits in affordable investment plans. To help sustain momentum for efforts to bring a capability portfolio approach to the JCIDS process and to improve the quality of sustainment information reported in capability development documents, we recommend that the Vice Chairman of the Joint Chiefs of Staff, as the JROC Chairman, take the following two actions: Revise and implement guidance to reflect changes to the JCIDS process as well as to establish criteria and measures for determining the relative importance of capability needs across capability areas and assessing progress. 
Explicitly require that program sponsors address each of the criteria outlined for the individual sustainment metrics when submitting capability development documents. The Joint Staff provided written comments on a draft of this report. In its comments, the Joint Staff said our report represented a fair and objective assessment of the JCIDS process. It partially agreed with both of our recommendations, citing ongoing and planned changes to the joint requirements development process. However, the comments did not detail any specific steps that DOD plans to take to address our recommendations. The comments are reprinted in their entirety in appendix III. The Joint Staff also provided technical comments, which we have incorporated into the report as appropriate. The Joint Staff partially concurred with our recommendation that the Vice Chairman of the Joint Chiefs of Staff revise and implement guidance to reflect changes to the JCIDS process as well as to establish criteria and measures for determining the relative importance of capability needs across capability areas and assessing progress. In its written response, the Joint Staff described recent initiatives to substantially change the joint requirements development process to require that capability requirements be evaluated within a capability portfolio by Functional Capability Boards, the Joint Capabilities Board, and the JROC. The Joint Staff also discussed its planned efforts to improve prioritization of capability needs and stated that JROC reviews will incorporate an evolving portfolio assessment tool. The Joint Staff expects that the departmentwide priorities outlined in DOD’s strategic guidance as well as a revised process for assessing capability gaps and combatant command priorities will enable the JROC to make more informed decisions about priorities. 
While we agree that the Joint Staff has taken important steps to enable prioritization of capabilities, such as addressing prioritization in a new enclosure in its revised JCIDS Manual, the enclosure does not explicitly outline implementation processes. We continue to believe that clear guidance that establishes criteria for determining priority levels and measures for demonstrating progress will be essential in sustaining momentum toward the goal of creating a balanced portfolio of programs that takes into account needs, risks, and available resources. Moreover, providing guidance that fully documents the new procedures for assigning priority levels to capability gaps is an essential step toward clarifying how the procedures will be implemented. The Joint Staff also partially concurred with our recommendation that the Vice Chairman of the Joint Chiefs of Staff explicitly require that program sponsors address each of the criteria outlined for the individual sustainment metrics when submitting capability development documents. According to its written comments, the Joint Staff criteria for the sustainment metrics were designed to guide the development of requirements, but were not intended to be prescriptive because individual programs are unique and criteria applicable to one problem may not apply to another. We agree that each of the criteria may not be applicable to every program. However, if program sponsors addressed each criterion in some manner, including explaining that a criterion is not applicable to the program, the JROC would be assured that program sponsors considered all criteria when developing program proposals. Further, the Joint Staff commented that JCIDS reviews of capability development documents by Joint Staff and Office of the Secretary of Defense officials ensure that a document is thoroughly vetted for consideration by the JROC. 
It suggested that the inclusion of analyses and rationale for sustainment metrics development in capability development documents might be duplicative because this information is contained in acquisition documents that exist outside of JCIDS. However, as we noted in our report, the documents that contain the analysis and rationale for the required sustainment metrics are not necessarily reviewed by or available to the JROC members during their consideration of a capability development document. We continue to believe that the inclusion of a sponsor-provided rationale for each metric criterion would enhance the thoroughness and efficiency of the JROC's review of sustainment information through JCIDS. We are sending copies of this report to interested congressional committees, the Secretary of Defense, the Vice Chairman of the Joint Chiefs of Staff, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3489 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. Section 862 of the Ike Skelton National Defense Authorization Act for Fiscal Year 2011 requires the Comptroller General to carry out a comprehensive review of the Joint Capabilities Integration and Development System (JCIDS) and to submit to the congressional defense committees a report on the review. This appendix, in conjunction with the letter, addresses each of the reporting provisions as described in the act. Specifically, section 862 requires the following contents for the review: Purpose. The purpose of the review is to evaluate the effectiveness of JCIDS in achieving the following objectives: Timeliness in delivering capability to the warfighter.
Efficient use of the investment resources of the Department of Defense (DOD). Control of requirements creep. Responsiveness to changes occurring after the approval of a requirements document (including changes to the threat environment, the emergence of new capabilities, or changes in the resources estimated to procure or sustain a capability). Development of the personnel skills, capacity, and training needed for an effective and efficient requirements process. Matters considered. In performing the review, the Comptroller General is required to gather information on and consider the following matters: The time that requirements documents take to receive approval through JCIDS. The quality of cost information considered in JCIDS and the extent of its consideration. The extent to which JCIDS establishes a meaningful level of priority for requirements. The extent to which JCIDS is considering trade-offs between cost, schedule, and performance objectives. The quality of information on sustainment considered in JCIDS and the extent to which sustainment information is considered. The term requirements document is defined in the statute as "a document produced in JCIDS that is provided for an acquisition program to guide the subsequent development, production, and testing of the program and that—(A) justifies the need for a materiel approach, or an approach that is a combination of materiel and non-materiel, to satisfy one or more specific capability gaps; (B) details the information necessary to develop an increment of militarily useful, logistically supportable, and technically mature capability, including key performance parameters; or (C) identifies production attributes required for a single increment of a program." See § 862(c)(2).
An evaluation of the advantages and disadvantages of designating a commander of a unified combatant command for each requirements document for which the Joint Requirements Oversight Council (JROC) is the validation authority to provide a joint evaluation task force to participate in a materiel solution and to provide input to the analysis of alternatives; participate in testing (including limited user tests and prototype testing); provide input on a concept of operations and doctrine; provide end user feedback to the resource sponsor; and participate, through the combatant commander concerned, in any alteration of the requirement for such solution. Section 862 also provided definitions for JCIDS, requirements document, requirements creep, and materiel solution. Tables 3 through 12 contain our response to each of the requirements mandated by the Ike Skelton National Defense Authorization Act for Fiscal Year 2011. The provision mandating our report defined the JCIDS process by referring to the JCIDS guidance in effect from March 2009 through December 2011. Accordingly, our response to the mandated elements as presented in this appendix generally focuses on JCIDS under that guidance. However, we also provide some information relating to JCIDS as described in the revised guidance issued in January 2012. In addition, our assessments generally focused on those programs that were determined to be JROC-interest, or those that were designated as major defense acquisition programs or major automated information systems and capabilities that have a potentially significant impact on interoperability in allied and coalition operations. Generally, these programs have greater costs or are more complex than smaller programs, and therefore provide an opportunity to assess the effectiveness of more aspects of the JCIDS process.
To assess the extent to which the Joint Staff has developed and implemented an analytic approach to prioritize capability needs, we reviewed relevant legislation and Joint Staff guidance on the roles and requirements of the JCIDS process and the JROC as it pertains to assigning levels of priority to capability proposals. Specifically, we reviewed section 181 of the U.S. Code, Title 10; Chairman of the Joint Chiefs of Staff Instruction 3170.01G; Chairman of the Joint Chiefs of Staff Instruction 3170.01H (which we reviewed in draft form); and the JCIDS Manual. We then compared the current and prior versions of the instruction and manual to identify changes in the guidance with respect to prioritization of capability proposals. We met with officials from the Joint Staff; Department of the Air Force; Department of the Army; Department of the Navy; and Office of the Under Secretary of Defense for Acquisition, Technology and Logistics to discuss their perspectives on the implementation of changes to JCIDS with respect to prioritizing capability requirements. In order to understand how the JROC is implementing its new approach for prioritizing capabilities, we reviewed briefing materials presented at a JROC forum in November 2011. To corroborate our understanding of the documents we reviewed, we conducted interviews with Joint Staff and Office of the Under Secretary of Defense for Acquisition, Technology and Logistics officials. To understand the Joint Staff’s recent internal review of JCIDS, we reviewed the charter and recommendations, and met with Joint Staff officials to discuss how those recommendations from the review might affect the JROC’s prioritization of capability proposals. We also reviewed prior reports by GAO, the House Armed Services Committee, and the Defense Business Board that discussed prioritization of capability proposals through JCIDS, and compared the JROC’s current efforts to prioritize with what has been reported in the past. 
We assessed whether the guidance in the JCIDS Manual and JCIDS instruction (in draft form during our review and issued in January 2012) on prioritization meets the intent of recommendations contained in our prior reports. To assess the extent to which the JROC has considered aspects of the availability and operational support requirements of weapon systems—called sustainment—when validating the requirements of proposed capability solutions, we reviewed relevant DOD and Joint Staff policy documents and related guidance outlining the requirement to develop and report sustainment metrics for capability documents. Specifically, we reviewed the reporting requirements for major defense acquisition programs processed through JCIDS. To identify the reporting requirements for capability development documents and to understand the JCIDS process, we reviewed the JCIDS Manual enclosure pertaining to sustainment and instructions from the Chairman of the Joint Chiefs of Staff. We also reviewed prior GAO work on related topics. Further, we interviewed DOD and Joint Staff officials to discuss preparation, presentation, and consideration of sustainment data. We also conducted a case study analysis of select capability development documents that included sustainment information. We sought a universe of all capability development documents subject to reporting the sustainment key performance parameter validated since 2007, when JCIDS began requiring program sponsors to include this information. We initially obtained a universe of 22 JCIDS capability development documents from the Office of the Under Secretary of Defense for Acquisition, Technology and Logistics/Logistics and Materiel Readiness. We relied on this provided list of documents because the Joint Staff’s Knowledge Management/Decision Support database did not produce reliable results of all requirements documents containing sustainment information. 
We narrowed the list by eliminating programs that have been downgraded, truncated, or canceled; programs in which the sustainment data were not at the key performance parameter level or were more narrowly defined by only one type of platform; programs whose information was contained in capability production documents rather than capability development documents; and programs whose capability development documents were entered in JCIDS before the full implementation of the sustainment key performance parameter requirement. Additionally, several programs did not have supporting documentation that would allow a review; Joint Staff officials we met with stated that these documents were not available either because of a misidentification of the type of capability document that was being reviewed (a capability production document as opposed to a capability development document), or because the document was not included in the Joint Staff’s database for JROC review. These factors led to a refined universe of 12 requirements documents. We then randomly selected capability development documents for two programs per service—Army, Air Force, and Navy—resulting in a total of six programs to serve as case studies. Because this was a nonprobability sample of programs, the results are not generalizable to all programs; however, they are illustrative of the kinds of issues that are possible in such programs. In order to assess reported sustainment information in the six selected cases, we performed a content analysis of the documentation available for the six cases. Two GAO analysts independently reviewed each of the six capability development documents, assessing whether each of the individual elements of the JCIDS Manual sustainment metrics was included, coding the inclusion of each metric as “yes,” “no,” “partial,” and “don’t know.” The two analysts then discussed and reconciled all initial disagreements regarding the assigned codes. 
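The dual-coder step just described can be sketched as a small program. This is an illustrative sketch only: the four metric names come from the JCIDS Manual as discussed above, but the sample codes assigned below and the `reconcile` helper are hypothetical, not part of GAO's actual tooling.

```python
# Illustrative sketch of the two-analyst content analysis described above.
# Metric names mirror the JCIDS Manual; the data and helper are hypothetical.
METRICS = ["materiel availability", "operational availability",
           "reliability", "ownership cost"]
CODES = {"yes", "no", "partial", "don't know"}

def reconcile(coder_a, coder_b):
    """Return the metrics on which the two analysts' initial codes
    disagree, so they can be discussed and reconciled."""
    for codes in (coder_a, coder_b):
        assert set(codes) == set(METRICS)       # every metric coded
        assert set(codes.values()) <= CODES     # only allowed codes used
    return [m for m in METRICS if coder_a[m] != coder_b[m]]

# Hypothetical initial codes for one capability development document:
analyst_1 = {"materiel availability": "yes",
             "operational availability": "yes",
             "reliability": "partial",
             "ownership cost": "no"}
analyst_2 = {"materiel availability": "yes",
             "operational availability": "partial",
             "reliability": "partial",
             "ownership cost": "no"}

print(reconcile(analyst_1, analyst_2))  # ['operational availability']
```

In this example the analysts agree on three of the four metrics, and the single disagreement is flagged for discussion, mirroring the reconciliation process described in the text.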
We then discussed the results of this content analysis with officials from the Joint Staff and the Office of the Under Secretary of Defense for Acquisition, Technology and Logistics/Logistics and Materiel Readiness to verify that the results of our analysis were valid. We conducted this performance audit from April 2011 through February 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Margaret Morgan, Assistant Director; Melissa Blanco; Mae Jones; Kate Lenane; Jennifer Madison; Ron Schwenn; Michael Shaughnessy; Michael Silver; Jennifer Spence; Amie Steele; and Kristy Williams made key contributions to this report.
The Department of Defense’s Joint Requirements Oversight Council (JROC) is charged with assisting in the prioritization of capability needs from a joint perspective and helping guide investments. The JROC is supported by the Joint Capabilities Integration and Development System (JCIDS) process. However, a congressional committee and GAO have expressed concerns about the extent to which JCIDS has been effective in prioritizing capability needs. The Ike Skelton National Defense Authorization Act for Fiscal Year 2011 required GAO to provide a report on the effectiveness of JCIDS in several areas. In addition to responding to this direction, GAO has more broadly evaluated the extent to which (1) the Joint Staff has developed and implemented an analytic approach to prioritize capability needs and (2) the JROC has considered aspects of the availability and operational support of weapon systems—called sustainment—when validating the requirements of proposed capability solutions. To do so, GAO analyzed capability documents, reviewed relevant guidance and law, and interviewed officials. After studying the Joint Capabilities Integration and Development System (JCIDS) process since September 2010, the Joint Staff began initiating actions in October 2011 to better prioritize capability needs and align those needs with available budgetary resources. Specifically, according to Joint Staff officials, the Joint Requirements Oversight Council (JROC) has begun to consider the benefits and affordability of new capabilities within the context of joint capability areas and to evaluate possible duplication before validating new capability requirements. The Joint Staff has begun to implement a new approach to support JROC prioritization of capability needs, but the new approach is still evolving and has not been fully developed and clearly documented. 
New guidance does not clearly outline goals of the new approach, develop and communicate the analytic approach envisioned to support JROC decision making, or set out criteria and accompanying measures of progress. GAO previously reported that JCIDS’s ability to prioritize needs could be improved if it had an analytic approach to reviewing and validating proposals that would help ensure that the most important capability needs of the department are addressed. Until the Joint Staff takes steps to fully develop, document, and institutionalize the new analytic approach, it is not clear whether the current momentum for improving the JCIDS process will be sustained. JCIDS guidance in effect through December 2011 required that sponsors of potential major defense acquisition programs address sustainment information in capability development documents according to four metrics—materiel availability, operational availability, reliability, and ownership cost. Each of these metrics includes a set of potentially reportable criteria or data, which are listed as review criteria and are suggested, but not clearly required by the guidance, to be included in the metric. Based on GAO’s analysis of six capability development documents, GAO found that all of the documents provided information on the four required sustainment metrics, but the completeness of information for all of the metrics’ key criteria varied. Further, in some cases information that should have been included, according to Department of Defense officials, was not provided. The Joint Staff issued updated JCIDS guidance in January 2012, but the guidance still does not clearly require program sponsors to report on the individual criteria for each of the four sustainment metrics. 
Without complete and detailed information on each of the individual criteria elements, the JROC may not have the information it needs to make the most informed decisions when validating the requirements of proposed solutions intended to mitigate capability gaps. GAO recommends that the Vice Chairman of the Joint Chiefs of Staff (1) revise and implement JCIDS guidance to reflect recent changes to the process and establish criteria and measures for determining the relative importance of capability needs and (2) require program sponsors to address each criterion in JCIDS guidance related to sustainment in capability documents. DOD partially concurred with GAO’s recommendations.
With the passage of the Aviation and Transportation Security Act (ATSA) in November 2001, TSA assumed responsibility for civil aviation security from the Federal Aviation Administration and for passenger and checked baggage screening from air carriers. As part of this responsibility, TSA oversees security operations at the nation’s more than 400 commercial airports, including establishing requirements for passenger and checked baggage screening, and ensuring the security of air cargo transported to, from, and within the United States. While TSA has operational responsibility for conducting passenger and checked baggage screening, TSA has regulatory, or oversight, responsibility for air carriers who conduct air cargo screening. While TSA took over responsibility for passenger checkpoint and baggage screening, as directed by ATSA, air carriers have continued to conduct passenger prescreening, which includes the process of checking passenger information against federal watch list data before flights depart. In accordance with the Intelligence Reform and Terrorism Prevention Act of 2004, TSA is developing a program to take over this responsibility from air carriers for passengers on domestic flights, and CBP has issued a proposed rule that would enable it to perform its identity-matching function for passengers on international flights traveling to or from the United States prior to flight departure. The prescreening of airline passengers—the process of identifying passengers who may pose a security risk before they board an aircraft—is one of many important layers of security that is intended to help officials focus security efforts on those passengers representing the greatest potential threat to civil aviation. 
Within DHS, TSA is responsible for ensuring that passenger prescreening is conducted before domestic flights—flights operating entirely within the United States—take off, while CBP has responsibility for conducting passenger prescreening for international flights operating to or from the United States. TSA is developing a program, in accordance with ATSA and the Intelligence Reform and Terrorism Prevention Act of 2004, through which TSA would assume the watch list matching function currently conducted by air carriers prior to domestic flight departures. TSA has named this prospective prescreening program Secure Flight. In accordance with security directives issued by TSA, air carriers—and not the U.S. government—currently match passenger-supplied reservation information (referred to as passenger name record (PNR) data) against the No Fly and Selectee Lists to prescreen passengers before domestic flights depart. According to TSA, the No Fly List includes the names of individuals who are considered known or suspected threats to civil aviation and are therefore precluded from boarding an aircraft traveling to, from, or within the United States, while the Selectee List includes the names of individuals who require additional security screening—which includes physical inspection of the person and a hand search of their luggage—prior to being permitted to board an aircraft. These lists are extracted from the Terrorist Screening Center’s (TSC) consolidated terrorist screening database (TSDB) and are exported to the air carriers through TSA. The current domestic prescreening process also requires that air carriers operate the Computer-Assisted Passenger Prescreening System (CAPPS), which identifies passengers for additional screening based on certain behavioral characteristics. 
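As a simplified illustration of the matching step the carriers perform, the sketch below compares normalized passenger names against watch list entries. This is a hypothetical sketch only: the list entries and passenger names are invented, and operational systems use far more sophisticated matching (fuzzy name comparison, additional data fields) than the exact comparison shown here.

```python
# Fictitious watch list data for illustration; the actual lists are
# classified, far larger, and matched with fuzzy techniques.
NO_FLY = {"JOHN DOE"}
SELECTEE = {"JANE ROE"}

def prescreen(passenger_name):
    """Return the handling outcome for one passenger name."""
    name = " ".join(passenger_name.upper().split())  # normalize case/spacing
    if name in NO_FLY:
        return "deny boarding"          # No Fly List match
    if name in SELECTEE:
        return "additional screening"   # Selectee List match
    return "cleared"

for p in ["John Doe", "jane roe", "Alex Smith"]:
    print(p, "->", prescreen(p))
```

The outcomes mirror the two list purposes described above: a No Fly match precludes boarding, while a Selectee match triggers additional screening before boarding.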
The existing identity-matching component of DHS’s international aviation passenger prescreening process involves separate matching activities conducted by air carriers (prior to a flight’s departure and pursuant to TSA requirements) and by CBP (generally after a flight’s departure). As with domestic passenger prescreening, air carriers conduct an initial match of self-reported PNR data against the No Fly and Selectee Lists before international flight departures. CBP’s process, in effect, supplements the air carrier identity matching for international flights by comparing additional passenger information collected from passports (this information becomes part of Advanced Passenger Information System (APIS) data) against the No Fly and Selectee Lists and other government databases. Under current federal regulations for CBP’s prescreening of passengers on international flights, air carriers are required to provide the U.S. government with PNR data as well as APIS data to allow the government to conduct, among other things, identity matching procedures against the No Fly and Selectee Lists—which typically occur just after or at times just before the departure of international flights traveling to or from the United States, respectively. To address a concern that the federal government’s identity matching may not be conducted in a timely manner, in 2004, Congress mandated that DHS issue a proposed rule requiring that the U.S. government’s identity-matching process occur before the departure of international flights. CBP published this proposed rule in July 2006, and, if implemented, it will allow the U.S. government to conduct passenger prescreening in advance of flight departure, and will eliminate the need for air carriers to continue performing an identity-matching function for international flights. 
One of the most significant changes mandated by ATSA was the shift from the use of private-sector screeners to perform airport screening operations to the use of federal screeners (now referred to as TSOs). Prior to ATSA, passenger and checked baggage screening had been performed by private screening companies under contract to airlines. ATSA required TSA to create a federal workforce to assume the job of conducting passenger and checked baggage screening at commercial airports. The federal screener workforce was put into place, as required, by November 2002. Passenger screening is a process by which personnel authorized by TSA inspect individuals and property to deter and prevent the carriage of any unauthorized explosive, incendiary, weapon, or other dangerous item onboard an aircraft or into a sterile area. Passenger screening personnel must inspect individuals for prohibited items at designated screening locations. As shown in figure 1, the four passenger screening functions are: X-ray screening of property, walk-through metal detector screening of individuals, hand-wand or pat-down screening of individuals, and physical search of property and trace detection for explosives. Typically, passengers are only subjected to X-ray screening of their carry-on items and screening by the walk-through metal detector. Passengers whose carry-on baggage alarms the X-ray machine, who alarm the walk-through metal detector, or who are designated as selectees—that is, passengers selected by the CAPPS or other TSA-approved processes to designate passengers for additional screening—are screened by hand-wand or pat-down and have their carry-on items screened for explosives traces or physically searched. Checked baggage screening is a process by which authorized security screening personnel inspect checked baggage to deter, detect, and prevent the carriage of any unauthorized explosive, incendiary, or weapon onboard an aircraft. 
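The routing described above, in which most passengers receive only baseline screening while certain triggers lead to additional measures, can be expressed as simple decision logic. This is a hedged sketch of the conditions stated in the text, not TSA's actual procedure; the function and parameter names are hypothetical.

```python
def additional_screening_steps(xray_alarm, metal_detector_alarm, is_selectee):
    """Return the extra screening steps for one passenger. Baseline
    screening (X-ray of carry-on items and the walk-through metal
    detector) applies to everyone; these steps apply only when one of
    the triggers described in the text fires."""
    if xray_alarm or metal_detector_alarm or is_selectee:
        return ["hand-wand or pat-down screening",
                "physical search or explosives trace detection of carry-on items"]
    return []

# A typical passenger with no alarms receives no additional steps:
print(additional_screening_steps(False, False, False))  # []
```

Any one trigger, whether an X-ray alarm, a metal detector alarm, or selectee designation, leads to the same pair of additional measures, as the text describes.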
As shown in figure 2, checked baggage screening is accomplished through the use of explosive detection systems or explosive trace detection systems, and through the use of alternative means, such as manual searches, canine teams, and positive passenger bag match, when the explosive detection or explosive trace detection systems are unavailable. The passenger and checked baggage screening systems are composed of three elements: the people (TSOs) responsible for conducting the screening of airline passengers and their carry-on items and checked baggage, the technology used during the screening process, and the procedures TSOs are to follow to conduct screening. Collectively, these elements help to determine the effectiveness and efficiency of passenger and checked baggage screening. TSA’s responsibilities for securing air cargo include, among other things, establishing security rules and regulations covering domestic and foreign passenger air carriers that transport cargo, domestic and foreign all-cargo carriers that transport cargo, and domestic indirect air carriers. TSA is also responsible for overseeing the implementation of air cargo security requirements by air carriers and indirect air carriers through compliance inspections, while air carriers are required to inspect air cargo for weapons, explosives, or stowaways. Air carriers (passenger and all-cargo) are responsible for implementing TSA security requirements, predominantly through a TSA-approved security program that describes the security policies, procedures, and systems air carriers are required to implement. These requirements include measures related to the acceptance, handling, and inspection of cargo; training of employees in security and cargo inspection procedures; testing employee proficiency in cargo inspection; and access to cargo areas and aircraft. 
If threat information or events indicate that additional security measures are needed to secure the aviation sector, TSA may issue revised or new security requirements in the form of security directives or emergency amendments applicable to domestic or foreign air carriers. The air carriers must implement the requirements set forth in the security directives or emergency amendments in addition to those requirements already imposed and enforced by TSA. Air cargo ranges in size from one pound to several tons, and in type from perishables to machinery, and can include items such as electronic equipment, automobile parts, clothing, medical supplies, other dry goods, fresh cut flowers, fresh seafood, fresh produce, tropical fish, and human remains. Cargo can be shipped in various forms, including large containers known as unit loading devices that allow many packages to be consolidated into one container that can be loaded on an aircraft, wooden crates, assembled pallets, or individually wrapped/boxed pieces, known as break bulk cargo. Participants in the international air cargo shipping process include shippers, such as individuals and manufacturers; freight forwarders or regulated agents, who consolidate shipments and deliver them to air carriers; air cargo handling agents, who process and load cargo onto aircraft on behalf of air carriers; and passenger and all-cargo carriers that store, load, and transport air cargo. International air cargo may have been transported via ship, train, or truck prior to its loading onboard an aircraft. Figure 3 identifies cargo being loaded onto an aircraft for transport. According to DHS’s budget execution reports, TSA’s appropriations for aviation security have totaled about $20 billion since fiscal year 2004. In fiscal year 2004—the first year for which data was available—TSA received about $3.9 billion for aviation security programs. In fiscal year 2007, TSA received about $5.7 billion. 
The President’s budget request for fiscal year 2008 includes about $5.7 billion to continue TSA’s aviation security efforts. This total includes about $5.0 billion specifically designated for aviation security and about $0.79 billion for aviation-security-related programs. Figure 4 identifies reported aviation security funding for fiscal years 2004 through 2007. Of the approximately $5.7 billion requested for aviation security in the President’s fiscal year 2008 budget request, almost $4.4 billion, or about 77 percent, is for passenger and checked baggage screening. This includes approximately $4 billion to support passenger and checked baggage screening operations, such as TSO salaries and training, and $176 million for the procurement and $259 million for the installation of checked baggage explosive detection systems. Additional information on the President’s budget request for fiscal year 2008 as it relates to airline passenger prescreening, airline passenger and checked baggage screening, and air cargo security is provided later in this statement. TSA and CBP have separate efforts under way to strengthen domestic and international passenger prescreening, respectively. However, these programs are in development and face management and technical challenges. Further, while TSA and CBP have been developing their respective identity-matching programs separately, the two agencies are now taking steps to align their prescreening programs to minimize duplication and provide a single set of requirements for air carrier participation. However, key policy and technical decisions have not yet been made to clarify how these two programs will be aligned. For over 4 years, TSA has faced significant challenges in developing and implementing its advanced passenger prescreening program, now known as Secure Flight, and has not yet taken the identity-matching function over from air carriers as mandated by Congress. 
According to TSA, the Secure Flight program—which is to perform the functions associated with determining whether passengers on domestic flights are on the No Fly and Selectee Lists—is intended to (1) decrease the chance of compromising watch list data by centralizing its use within the federal government; (2) provide earlier identification of potential threats, allowing for the expedited notification of law enforcement and other organizations responsible for threat management; (3) provide a fair, equitable, and consistent matching process across all air carriers; and (4) offer consistent application of an expedited and integrated redress process for passengers misidentified as a threat. However, during the past 3 years, we reported on multiple occasions that the Secure Flight program (and its predecessor, CAPPS II) had not met key milestones or finalized its goals, objectives, and requirements. Further, in February 2006, we reported that, taken as a whole, the development of Secure Flight had not been effectively managed and the program was at risk of failure. We found that TSA had not conducted critical activities in accordance with best practices for large-scale information technology programs, and had not followed its own systems development life cycle guidance in managing the program’s development. Former program officials stated that TSA had instead used a rapid development method that was intended to enable it to develop the program more quickly. However, as a result of this approach, the development process had been ad hoc, with project activities conducted out of sequence. For example, program officials declared the design phase complete before requirements needed to guide the design of Secure Flight had been detailed. In addition, TSA had not maintained up-to-date program schedules or developed cost estimates for the program. 
In March 2005, we recommended that TSA take numerous steps to strengthen the program’s development, such as finalizing system requirements and developing detailed test plans to help ensure that all Secure Flight system functionality is properly tested and evaluated. We also recommended that TSA develop a plan for establishing connectivity among the air carriers and other stakeholders to help ensure the secure, effective, and timely transmission of data for use in Secure Flight operations. In early 2006, acknowledging the challenges it faced with the program, TSA suspended the development of Secure Flight and initiated a reassessment, or rebaselining, of the program, to be completed before moving forward. In January 2007, TSA announced that it had completed its rebaselining efforts, which included reassessing program goals and capabilities, and developing a new schedule and cost estimates—actions that we recommended in March 2005. The Assistant Secretary of Homeland Security for TSA stated that TSA had made significant progress in upgrading the design and development of the Secure Flight program, and that program documentation had been revised to reflect TSA’s plans for reliably delivering Secure Flight capabilities. In December 2006, the DHS Investment Review Board—a group of DHS senior executives charged with reviewing certain programs at key phases of development to help ensure they meet mission needs at expected levels of costs and risks—completed its review of Secure Flight and approved the program to proceed into capability development and demonstration phases. According to the Investment Review Board, this approval was based on rescoping Secure Flight using a new business model better focused on mission; putting a new team in place with appropriate technical and management skills; and improving its management approach to privacy, security, and quality assurance. 
However, the board also noted that this important screening capability was needed sooner than its planned mid-2009 implementation time frame, and requested that TSA determine the feasibility of accelerating the program schedule to deliver initial capability by mid-2008. As we have reported, earlier attempts to accelerate the Secure Flight program have led to developmental problems and program delays. Accordingly, as TSA moves forward, it will need to employ a range of program management disciplines, which we previously found missing, to control program cost, schedule, performance, and privacy risks. As part of our ongoing work assessing the Secure Flight program, we will be reviewing DHS’s and TSA’s efforts to develop and implement the program, including progress made during its rebaselining efforts. Regarding TSA’s communications with air carriers about Secure Flight system requirements, we reported in March 2005 that air carriers had expressed concerns regarding the uncertainty of Secure Flight system and data requirements, and the impact that these requirements may have on the airline industry and traveling public. Further, based on preliminary results for our ongoing work, officials from 9 of the 15 air carriers we interviewed from February 2006 to January 2007 reported that they were enhancing their respective identity-matching systems or planned to do so. While these efforts may improve the accuracy of each air carrier’s individual identity-matching system, the improvements will only apply to their respective systems and could further exacerbate differences that currently exist among the air carriers’ various identity-matching systems. These differences may result in varying levels of effectiveness in the matching of passenger information against the No Fly and Selectee Lists, which was a key factor that led to the government’s effort to take over the identity-matching function through Secure Flight. 
Also, officials from 7 of 15 air carriers stated that TSA had not communicated with them about Secure Flight requirements within the past 6 months while the program was being rebaselined. TSA officials stated that in October 2006 they had resumed discussions with air carriers regarding Secure Flight requirements, and as of January 2007, had discussed plans for Secure Flight with officials from 8 air carriers and the Air Transport Association. TSA officials stated that they also plan to take into account current air carrier capabilities and programs as they proceed with Secure Flight development, and to update guidance previously provided to air carriers to reflect the current concept of operations for the rebaselined Secure Flight program. In February 2006, we also reported that TSA was in the early stages of coordinating with TSC and CBP on broader issues of integration and interoperability related to other people-screening programs used by the government to combat terrorism. However, TSA needed to provide these stakeholders with detailed information about its concept of operations for Secure Flight to enable them to plan for and provide the support necessary for the program. For example, a TSC official stated that without specific information on Secure Flight requirements, TSC could not make decisions about needed resources, such as personnel needed to operate its call center that would be used to help resolve potential matches against the No Fly and Selectee Lists. In January 2007, TSC officials stated that while they had been participating in meetings with Secure Flight officials, they had not yet received the specific operational and technical information needed to plan for supporting Secure Flight operations. During Secure Flight rebaselining efforts, TSA officials also stated that they were coordinating with CBP to more closely align their respective identity-matching programs. 
However, this collaboration is ongoing and key policy and technical decisions regarding how the programs will be coordinated have not been announced. We discuss TSA and CBP’s coordination of their domestic and international prescreening programs later in this statement. We have also previously reported that TSA, as part of its requirements development process, had not clearly identified the privacy impacts of the envisioned system or the full actions it planned to take to mitigate them. Specifically, because TSA had not made final determinations about its requirements for passenger data, and Secure Flight’s system development documentation did not fully address how passenger privacy protections were to be met, it was not possible to assess potential system impacts on individual privacy protections at that time. We have also previously reported that TSA violated provisions of the Privacy Act by not fully disclosing its use of personal information during systems testing. In March 2005, we recommended that TSA specify how Secure Flight will protect personal privacy. TSA officials stated that they are aware of, and plan to address, the potential for Secure Flight to adversely affect passenger privacy protections, and the need to provide a redress process whereby aviation passengers adversely affected by the identity matching process may express their concerns, seek correction of any inaccurate data, and request other actions to reduce or eliminate future inconveniences. Concurrent with its rebaselining efforts, TSA reported that it has developed a Secure Flight privacy program that is rooted in the Fair Information Practices—a set of internationally recognized privacy principles that underlie the Privacy Act. 
TSA officials further stated that the rebaselined Secure Flight program will result in a more transparent and privacy-enhanced program by addressing concerns identified by us and others in the following areas: program oversight, program scope, data collection activities, redress requirements, relationships with other TSA credentialing programs, and technical requirements. TSA officials also stated that they have embedded privacy contractor experts in the program teams to address privacy issues as they arise. In addition, in January 2007, officials from Secure Flight and TSA’s Office of Transportation Security Redress stated that Secure Flight will use the TSA redress process that is currently available for individuals affected by the air carrier identity-matching processes, but the details of how this process will be integrated with other Secure Flight requirements have not yet been completed. We will continue to assess TSA’s efforts to manage system privacy protections and establish a redress process for resolving misidentified passengers as part of our ongoing review of the program. We believe that TSA’s effort to reassess Secure Flight’s development and progress was an appropriate step given the problems that faced the program in early 2006. However, since TSA only recently announced that it had completed its rebaselining efforts and provided more details of its rebaselined program, it is too early to determine the extent to which TSA has addressed the long-standing issues that have affected the program. According to DHS’s budget execution reports, TSA received about $126 million for fiscal years 2004 through 2006—including funds spent on the CAPPS II predecessor program—and $15 million for fiscal year 2007 for Secure Flight. For fiscal year 2008, the President’s budget request includes $53 million for TSA to continue this program. 
According to TSA’s budget justification, the increase of $38 million is requested to provide for the development and the authority to operate the Secure Flight system. Additionally, the funding request would provide for procuring hardware, starting operations and training, and developing a network interface between Secure Flight and CBP. We will continue to monitor Secure Flight’s development as part of our ongoing review of the program. As originally envisioned, once Secure Flight became operational, TSA would be operating a domestic passenger prescreening system, while CBP would be operating an international passenger prescreening system. However, air carriers raised concerns regarding having to support different data requirements for two separate government prescreening programs. Further, we reported that the two programs could produce different outcomes for passengers flying on domestic and international flights, outcomes that could cause additional costs for air carriers and confusion and inconvenience for passengers. For example, if the programs are not aligned, air carriers might have to implement different information connections, communications, and programming for each prescreening program, resulting in added costs and inefficiencies. Also, if the two separate programs use different passenger data elements or identity-matching technologies, air carriers may receive conflicting notifications to handle a passenger differently for an international flight than for a domestic flight. Passengers may also be inconvenienced, since a passenger may be delayed on one leg of a multileg trip that includes both a domestic and an international flight segment, and possibly miss a flight. The air carrier community has asked CBP and TSA to coordinate their efforts to ensure that the programs are compatible and are developed as a single approach to avoid the need for air carriers to implement two separate screening systems to meet CBP and TSA requirements.
In a joint letter to the Secretary of DHS dated October 27, 2005, the Air Transport Association of America and the Association of European Airlines urged DHS to coordinate international and domestic airline passenger prescreening programs so that air carriers are not unduly burdened by the costs and inefficiencies posed by working with two different prescreening programs. The letter also stated that the Air Transport Association of America and the Association of European Airlines believed that there had been a lack of coordination between CBP and TSA in aligning their respective passenger prescreening programs. Air carrier industry groups reiterated this concern in comments they provided in response to CBP’s proposed rule for conducting passenger prescreening on international flights. We have also previously reported that since both agencies are developing and implementing passenger prescreening programs, CBP and TSA could mutually benefit from the sharing of technical testing results and the coordination of other developmental efforts. Coordination and planning in the development of these two programs would also enhance program integration and interoperability, potentially limit redundancies, and increase program effectiveness. We have recently recommended that DHS take additional steps and make key policy and technical decisions that are necessary to more fully coordinate these programs. Recognizing these concerns, DHS has directed TSA and CBP to coordinate their prescreening activities so that they provide “One DHS Solution” to the commercial aviation industry consistent with applicable authorities and statutes. 
CBP and TSA officials stated that they are taking steps to coordinate their prescreening efforts, including meeting routinely with DHS’s Office of Screening Coordination and with aviation and travel industry stakeholders to develop joint data requirements, processes, and methods for disseminating information to other government and law enforcement organizations in the event of a positive identity match against the No Fly and Selectee Lists. DHS officials told us that they envision a joint approach that will allow for standardization between the two programs to the extent possible, reduce unnecessary programming by aircraft operators, and provide consistent treatment for passengers across all aircraft operators. However, despite this coordination, key policy and technical decisions have not yet been made regarding how these programs will be aligned, including determining how differences in the data used to conduct identity matching and the identity matching techniques used will be resolved. Further, it is unclear how the different implementation schedules for the two programs—CBP has already issued a proposed rule to implement a new passenger prescreening program for passengers on international flights, while TSA’s schedule shows that Secure Flight will not begin operations until 2009—will affect coordination efforts. Given DHS’s commitment to align the two prescreening programs, and the security and efficiency benefits of doing so, it will be important for CBP and TSA to take the steps necessary to successfully coordinate these programs. Until international and domestic prescreening efforts are more fully aligned, the extent to which potential problems of duplication and conflicting results in international and domestic passenger prescreening will be addressed remains unclear. 
TSA has taken steps to strengthen the three key elements of the passenger and checked baggage screening systems—people (TSOs), screening procedures, and technology—but continues to face management, planning, and funding challenges. For example, TSA developed a Staffing Allocation Model to determine TSO staffing levels at airports that reflect current operating conditions, and provided TSOs with additional training intended to enhance the detection of threat objects, particularly improvised explosives. TSA also proposed modifications to passenger checkpoint screening procedures based on risk (threat and vulnerability information), among other factors, but could do more to evaluate proposed procedures before they are implemented to ensure they achieve their intended results. Additionally, TSA is exploring new technologies to enhance the detection of explosives and other threats, but continues to face management and funding challenges. For example, in May 2006, TSA reported that under current investment levels, the installation of optimal checked baggage screening systems would not be completed until approximately 2024. TSA, in collaboration with key stakeholders, has identified several funding and financing strategies for installing optimal checked baggage screening systems, such as continued appropriations for the procurement and installation of EDS machines. TSA has implemented several efforts intended to strengthen the management and performance of its TSO workforce, which TSA has identified as its most important asset in accomplishing its mission. We reported in February 2004 that staffing shortages and TSA’s hiring process had hindered the ability of some Federal Security Directors (FSD)—the ranking authority responsible for leading and coordinating security activities at airports—to provide sufficient resources to staff screening checkpoints and oversee screening operations at their checkpoints without using additional measures such as overtime.
Since that time, TSA has developed a Staffing Allocation Model to determine TSO staffing levels at airports. In determining staffing allocations, the model takes into account the workload demands unique to each airport based on an estimate of each airport’s peak passenger volume. This input is then processed against certain TSA assumptions about screening passengers and checked baggage—including expected processing rates, required staffing for passenger lanes and baggage equipment based on standard operating procedures, and historical equipment alarm rates. In August 2005, TSA determined that the staffing model contained complete and accurate information on each airport from which to estimate staffing needs, and the agency used the model to identify TSO allocations for each airport. At that time, the staffing model identified a total TSO full-time equivalent allocation need of 42,303—a level within the congressionally mandated limit of 45,000 full-time equivalent TSOs. According to TSA, when TSA runs the model, it does so without imposing a limitation on the maximum number of full-time equivalent TSOs, either the 45,000 congressional limit or any budgetary limits that affect the number of TSOs that can be hired. In addition to the levels identified by the staffing model, TSA sets aside TSO full-time equivalents for needs outside of those considered by the staffing model in the annual allocation run for airports. For example, in order to handle short-term extraordinary needs at airports, TSA established a National Screening Force of 615 TSOs who can be sent to airports to augment local TSO staff during periods of unusually high passenger volume, such as the Super Bowl. Additionally, certain airports may, during the course of the year, experience significant changes to their screening operations, such as the arrival of a new airline or opening of a new terminal. 
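The staffing model’s basic logic described above—processing each airport’s peak passenger volume against assumptions about processing rates and historical alarm rates to produce a full-time-equivalent count—can be sketched as follows. This is an illustrative simplification only: all function names, rates, and parameters below are hypothetical, as TSA’s actual model and its assumptions are not public.

```python
# Illustrative sketch of a staffing-allocation calculation of the kind the
# Staffing Allocation Model performs. All rates here are hypothetical.

def tso_fte_estimate(peak_passengers_per_hour: float,
                     passengers_per_tso_hour: float = 40.0,
                     alarm_rate: float = 0.10,
                     alarm_resolution_tso_hours: float = 0.05,
                     operating_hours_per_day: float = 18.0,
                     fte_hours_per_day: float = 8.0) -> float:
    """Estimate TSO full-time equivalents for one airport."""
    # Base screeners needed to keep pace with peak throughput.
    base_tsos = peak_passengers_per_hour / passengers_per_tso_hour
    # Additional effort to resolve equipment alarms, driven by a
    # historical alarm rate and an average resolution time.
    alarm_tsos = (peak_passengers_per_hour * alarm_rate
                  * alarm_resolution_tso_hours)
    staff_hours = (base_tsos + alarm_tsos) * operating_hours_per_day
    return staff_hours / fte_hours_per_day

print(round(tso_fte_estimate(2000), 1))  # -> 135.0 FTEs for this example
```

Even in this simplified form, the calculation shows why the assumptions FSDs questioned matter: a change to the assumed processing rate or alarm rate propagates directly into each airport’s allocation.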
TSA established a reserve of 329 TSO full-time equivalents during fiscal year 2006 that can be used to augment the existing force. The President’s fiscal year 2008 budget request includes $35 million for operational expenses for a National Deployment Office—an office that would be responsible for deploying the National Screening Force and other TSOs to those airports experiencing significant staffing shortfalls. According to TSA, TSA’s approach to allocating TSOs has allowed the agency to stay within the 43,000 full-time equivalent TSO budgetary limit for fiscal year 2006—a staffing level that TSA’s Assistant Secretary stated is sufficient to provide passenger and checked baggage screening services. According to the President’s fiscal year 2008 budget request, the $2.6 billion requested for the federal TSO workforce represents an increase of about $131 million over fiscal year 2007 for cost of living adjustments and a travel document checker initiative. Under this initiative, about 1,330 full-time equivalent TSOs would be placed at the 40 highest risk category X and I airports to conduct document checking for passengers approaching the passenger screening checkpoint. According to the budget request, the $2.6 billion is to fund the personnel, compensation, and benefits of approximately 43,688 full-time equivalent TSOs and about 1,045 full-time equivalent Screening Managers. Table 1 shows the total TSO and Screening Manager full-time equivalents and the funding levels for fiscal years 2004 through 2007, as reported by TSA. FSDs we interviewed in 2006 as part of our ongoing review of TSA’s staffing model generally reported that the model is a more accurate predictor of staffing needs than TSA’s prior staffing model, which took into account fewer factors that affect screening operations. However, FSDs identified that some assumptions used in the fiscal year 2006 staffing model did not reflect actual operating conditions. 
For example, FSDs noted that the staffing model’s assumption of a 20 percent part-time workforce—measured in terms of full-time equivalents—had been difficult to achieve, particularly at larger (category X and I) airports, because of, among other things, economic conditions leading to competition for part-time workers, remote airport locations coupled with a lack of mass transit, TSO base pay that has not changed since fiscal year 2002, and part-time workers’ desire to convert to full-time status. TSA data show that for fiscal years 2005 and 2006, the nation’s category X airports had a TSO workforce composed of about 8 percent part-time equivalents, and the part-time TSO attrition rate nationwide remains considerably higher than the rate for full-time personnel (approximately 46 percent versus 16 percent for full-time TSOs for fiscal year 2006). FSDs also expressed concern that the model did not specifically account for the recurrent training requirement for TSOs of 3 hours per week averaged over a fiscal year quarter. Further, FSDs identified that the model for fiscal year 2006 did not account for time away from screening to perform operational support duties. FSDs we interviewed stated that because they are not authorized to hire a sufficient number of mission support staff, TSOs are being routinely used to perform certain operational support functions, such as payroll processing, scheduling, distribution and maintenance of uniforms, data entry, and workers’ compensation processing. Similarly, in September 2006, the Department of Homeland Security’s Office of Inspector General reported that TSA had not determined the precise number of FSD administrative positions it needed and was using TSOs to perform administrative work. In response to FSDs’ input and the various mechanisms TSA has implemented to monitor the sufficiency of the model’s allocation outputs, TSA made changes to some assumptions in the model for fiscal year 2007.
Our preliminary observations indicate that these revisions should help address the concerns identified by FSDs. For example, TSA recognized that some airports likely cannot achieve a 20 percent part-time full-time equivalent level and others (most likely smaller airports) may operate more effectively with other levels of part-time TSO staff. As a result, for fiscal year 2007, TSA modified this assumption to include a variable part-time goal based on each airport’s historic part-time to full-time TSO ratio. TSA also included an allowance in the fiscal year 2007 Staffing Allocation Model for training to provide additional assurance that TSOs complete the required training on detecting improvised explosive devices—which TSA has identified as the most significant threat to commercial aviation. Additionally, TSA included an allowance for operational support duties in the fiscal year 2007 Staffing Allocation Model to account for the current need for TSOs to perform these duties. Factors outside of the staffing model’s determination of overall TSO staffing levels also affect FSDs’ ability to effectively deploy their TSO workforce. Specifically, FSDs we interviewed as part of our ongoing review of TSA’s staffing model cited difficulties in recruiting and retaining sufficient TSOs (both full-time and part-time) to reach their full allocations as determined by the model; staffing checkpoints appropriately given that some TSOs are unavailable due to absenteeism and injuries; and managing around physical infrastructure limitations at some airports, such as lack of room for additional lanes or baggage check areas despite demand levels that would justify such added capacity. TSA has made progress in addressing these challenges through a variety of human capital initiatives.
For example, to allow FSDs to more efficiently address staffing needs, TSA has shifted responsibility for hiring TSOs from TSA headquarters to FSDs at individual airports and, according to TSA officials, provided contractor support to assist in this effort. TSA data show that since local hiring began in March 2006, TSA has increased the number of new hire TSOs from approximately 180 per pay period in February 2006 to nearly 450 each pay period under the local hiring initiative. In addition to having an adequate number of TSOs, effective screening involves TSOs being properly trained to do their job. Since we first reported on TSO training in September 2003, TSA has taken a number of actions designed to strengthen training available to the TSO workforce beyond the basic training requirement. For example, TSA has expanded training available to the TSO workforce, such as introducing an Online Learning Center that makes self-guided courses available over TSA’s intranet and the Internet, and enhanced training on explosives detection. This training included both classroom and hands-on experience, and focused particularly on identifying X-ray images of improvised explosive device component parts, not just a completely assembled bomb. According to TSA, as of February 6, 2007, about 98 percent of the 48,236 TSOs on board had received classroom, checkpoint, or computer-based improvised explosive device recognition training. TSA has also developed new training curriculums to support new screening approaches. For example, TSA recently developed a training curriculum for TSOs in behavior observation and analysis at the checkpoint to identify passengers exhibiting behaviors indicative of stress, fear, or deception. The President’s fiscal year 2008 budget request includes $89.7 million to fully implement TSO training programs and related TSO workforce development programs. TSA has also made progress in addressing challenges that made it difficult for TSOs to access training.
We reported in May 2005 that insufficient TSO staffing and a lack of high-speed Internet/intranet connectivity to access the Online Learning Center made it difficult for all TSOs at many airports to receive required training, and had limited TSO access to TSA training tools. We stated that without addressing the challenges to delivering ongoing training, including installing high-speed connectivity at airport training facilities, TSA may have difficulty maintaining a screening workforce that possesses the critical skills needed to perform at a desired level. As previously discussed, our preliminary observations from our ongoing review of TSA’s staffing model indicate that TSA has taken steps to address the TSO staffing challenges, including providing an allowance for TSO training in the Staffing Allocation Model for fiscal year 2007. However, it is too soon to determine whether TSA’s efforts will address TSA’s ability to provide required training while maintaining adequate coverage for screening operations. TSA established its Online Learning Center to provide passenger and baggage TSOs with online, high-speed access to training courses. However, effective use of the Online Learning Center requires high-speed Internet/intranet access, which TSA has not been able to provide to all airports. We reported that as of October 2004, about 45 percent of the TSO workforce did not have high-speed Internet/intranet access to the Online Learning Center. Given the importance of the Online Learning Center in both delivering training and serving as the means by which the completion of TSO training is documented, we recommended that TSA develop a plan that prioritizes and schedules the deployment of high-speed Internet/intranet connectivity to all TSA’s airport training facilities to help facilitate the delivery of TSO training and the documentation of training completion. Since that time, TSA has made progress in deploying high-speed connectivity to airports. 
According to the President’s fiscal year 2008 budget request, 95 percent of the nation’s airports now have high-speed connectivity. According to the budget request, TSA expects to meet the goal of all airports having high-speed connectivity during fiscal year 2007. In addition to TSA’s efforts to train and deploy a federal TSO workforce, steps have also been taken to strengthen passenger and checked baggage screening procedures to enhance detection capabilities. However, TSA could improve its evaluation and oversight of these procedures. With regard to passenger checkpoint screening procedures, between April and December 2005, proposed modifications were made in various ways and for a variety of reasons, and a majority of the proposed modifications—48 of 92—were ultimately implemented at airports. As part of our ongoing review of TSA’s process for determining whether and how screening procedures should be modified, we found that TSA officials proposed standard operating procedure (SOP) modifications based on risk information (threat and vulnerability information), daily experiences of staff working at airports, and complaints and concerns raised by the traveling public. In addition to these factors, our preliminary observations indicate that consistent with its mission, TSA senior leadership made efforts to balance the impact that proposed SOP modifications would have on security, efficiency, and customer service when deciding whether proposed SOP modifications should be implemented. For example, in August 2006, TSA sought to increase security by banning liquids and gels from being carried onboard aircraft in response to the alleged terrorist plot to detonate liquid explosives onboard multiple aircraft en route from the United Kingdom to the United States. In September 2006, after obtaining more information about the alleged terrorist plot—including information from the United Kingdom and U.S.
intelligence communities, discussions with explosives experts, and testing of explosives—TSA officials decided to lift the total ban on liquids and gels to allow passengers to carry small amounts of liquids and gels onboard aircraft. TSA officials also lifted the total ban because banning liquids and gels as carry-on items was shown to affect both efficiency and customer service. Specifically, following the implementation of the total ban in August 2006, the number of bags checked per passenger increased by approximately 27 percent—thus placing a strain on the efficiency of the checked-baggage screening system. In addition, TSA recognized that passengers have legitimate needs that may require them to carry some liquids and gels onboard aircraft. Moreover, in an effort to harmonize its liquid screening procedures with other countries, in November 2006, TSA revised its procedures to allow 3.4 fluid ounces of liquids, gels, and aerosols onboard aircraft, which is equivalent to 100 milliliters—the amount permitted by the 27 countries in the European Union, as well as Australia, Norway, Switzerland, and Iceland. According to TSA, this means that approximately half of the world’s travelers will be governed by similar measures with regard to this area of security. In some cases, TSA first tested proposed modifications to screening procedures at selected airports to help determine whether the changes would achieve their intended purpose, such as to enhance detection of prohibited items or free up TSO resources to perform screening activities focused on threats considered to pose a high risk, such as explosives. TSA’s efforts to collect quantitative data through testing proposed procedures prior to deciding whether to implement or reject them is consistent with our past work that has shown the importance of data collection and analyses to support agency decision making. 
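The harmonized liquids limit described above rests on a simple unit conversion: 3.4 U.S. fluid ounces is just over 100 milliliters, the amount permitted by the European Union and the other countries named. The arithmetic can be checked directly (the conversion factor below is the standard definition of the U.S. fluid ounce, not a TSA-published figure):

```python
# Verify that TSA's 3.4-fluid-ounce carry-on liquids limit approximates
# the 100-milliliter limit used by the European Union and others.
ML_PER_US_FL_OZ = 29.5735  # milliliters in one U.S. fluid ounce

tsa_limit_ml = 3.4 * ML_PER_US_FL_OZ
print(round(tsa_limit_ml, 1))  # -> 100.5, just over the 100 mL EU limit
```

TSA appears to have rounded the 100-milliliter international standard up to the nearest tenth of a fluid ounce, which is why the two limits are described as equivalent rather than identical.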
However, as part of our ongoing work, we identified that TSA’s data collection and analyses could be improved to help TSA determine whether proposed procedures that are operationally tested would achieve their intended purpose. Specifically, we found that for the tests of proposed screening procedures TSA conducted during the period April 2005 through December 2005, including the removal of small scissors and small tools from the prohibited items list, although TSA collected some data on the efficiency of and customer response to the procedures at selected airports, the agency generally did not collect the type of data or conduct the necessary analysis that would yield information on whether proposed procedures would achieve their intended purpose. We will report on the results of our analysis of TSA’s efforts to test proposed modifications to screening procedures later this year. Once proposed SOP changes have been implemented, it is important that TSA have a mechanism in place to ensure that TSOs are complying with established procedures. As part of our ongoing review of TSA’s process for revising passenger screening procedures, we identified that TSA monitors TSO compliance with passenger checkpoint screening SOPs through its performance accountability and standards system and through local and national covert testing. According to TSA officials, the performance accountability and standards system was developed in response to our 2003 report recommending that TSA establish a performance management system that makes meaningful distinctions in employee performance, and in response to input from TSA airport staff on how to improve passenger and checked baggage screening measures. This system will be used by TSA to assess agency personnel at all levels on various competencies, including, among other things, technical proficiency.
The technical proficiency component of the performance accountability and standards system will be used to measure TSO compliance with passenger checkpoint screening procedures. In addition to implementing the performance accountability and standards system, TSA conducts local and national covert tests to evaluate, in part, the extent to which TSOs’ noncompliance with the SOPs affects their ability to detect simulated threat items hidden in accessible property or concealed on a person. Our preliminary observations indicate that TSA airport officials have experienced resource challenges in implementing these compliance monitoring methods. TSA headquarters officials stated that they are taking steps to address these challenges. For example, officials said that they have automated many of the data entry functions of the performance accountability and standards system to relieve the field of the burden of manually entering this information into the online system. TSA has also taken steps to strengthen checked baggage screening through reducing the need to use alternative screening procedures. In addition to screening with standard procedures using EDS and ETD, which TSA had determined to provide the most effective detection of explosives, TSA also allows alternative screening procedures to be used when volumes of baggage awaiting screening pose security vulnerabilities or when TSA officials determine that there is a security risk associated with large concentrations of passengers in an area. These alternative screening procedures include the use of EDS and ETD machines in nonstandard ways, and also include three procedures that do not use EDS or ETD—screening with explosives detection canines, physical bag searches, and matching baggage to passenger manifests to confirm that the passenger and his or her baggage are on the same plane. TSA’s use of alternative screening procedures has involved trade-offs in security effectiveness. 
However, the extent of the security trade-offs is not fully known because TSA has not tested the effectiveness of alternative screening procedures in an operational environment. In our July 2006 report on TSA’s use of alternative screening procedures, we recommended that TSA conduct local testing of alternative screening procedures to determine whether checked baggage TSOs can detect simulated improvised explosives when using these procedures. Since then, TSA has conducted covert testing of alternative screening procedures at some airports. TSA is pursuing several mitigating actions to reduce the need to use alternative screening procedures. These actions include deploying more efficient checked baggage screening systems, strengthening its coordination with groups such as tour operators to better plan for increases in baggage screening needs, deploying “optimization teams” to airports that were frequently using alternative screening procedures to determine why the procedures were being used so often and to suggest remedies, and deploying additional EDS machines. However, although TSA has taken steps to reduce the need to use alternative screening procedures at airports, TSA’s oversight of FSDs’ use of alternative screening procedures could be strengthened. For example, in July 2006, we reported that FSDs and their staff did not always accurately report the occurrences when a particular alternative baggage screening procedure was used, impeding TSA’s ability to reliably determine how often and for how long the alternative screening procedures were used. In addition, FSDs and their staff did not always report the use of alternative screening procedures as required. TSA officials stated that they were working with FSDs to correct these reporting problems and had issued guidance clarifying requirements for reporting alternative screening procedures. 
Additionally, while TSA is working to minimize the need to use alternative screening procedures at airports, TSA has not created performance measures or targets related to the use of these procedures. By creating a performance measure for the use of alternative screening procedures as part of the checked baggage screening index or as a stand-alone measure, TSA could gauge whether it is making progress towards minimizing the need to use these procedures at airports and have more complete information on how well the overall checked baggage screening system is performing. Furthermore, performance targets for the use of alternative screening procedures would provide an indicator of how much risk TSA is willing to accept in using these procedures, and TSA’s monitoring of this indicator would identify when it has exceeded the level of risk that it has determined acceptable. We recommended that TSA develop performance measures and performance targets for the use of alternative screening procedures. Additionally, in September 2006, Congress directed TSA to take a variety of actions—most of which we recommended in our July 2006 report—to monitor and assess the use of alternative screening procedures, including (1) develop performance measures and performance targets for the use of alternative screening procedures; (2) track the use of alternative screening procedures at airports; (3) assess the effectiveness of these measures; (4) conduct covert testing at airports that use alternative screening procedures; (5) develop a plan to stop alternative screening procedures at airports as soon as practicable; and (6) report to the Senate and House Committees on Appropriations, the Senate Committee on Commerce, Science, and Transportation, and the House Committee on Homeland Security by January 23, 2007, on implementation of these requirements. 
According to TSA officials, the agency is continuing to monitor and track the use of alternative screening procedures, which has allowed it to identify areas for improvement nationwide and address local issues to minimize the need for alternative screening procedures. TSA is supporting the development and deployment of technologies to strengthen commercial aviation security but faces management and funding challenges. For example, TSA and DHS’s S&T are exploring new passenger checkpoint screening technologies to enhance the detection of explosives and other threats. However, limited progress has been made in fielding explosives detection technology at passenger screening checkpoints, in part due to challenges DHS S&T and TSA face in coordinating research and development efforts. In addition, TSA has begun to systematically plan for the optimal deployment of checked baggage screening systems, but resources have not been made available to fund the installation of in-line EDS machines on a large-scale basis. To enhance passenger checkpoint screening, TSA is currently working with DHS S&T’s Transportation Security Laboratory to develop new passenger checkpoint screening technologies. TSA designated about $80.5 million in fiscal year 2007 to acquire and deploy emerging screening technologies, and has requested $81.6 million for similar purposes in fiscal year 2008. Our preliminary work has found that of the various research and development projects funded by TSA and DHS S&T, six checkpoint screening projects are currently in the applied research or advanced development phases. Projects in the applied research phase include liquid bottle screening devices, explosives trace portals that will reduce the size of the current explosives trace portals at checkpoints, and shoe scanners. Three other projects in the advanced development phase include whole body imagers, cast and prosthesis scanners, and checkpoint explosives detection systems. 
TSA plans to place whole body imagers and checkpoint explosives detection systems at certain airport locations to collect initial operational data, and plans to continue to conduct similar tests of the cast and prosthesis scanners during fiscal year 2007. Table 2 provides a description and status of the passenger checkpoint screening technologies TSA and DHS S&T are currently researching and developing. Despite TSA’s efforts to develop passenger checkpoint screening technologies, preliminary results from our ongoing work suggest that limited progress has been made in fielding explosives detection technology at checkpoints. For example, TSA’s fiscal year 2007 budget justification requested $80.5 million in budget authority to acquire and deploy screening technologies emerging from research and development programs, including the acquisition of 92 additional explosives trace portal machines and funds to operate and service approximately 434 portals. TSA had anticipated that the portals would be in operation throughout the country during fiscal year 2007. However, due to performance and maintenance issues, TSA halted the acquisition and deployment of the portals in June 2006, and the acquisition of additional portals is contingent on resolution of these issues. As a result, TSA has fielded less than 25 percent of the 434 portals it projected it would deploy by fiscal year 2007. In addition to the portals, TSA has fallen behind in its projected acquisition of other emerging screening technologies. For example, the acquisition of 91 whole body imagers has been delayed in part because TSA needed to develop a means to protect the privacy of passengers screened by this technology. For fiscal year 2008, TSA has requested an additional $81.6 million to evaluate, acquire, and install emerging technologies. We will continue to assess DHS S&T and TSA’s deployment of checkpoint screening technologies during our ongoing review. 
While TSA and DHS have taken steps to coordinate the research, development, and deployment of checkpoint technologies, our ongoing work has identified that challenges remain. For example, TSA and DHS S&T officials stated that they encountered difficulties in coordinating research and development efforts due to reorganizations of TSA and S&T. A senior TSA official also stated that while TSA and DHS S&T have executed a memorandum of understanding to establish the services that the Transportation Security Laboratory is to provide to TSA, coordination with S&T remains a challenge because the organizations have not fully implemented the terms of the memorandum of understanding. In addition to challenges in coordinating with each other, our preliminary observations suggest that TSA and DHS S&T also face challenges in coordinating with external stakeholders. Specifically, while TSA and DHS S&T have taken steps to coordinate efforts with external stakeholders, some airport managers we interviewed in October 2006 stated that TSA did not adequately communicate with them about when new technologies were to be deployed in their airports. TSA officials stated that they do not have a master schedule that establishes milestones for conducting operational tests and evaluations of emerging technologies or for deploying these technologies. Lack of such a schedule could limit TSA’s ability to coordinate operational tests and deployments with stakeholders. Additionally, TSA does not yet have a strategic plan to guide its efforts to acquire and deploy screening technologies. As part of our ongoing work, we will further assess TSA’s efforts to develop an overall strategic approach to guide the deployment of checkpoint technologies. A lack of a strategic plan or approach could limit TSA’s ability to deploy emerging technologies at those airport locations deemed at highest risk. 
TSA officials stated that the agency is in the process of developing a strategic plan for the checkpoint that is scheduled to be completed in early 2007. TSA officials stated that the completion of the plan was delayed due to competing priorities, including ensuring the screening of checked baggage using explosives detection systems and responding to new and emerging threats, such as homemade explosives. TSA officials also said that reorganizations at TSA and DHS S&T have contributed to the delay. It is important that TSA continue to invest in and develop technologies for detecting explosives, as demonstrated by the alleged August 2006 terrorist plot to detonate liquid explosives on board multiple commercial aircraft bound for the United States from the United Kingdom. The President’s fiscal year 2007 budget request notes that emerging checkpoint technology will enhance the detection of prohibited items, especially firearms and explosives, on passengers. We are currently evaluating DHS’s and TSA’s progress in planning for, managing, and deploying research and development programs in support of airport checkpoint screening operations and will report on the results of our work later this year. At checked baggage screening stations, TSA has been effective in deploying EDS machines and ETD machines. However, initial deployment of EDS machines in a stand-alone mode—usually in airport lobbies—and ETD machines resulted in operational inefficiencies and security risks as compared with using EDS machines integrated in-line with airport baggage conveyor systems. As we reported in March 2005, to initially deploy EDS and ETD equipment to screen 100 percent of checked baggage for explosives, TSA implemented interim airport lobby solutions rather than in-line EDS baggage screening systems. 
TSA officials stated that they used EDS machines in stand-alone mode and ETD machines as an interim solution in order to meet the congressional deadline for screening all checked baggage for explosives. Officials stated that they employed these interim solutions because of the significant costs required to install in-line systems and the need to reconfigure many airports’ baggage conveyor systems to accommodate the equipment. TSA’s use of stand-alone EDS and ETD machines has required a greater number of TSOs and resulted in screening fewer bags for explosives each hour. Additionally, because in-line EDS checked baggage screening systems can significantly reduce the need for TSOs to handle baggage, installing them may also reduce the number of TSO on-the-job injuries. Moreover, screening with in-line EDS systems could also result in security benefits by reducing congestion in airport lobbies and reducing the need for TSA to use alternative screening procedures. In March 2005, we reported that at nine airports where TSA had agreed to help fund the installation of in-line EDS systems, TSA estimated that screening with in-line EDS machines could save the federal government about $1.3 billion over 7 years. In February 2006, TSA reported that a savings of approximately $4.7 billion could be realized over a period of 20 years by installing optimal checked baggage screening systems, including in-line EDS machines, at the airports with the highest checked baggage volumes. However, TSA also reported in February 2006 that many of the initial in-line EDS systems had not achieved the degree of anticipated savings initially estimated. TSA has since determined that recent improvements to the design of the in-line EDS systems and EDS screening technology now offer the opportunity for higher-performance and lower-cost screening systems. 
In June 2006, TSA issued guidance providing airports with options, ideas, and suggestions to choose from when considering security requirements in the planning and design of new or renovated airport facilities. This guidance also provides recommendations for airports in constructing in-line systems. TSA has begun to systematically plan for the optimal deployment of checked baggage screening systems, but resources have not been made available to fund the installation of in-line EDS machines on a large-scale basis. In March 2005, we reported that while TSA had made progress in deploying EDS and ETD machines, it had not conducted a systematic, prospective analysis of the optimal deployment of these machines to achieve long-term savings and enhanced efficiencies and security. We recommended that TSA systematically evaluate baggage screening needs at airports. In February 2006, TSA released its strategic planning framework for checked baggage screening aimed at increasing security through deploying more EDS machines, lowering program life-cycle costs, minimizing impacts to TSA and airport and airline operations, and providing a flexible security infrastructure. According to TSA, the framework will be used to establish a comprehensive strategic plan for TSA’s checked baggage screening program. As part of this planning effort, TSA identified, among other things, the top 25 airports that should first receive federal funding for projects related to the installation of in-line EDS systems, and the optimal checked baggage screening solutions for the 250 airports with the highest checked baggage volumes. DHS’s budget execution reports for TSA for fiscal year 2007 show that TSA received $524.4 million for the purchase, installation, maintenance, and operations integration of EDS and ETD machines. The President’s fiscal year 2008 budget request includes $692 million for these activities—an increase of $167.6 million over the previous year’s appropriations. 
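The cited funding increase can be verified with simple arithmetic (a sketch in Python; the variable names are illustrative, and all dollar figures, in millions, come from this testimony):

```python
# EDS/ETD purchase, installation, maintenance, and operations integration
# funding cited in the testimony (millions of dollars).
fy2007_received = 524.4    # per DHS budget execution reports for TSA
fy2008_requested = 692.0   # President's fiscal year 2008 budget request

increase = fy2008_requested - fy2007_received
print(f"Requested increase: ${increase:.1f} million")  # $167.6 million, as cited
```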
Most (about 72 percent) of this increase is for installation of EDS and ETD machines. In February 2006, TSA officials reported that if some of the top 25 airports do not receive in-line checked baggage screening systems, they will require additional screening equipment to be placed in airport lobbies and additional TSO staffing in order to remain in compliance with the mandate for screening all checked baggage using explosive detection systems. Additionally, in May 2006, TSA reported that under current investment levels, the installation of optimal checked baggage screening systems would not be completed until approximately 2024. According to TSA, as of September 30, 2006, 36 airports had operational in-line systems—18 airports had airport-wide systems, while the remaining 18 airports had systems at a particular terminal or terminals. Over the next 2 years, TSA expects full and partial in-line systems to become operational at 25 additional airports. This level of effort, according to TSA, balances resources with other risks to transportation security. In March 2005, we reported that TSA and airport operators were relying on several sources of funding to construct in-line checked baggage screening systems. One source of funding airport operators initially used was the Federal Aviation Administration’s Airport Improvement Program, which traditionally funds grants to maintain safe and efficient airports. With Airport Improvement Program funds no longer available after fiscal year 2003 for this purpose, airports turned to other sources of federal funding to construct in-line systems. The fiscal year 2003 Consolidated Appropriations Resolution approved the use of letter of intent agreements as a vehicle to leverage federal government and industry funding to support facility modification costs for installing in-line EDS baggage screening systems. 
TSA also uses other transaction agreements as an administrative vehicle to directly fund, with no long-term commitments, airport operators for smaller in-line airport modification projects. Under these agreements, as implemented by TSA, the airport operator provides a portion of the funding required for the modification. To fund the procurement and installation of explosive detection systems in-line, TSA uses annual appropriations and the $250 million mandatory appropriation of the Aviation Security Capital Fund. For example, in fiscal years 2005, 2006, and 2007, TSA received appropriations of $175 million, $180 million, and $141.4 million, respectively, for the procurement of explosive detection systems. It received appropriations of $45 million in fiscal years 2005 and 2006, and $138 million in fiscal year 2007 for the installation of explosive detection systems, in addition to the $250 million made available through the capital fund. Congress also authorized an additional appropriation of $400 million per year through fiscal year 2007 for airport security improvement projects, including the installation of in-line EDS systems. However, appropriations have not been made under this authorization. Figure 5 shows TSA obligated funding levels for EDS installation and integration. TSA is collaborating with key stakeholders to identify funding and financing strategies for installing optimal checked baggage screening systems. In August 2006, the Aviation Security Advisory Committee baggage screening investment study working group, of which TSA is a member, released a study outlining an investment strategy for funding TSA’s checked baggage screening program. According to TSA, this study, which has been provided to the Office of Management and Budget for review, is the final component of TSA’s strategic plan for checked baggage screening. 
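The year-by-year appropriations cited above can be totaled with simple arithmetic (a sketch; the figures, in millions of dollars, are taken directly from this testimony, with the Aviation Security Capital Fund tracked separately):

```python
# Appropriations for explosive detection systems cited in the testimony
# (millions of dollars, by fiscal year).
procurement = {2005: 175.0, 2006: 180.0, 2007: 141.4}
installation = {2005: 45.0, 2006: 45.0, 2007: 138.0}  # excludes the Aviation
                                                      # Security Capital Fund
capital_fund = 250.0  # mandatory appropriation available in addition

print(f"Procurement, FY2005-2007:  ${sum(procurement.values()):.1f} million")
print(f"Installation, FY2005-2007: ${sum(installation.values()):.1f} million")
```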
The investment study recommended four investment options, including (1) tax credit bonds, (2) continued appropriations for the procurement and installation of EDS machines, (3) combined line items for the purchase and installation of EDS machines in order to provide TSA increased flexibility in directing the funding where it is most needed, and (4) enhanced eligibility for the Passenger Facility Charge (PFC). The working group estimated that under its recommended approach, the present value cost of the checked baggage screening program is $23.3 billion over the next 20 years. Of these costs, the aviation industry is projected to bear $3.6 billion and the federal government is projected to bear $19.7 billion. According to the working group, the net effect of investing in optimal systems would be to reduce overall life-cycle costs by $1.2 billion relative to the current rate of investment, primarily through TSO staff cost savings and avoidance of increased TSO staff costs in the future. In addition, in its August 2006 study, the working group identified that in order to achieve these cost savings, a formal cost management process is needed given evolving technology and design practices, the various parties involved in design and operation, and the amount of capital investment to be made over the next several years. The working group identified a variety of actions that should be taken by Congress, TSA, and the aviation industry, including implementing a structured process for ongoing government and industry collaboration and increasing program management resources to provide for more substantial TSA involvement throughout the planning, design, and construction process. (App. I includes a complete list of the specific actions the working group identified for Congress, TSA, and the aviation industry.) 
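The working group’s cost split is internally consistent, as a quick arithmetic check shows (a sketch; all figures, in billions of dollars, come from the August 2006 study as described in this testimony):

```python
# 20-year present value costs under the working group's recommended approach
# (billions of dollars).
industry_share = 3.6
federal_share = 19.7
projected_savings = 1.2  # life-cycle savings versus the current investment rate

total_cost = industry_share + federal_share
print(f"Total present value cost: ${total_cost:.1f} billion")  # $23.3 billion, as cited
```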
In October 2005, we reported that TSA had taken a number of actions intended to strengthen domestic air cargo security, but, as we reported, factors existed that may have limited their effectiveness. Since our report was released, TSA has issued an air cargo security rule that revised some of the requirements air carriers are required to follow to ensure air cargo security, and has drafted new and revised security programs for domestic and foreign passenger and all-cargo carriers that contain more specific security requirements. However, more work remains to ensure that TSA has a comprehensive strategy to secure air cargo that fully incorporates risk management principles. TSA has taken steps towards applying a risk-based management approach to addressing domestic air cargo security, including conducting threat assessments. However, opportunities exist to strengthen these efforts. Applying a risk management framework to decision making is one tool to help provide assurance that programs designed to combat terrorism are properly prioritized and focused. TSA has underscored the importance of implementing a risk-based approach that protects against known threats, but that is also sufficiently flexible to direct resources to mitigate new and emerging threats. According to TSA, the ideal risk model would be one that could be used throughout the transportation sector and applicable to different threat scenarios. As part of TSA’s risk-based approach, TSA issued an Air Cargo Strategic Plan in November 2003 that focused on securing the domestic air cargo supply chain. TSA coordinated with air cargo industry stakeholders representing passenger and all-cargo carriers to develop this plan. TSA officials stated that they are revising their existing domestic air cargo strategic plan, but as of February 5, 2007, agency officials had not set a timeframe for when TSA will complete this revision. 
TSA’s Air Cargo Strategic Plan describes, among other things, an approach for screening or reviewing information on all domestic air cargo shipments to determine their level of relative risk, ensuring that 100 percent of cargo identified as posing an elevated risk is physically inspected, and pursuing technological solutions to physically inspect air cargo. TSA officials anticipate that the agency’s system for targeting domestic air cargo, referred to as Freight Assessment, will minimize the reliance on the random physical inspections currently conducted by air carriers. According to agency plans, air carriers would receive targeting information from TSA on specific cargo items identified as posing an elevated risk. Upon notification by TSA’s Freight Assessment System, air carrier personnel would be responsible for conducting the inspection of cargo identified as elevated risk. In October 2005, we reported that although TSA had identified data elements that could be used in its Freight Assessment System, the agency had not yet ensured that these data are complete, accurate, and current. We recommended that TSA take steps to do so; however, as of February 2007, TSA has not yet addressed this recommendation. Further, while TSA planned to phase in implementation and deployment of the targeting system for cargo transported on passenger carriers during calendar years 2006 and 2007, as of February 2007, TSA’s system for targeting domestic cargo is still under development. In addition to developing a strategic plan, a risk management framework in the homeland security context should include risk assessments, which typically involve three key elements—threats, vulnerabilities, and criticality or consequence. Information from these three assessments provides input for setting priorities, evaluating alternatives, allocating resources, and monitoring security initiatives. 
In September 2005, TSA’s Office of Intelligence (formerly known as the Transportation Security Intelligence Service) completed an overall threat assessment for air cargo, which identified general and specific threats to domestic air cargo. However, we reported that TSA had not conducted a vulnerability assessment to identify the range of security weaknesses that could be exploited by terrorists. TSA plans to conduct this assessment of domestic air cargo vulnerabilities—as we recommended—and expects it to be completed in late 2007. In October 2005, we reported that TSA had taken a number of actions intended to strengthen domestic air cargo security, but that factors existed that may limit the effectiveness of these actions. For example, we reported that TSA had established a centralized Known Shipper database to streamline the process by which shippers (individuals and businesses) are made known to carriers with whom they conduct business. However, at that time, the information in this database on the universe of shippers was incomplete, because participation in this database was voluntary. Moreover, we identified problems with the reliability of the information in the database. TSA estimated that the agency’s centralized database contained information on about 400,000 known shippers, or less than one-third of the total population of known shippers, which is estimated to be about 1.5 million. In May 2006, TSA issued an air cargo security rule that included a number of provisions aimed at enhancing the security of air cargo. For example, TSA made participation in the Known Shipper database mandatory, requiring air carriers and indirect air carriers to submit information on their known shippers to TSA’s Known Shipper database. However, the May 2006 security rule did not modify TSA’s current process for validating known shippers, which remains the responsibility of indirect air carriers and air carriers. 
Accordingly, passenger, all-cargo, and indirect air carriers will continue to be responsible for determining the integrity of the shipper, which may allow for potential conflicts of interest because air carriers that conduct business with shippers will also continue to have the authority to validate these same shipping customers. In October 2005, we also reported that TSA had established requirements for air carriers to randomly inspect air cargo, but had exempted some cargo from inspection, potentially creating security weaknesses. We recommended that TSA examine the rationale for existing air cargo inspection exemptions, determine whether such exemptions leave the air cargo system unacceptably vulnerable to terrorist attack, and make any needed adjustments to the exemptions. TSA established a working group to examine the rationale for existing air cargo inspection exemptions, and in October 2006, issued a security directive and emergency amendment to domestic and foreign passenger air carriers operating within and from the United States that limited the inspection exemptions. According to TSA officials, the agency is still considering revisions to the inspection exemptions for cargo being transported into the United States. In October 2005, we also reported that TSA conducted compliance inspections of air carriers to ensure that they were complying with existing air cargo security requirements. These compliance inspections ranged from a comprehensive review of the implementation of all air cargo security requirements by an air carrier or indirect air carrier to a review of just one or several security requirements. However, TSA had not developed measures to assess the adequacy of air carrier compliance with air cargo security requirements, or assessed the results of its compliance inspections to target higher-risk air carriers or indirect air carriers for future reviews. 
More recently, TSA reported that the agency has increased the number of inspectors dedicated to conducting air cargo inspections, and has begun analyzing the results of the compliance inspections to help focus its inspections on those entities that have the highest rates of noncompliance. For fiscal year 2008, the President’s budget includes a request of about $56 million for TSA’s air cargo security program, which includes funding for, among other things, 300 air cargo security inspectors, TSA-certified canines for air cargo-related activities, and the development and deployment of a Freight Assessment System to target elevated-risk cargo. In addition to taking steps to strengthen inspections of air cargo, TSA is working to enhance air cargo screening technologies. Specifically, TSA, together with DHS’s S&T, is currently developing and pilot testing a number of technologies to assess their applicability to inspecting and securing air cargo. These efforts include: an air cargo explosives detection pilot program implemented at three airports, testing the use of explosive detection systems, explosive trace detectors, standard X-ray machines, canine teams, technologies that can locate a stowaway through detection of a heartbeat or increased carbon dioxide levels in cargo, and manual inspections of air cargo; an EDS pilot program, which is testing the use of computer-aided tomography to measure the densities of objects in order to identify potential explosives in air cargo; an air cargo security seals pilot, which is exploring the viability of potential security countermeasures, such as tamper-evident security seals, for use with certain classifications of exempt cargo; the use of hardened unit-loading devices, which are containers made of blast-resistant materials that could withstand an explosion onboard the aircraft; and the use of pulsed fast neutron analysis, which allows for the identification of the material signatures of contraband, explosives, and other 
threat objects. According to TSA officials, the agency will determine whether it will require the use of any of these technologies once it has completed its assessments and analyzed the results. However, TSA has not established a timeframe for completing these assessments. According to TSA officials, the federal government and the air cargo industry face several challenges that must be overcome to effectively implement any of these technologies to inspect or secure air cargo. These challenges include factors such as the nature, type, and size of the cargo; environmental and climatic conditions; inspection throughput rates; staffing and training issues for individuals who inspect air cargo; the location of air cargo facilities (centralized versus decentralized); cost and availability; and employee health and safety concerns. To effectively inspect domestic air cargo that TSA deems to pose an elevated risk, the agency will need to make decisions regarding which technologies will be used to inspect such cargo. According to TSA officials, there is no single technology capable of efficiently and effectively inspecting all types of air cargo for the full range of potential terrorist threats, including explosives and weapons of mass destruction. We will soon report on the second phase of our review of air cargo security, which focuses on DHS’s efforts to secure air cargo that is transported into the United States from abroad, referred to as inbound air cargo. This report will address (1) the actions TSA and CBP have taken to secure inbound air cargo, and how, if at all, these efforts could be strengthened; and (2) the practices the air cargo industry and select foreign governments have adopted that could be used to enhance TSA’s efforts to strengthen inbound air cargo security, and the extent to which TSA and CBP have worked with foreign governments to enhance their air cargo security efforts. 
DHS and TSA have undertaken numerous initiatives to strengthen the security of the nation’s aviation system, and should be commended for these efforts. Meeting the congressional mandates to screen airline passengers and checked baggage alone was a tremendous challenge. Since that time, TSA has turned its attention to strengthening passenger prescreening, more efficiently allocating and deploying TSOs, strengthening screening procedures, developing and deploying more effective and efficient screening technologies, and improving domestic air cargo security, among other efforts. TSA has made progress in all of these areas, but opportunities exist to further strengthen its efforts, in particular in the areas of risk-based decision making, program planning and monitoring, and stakeholder collaboration. Our work has shown—in homeland security and in other areas—that a comprehensive risk management approach can help inform decision makers in the allocation of finite resources to the areas of greatest need. We are encouraged that risk management has been a cornerstone of DHS and TSA policy, and that TSA has incorporated risk-based decision making into a number of its efforts. Despite this commitment, however, TSA will continue to face difficult decisions and trade-offs—particularly as threats to commercial aviation evolve—regarding acceptable levels of risk and the need to balance security with efficiency and customer service. We recognize that doing so will not be easy. In implementing a risk-based approach, DHS and TSA must also address the challenges we identified in our work related to program planning, risk assessments, and implementation and monitoring of aviation security programs. Without rigorous planning and prioritization, and knowledge of the effectiveness of aviation security programs, DHS and TSA cannot be sure that they are focusing their finite resources on the areas of greatest need. 
Risk-based decision making will be particularly important as TSA begins to place more focus on the security of non-aviation modes of transportation, including passenger rail, and resource decisions and related trade-offs will have to be made not only within aviation, but across all transportation modes. TSA must also continue its work to strengthen partnerships with other federal agencies, state and local governments, the private sector, and international partners to improve the security of the commercial aviation system. Securing all aspects of commercial aviation is a shared responsibility among these parties. Accordingly, it is important that all stakeholders be involved, as appropriate, in coordinating security-related priorities and activities, reviewing and sharing best practices, and developing common security frameworks. Such efforts are particularly important with international partners due to our interdependence with foreign nations in securing the aviation system—as evidenced by the recent alleged terrorist plot to detonate liquid explosives onboard multiple aircraft departing the United Kingdom for the United States. TSA has strengthened its coordination efforts with domestic and international partners, which has aided its security efforts and helped to avoid duplication of effort. Existing risk-based decision making, program planning and monitoring, and coordination efforts will need to continue and be strengthened as TSA works to address continuing challenges and threats facing commercial aviation. Mr. Chairman, this concludes my statement. I would be pleased to answer any questions that you or other members of the committee may have at this time. For further information on this testimony, please contact Cathleen A. Berrick, (202) 512-3404 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. 
In addition to the contact named above, Mike Bollinger, Kristy Brown, Carissa Bryant, Tony Cheesebrough, Kevin Copping, Christine Fossett, Samantha Goodman, John Hansen, Mike Harmond, Dawn Hoff, Suzanne Heimbach, Adam Hoffman, Anne Laffoon, Thomas Lombardi, Steve Morris, Katrina Moss, Mona Nichols-Blake, Leslie Sarapu, Brian Sklar, Edith Sohna, Maria Strudwick, Meg Ullengren, and Candice Wright made contributions to this testimony. In August 2006, the Aviation Security Advisory Committee baggage screening investment study working group, of which TSA is a member, released a study outlining an investment strategy for the funding of TSA’s checked baggage screening program. The working group’s investment study identified five key actions that should be taken by Congress, TSA, and the aviation industry, respectively, with regard to funding these systems. Table 3 provides a summary of the key actions identified by the working group. Transportation Security Administration’s Office of Intelligence: Responses to Post Hearing Questions on Secure Flight. GAO-06-1051R. Washington, D.C.: August 4, 2006. Aviation Security: Management Challenges Remain for the Transportation Security Administration’s Secure Flight Program. GAO-06-864T. Washington, D.C.: June 14, 2006. Aviation Security: Significant Management Challenges May Adversely Affect Implementation of the Transportation Security Administration’s Secure Flight Program. GAO-06-374T. Washington, D.C.: Feb. 9, 2006. Aviation Security: Transportation Security Administration Did Not Fully Disclose Uses of Personal Information During Secure Flight Program Testing in Initial Privacy Notices, but Has Recently Taken Steps to More Fully Inform the Public. GAO-05-864R. Washington, D.C.: July 22, 2005. Aviation Security: Secure Flight Development and Testing Under Way, but Risks Should Be Managed as System Is Further Developed. GAO-05-356. Washington, D.C.: March 28, 2005. 
Aviation Security: Measures for Testing the Effect of Using Commercial Data for the Secure Flight Program. GAO-05-324. Washington, D.C.: Feb. 23, 2005. Aviation Security: Challenges Delay Implementation of Computer- Assisted Passenger Prescreening System. GAO-04-504T. Washington, D.C.: March 17, 2004. Aviation Security: Computer-Assisted Passenger Prescreening System Faces Significant Implementation Challenges. GAO-04-385. Washington, D.C.: Feb. 13, 2004. Aviation Security: TSA Oversight of Checked Baggage Screening Procedures Could Be Strengthened. GAO-06-869. Washington, D.C.: July 28, 2006. Aviation Security: TSA Has Strengthened Efforts to Plan for the Optimal Deployment of Checked Baggage Screening Systems but Funding Uncertainties Remain. GAO-06-875T. Washington, D.C.: June 29, 2006. Aviation Security: Enhancements Made in Passenger and Checked Baggage Screening, but Challenges Remain. GAO-06-371T. Washington, D.C.: April 4, 2006. Aviation Security: Transportation Security Administration Has Made Progress in Managing a Federal Security Workforce and Ensuring Security at U.S. Airports, but Challenges Remain. GAO-06-597T. Washington, D.C.: April 4, 2006. Aviation Security: Better Planning Needed to Optimize Deployment of Checked Baggage Screening Systems. GAO-05-896T. Washington, D.C.: July 13, 2005. Aviation Security: Screener Training and Performance Measurement Strengthened, but More Work Remains. GAO-05-457. Washington, D.C.: May 2, 2005. Aviation Security: Systematic Planning Needed to Optimize the Deployment of Checked Baggage Screening Systems. GAO-05-365. Washington, D.C.: March 15, 2005. Aviation Security: Challenges Exist in Stabilizing and Enhancing Passenger and Baggage Screening Operations. GAO-04-440T. Washington, D.C.: Feb. 12, 2004. Airport Passenger Screening: Preliminary Observations on Progress Made and Challenges Remaining. GAO-03-1173. Washington, D.C.: Sept. 24, 2003. 
Aviation Security: Federal Action Needed to Strengthen Domestic Air Cargo Security. GAO-06-76. Washington, D.C.: Oct. 17, 2005. Aviation Safety: Undeclared Air Shipments of Dangerous Goods and DOT’s Enforcement Approach. GAO-03-22. Washington, D.C.: Jan. 10, 2003. Aviation Security: Vulnerabilities and Potential Improvements for the Air Cargo System. GAO-03-344. Washington, D.C.: Dec. 20, 2002. Aviation Security: Further Study of Safety and Effectiveness and Better Management Controls Needed If Air Carriers Resume Interest in Deploying Less-than-Lethal Weapons. GAO-06-475. Washington, D.C.: May 26, 2006. Aviation Security: Federal Air Marshal Service Could Benefit from Improved Planning and Controls. GAO-06-203. Washington, D.C.: Nov. 28, 2005. Aviation Security: Flight and Cabin Crew Member Security Training Strengthened, but Better Planning and Internal Controls Needed. GAO-05-781. Washington, D.C.: Sept. 6, 2005. Aviation Security: Federal Air Marshal Service Is Addressing Challenges of Its Expanded Mission and Workforce, but Additional Actions Needed. GAO-04-242. Washington, D.C.: Nov. 19, 2003. Aviation Security: Information Concerning the Arming of Commercial Pilots. GAO-02-822R. Washington, D.C.: June 28, 2002. Homeland Security: Agency Resources Address Violations of Restricted Airspace, but Management Improvements Are Needed. GAO-05-928T. Washington, D.C.: July 21, 2005. General Aviation Security: Increased Federal Oversight Is Needed, but Continued Partnership with the Private Sector Is Critical to Long-Term Success. GAO-05-144. Washington, D.C.: Nov. 10, 2004. Aviation Security: Further Steps Needed to Strengthen the Security of Commercial Airport Perimeters and Access Controls. GAO-04-728. Washington, D.C.: June 4, 2004. Aviation Security: Challenges in Using Biometric Technologies. GAO-04-785T. Washington, D.C.: May 19, 2004. Nonproliferation: Further Improvements Needed in U.S. Efforts to Counter Threats from Man-Portable Air Defense Systems. 
GAO-04-519. Washington, D.C.: May 13, 2004. Aviation Security: Factors Could Limit the Effectiveness of the Transportation Security Administration’s Efforts to Secure Aerial Advertising Operations. GAO-04-499R. Washington, D.C.: March 5, 2004. The Department of Homeland Security Needs to Fully Adopt a Knowledge-based Approach to Its Counter-MANPADS Development Program. GAO-04-341R. Washington, D.C.: Jan. 30, 2004. Homeland Security: Progress Has Been Made to Address the Vulnerabilities Exposed by 9/11, but Continued Federal Action Is Needed to Further Mitigate Security Risks. GAO-07-375. Washington, D.C.: Jan. 24, 2007. Terrorist Watch List Screening: Efforts to Help Reduce Adverse Effects on the Public. GAO-06-1031. Washington, D.C.: Sept. 29, 2006. Transportation Security Administration: More Clarity on the Authority of Federal Security Directors Is Needed. GAO-05-935. Washington, D.C.: Sept. 23, 2005. Aviation Security: Improvement Still Needed in Federal Aviation Security Efforts. GAO-04-592T. Washington, D.C.: March 30, 2004. Aviation Security: Efforts to Measure Effectiveness and Strengthen Security Programs. GAO-04-285T. Washington, D.C.: Nov. 20, 2003. Aviation Security: Efforts to Measure Effectiveness and Address Challenges. GAO-04-232T. Washington, D.C.: Nov. 5, 2003. Aviation Security: Progress Since September 11, 2001, and the Challenges Ahead. GAO-03-1150T. Washington, D.C.: Sept. 9, 2003. Airport Finance: Past Funding Levels May Not Be Sufficient to Cover Airports’ Planned Capital Development. GAO-03-497T. Washington, D.C.: Feb. 25, 2003. Airport Finance: Using Airport Grant Funds for Security Projects Has Affected Some Development Projects. GAO-03-27. Washington, D.C.: Oct. 15, 2002. Commercial Aviation: Financial Condition and Industry Responses Affect Competition. GAO-03-171T. Washington, D.C.: Oct. 2, 2002. Aviation Security: Transportation Security Administration Faces Immediate and Long-Term Challenges. GAO-02-971T. 
Washington, D.C.: July 25, 2002. Aviation Security: Vulnerabilities in, and Alternatives for, Preboard Screening Security Operations. GAO-01-1171T. Washington, D.C.: Sept. 25, 2001. Aviation Security: Weaknesses in Airport Security and Options for Assigning Screening Responsibilities. GAO-01-1165T. Washington, D.C.: Sept. 21, 2001. Aviation Security: Terrorist Acts Demonstrate Urgent Need to Improve Security at the Nation’s Airports. GAO-01-1162T. Washington, D.C.: Sept. 20, 2001. Aviation Security: Terrorist Acts Illustrate Severe Weaknesses in Aviation Security. GAO-01-1166T. Washington, D.C.: Sept. 20, 2001. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Transportation Security Administration (TSA), established in November 2001, has developed and implemented a variety of programs to secure the commercial aviation system. To implement these efforts, TSA funding related to aviation security has totaled about $20 billion since fiscal year 2004. Other Department of Homeland Security (DHS) components, such as U.S. Customs and Border Protection (CBP) and the Science and Technology Directorate (S&T), also play roles in securing commercial aviation. In this testimony, we address the efforts TSA has taken or planned to strengthen aviation security, and the challenges that remain, in three key areas: airline passenger prescreening, airline passenger and checked baggage screening, and air cargo screening. GAO's comments are based on issued GAO reports and testimonies and our preliminary observations from ongoing work on TSA's passenger checkpoint screening procedures and technologies, and staffing standards for Transportation Security Officers (TSO). DHS and TSA have undertaken numerous initiatives to strengthen the security of the nation's aviation system, and should be commended for these efforts. However, more work remains. Meeting the congressional mandates to screen airline passengers and checked baggage alone was a tremendous challenge. Since meeting those mandates, TSA has turned its attention to, among other things, strengthening passenger prescreening; more efficiently allocating, deploying, and managing the TSO workforce; strengthening screening procedures; developing and deploying more effective and efficient screening technologies; and improving domestic air cargo security. Some of the actions taken by TSA in these areas were in response to GAO recommendations. For example, consistent with GAO's recommendation to strengthen checked baggage screening, TSA has developed a strategic planning framework and identified several funding and financing strategies for installing optimal checked baggage screening systems. 
While TSA has undertaken numerous efforts to strengthen aviation security, GAO found that DHS and TSA could strengthen their risk-based decision-making efforts and collaboration with stakeholders. For example, as TSA moves forward with Secure Flight--TSA's prospective domestic passenger prescreening program--it will need to employ a range of program management disciplines, which we previously found missing, to control program cost, schedule, performance, and privacy risks. TSA has put in place a new management team, but it is too early to know how this change will affect the program's development. In addition, while TSA has tested some proposed modifications to passenger screening procedures at airports to help determine whether to implement the changes, GAO identified that TSA's data collection and analyses could be improved. GAO also found that limited progress has been made in developing and deploying technologies due to planning and funding challenges. For example, limited progress has been made in fielding explosives detection technology at passenger screening checkpoints, and while TSA has begun to systematically plan for the optimal deployment of checked baggage screening systems and to identify funding and financing strategies for installing these systems, the agency has identified that under current investment levels, installation of optimal checked baggage screening systems will not be completed until approximately 2024. Additionally, the federal government and the air cargo industry face several challenges that must be overcome to effectively implement technologies to inspect air cargo, such as ensuring that air cargo can be inspected in a timely manner to meet the delivery time frames of air carriers. GAO also found that more work is needed to fully implement a risk-based approach to securing air cargo, including finalizing a methodology and schedule for completing assessments of air cargo vulnerabilities and critical assets. 
TSA stated that the agency intends to perform a vulnerability assessment of U.S. air cargo operations and activities, as recommended by GAO, and plans to complete this assessment in 2007.
We found significant pay problems at the six Army Guard units we audited related to processes, human capital, and systems. The six units we audited, including three special forces and three military police units, were:

· Colorado: B Company, 5th Battalion, 19th Special Forces
· Virginia: B Company, 3rd Battalion, 20th Special Forces
· West Virginia: C Company, 2nd Battalion, 19th Special Forces
· Mississippi: 114th Military Police Company
· California: 49th Military Police Headquarters and Headquarters
· Maryland: 200th Military Police Company

These units were deployed to help perform a variety of critical domestic and overseas mission operations, including search and destroy missions in Afghanistan against Taliban and al Qaeda forces, guard duty for al Qaeda prisoners in Cuba, and providing security at the Pentagon shortly after the September 11, 2001, terrorist attacks. For the six units we audited, we found significant pay problems involving over $1 million in errors. These problems consisted of underpayments, overpayments, and late payments that occurred during all three phases of Army Guard mobilization to active duty. For the 18-month period from October 1, 2001, through March 31, 2003, we identified overpayments, underpayments, and late payments at the six case study units estimated at $691,000, $67,000, and $245,000, respectively. In addition, for one unit, these pay problems resulted in largely erroneous debts totaling $1.6 million. Overall, we found that 450 of the 481 soldiers (94 percent) from our case study units had at least one pay problem associated with their mobilization to active duty. Table 1 shows the number of soldiers at our case study units with at least one pay problem during each of the three phases of active duty mobilization. Some of the pay problems we identified included the following. DOD billed 34 soldiers in a Colorado National Guard Special Forces unit an average of $48,000 each in payroll-related debt, most of which was erroneous. 
While we first notified DOD of these issues in April and sent a follow-up letter in June 2003, the largely erroneous total debt for these soldiers of about $1.6 million remained unresolved at the end of our audit in September 2003. As a result of confusion over responsibility for entering promotion-related transactions associated with a Colorado soldier’s promotion, the soldier’s spouse had to obtain a grant from the Colorado National Guard to pay bills while her husband was in Afghanistan. Some soldiers did not receive payments for up to 6 months after mobilization and others still had not received some of their active duty pays by the conclusion of our audit. Ninety-one of 100 members of a Mississippi National Guard military police unit deployed to Guantanamo Bay, Cuba, did not receive the correct amount of Hardship Duty Pay. One soldier from the Mississippi unit was paid $9,400 in active duty pay during the 3 months following an early discharge for drug-related charges. Forty-eight of 51 soldiers in a California National Guard military police unit received late payments because the unit armory did not have a copy machine available to make copies of needed pay-related documents. Four Virginia Special Forces soldiers injured in Afghanistan and unable to resume their civilian jobs experienced problems in receiving entitled active duty pays and related health care. Pays for 13 soldiers continued for 6 weeks after early release from active duty. Eighty-eight soldiers were mistakenly paid for two types of hardship duty pay. In some cases, the problems we identified may have distracted these professional soldiers from mission requirements, as they spent considerable time and effort while deployed attempting to address these issues. Further, these problems may adversely affect the Army’s ability to retain these valuable personnel. 
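The headline figures above are internally consistent. As a quick illustrative cross-check (our arithmetic, not part of the audit, using only the numbers reported above):

```python
# Illustrative cross-check of figures reported above (our arithmetic, not GAO's).

# 450 of 481 soldiers had at least one pay problem -> reported as 94 percent.
share = 450 / 481
print(f"{share:.1%}")  # 93.6%, which rounds to the reported 94 percent

# 34 Colorado soldiers were each billed an average of about $48,000,
# consistent with the reported total debt of about $1.6 million.
total_debt = 34 * 48_000
print(f"${total_debt:,}")  # $1,632,000
```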
Our limited review of the pay experiences of the soldiers in the Colorado Army Guard’s 220th Military Police Company, which was mobilized to active duty in January 2003, sent to Kuwait in February 2003, and deployed to Iraq on military convoy security and highway patrol duties in April 2003, indicated that some of the same types of pay problems that we found in our six case study units continued to occur. Of the 152 soldiers mobilized in this unit, our review of available records identified 54 soldiers who were either overpaid, underpaid, or received entitled active duty pays and allowances over 30 days late, or for whom erroneous pay-related debts were created. We found that these pay problems could be attributed to control breakdowns similar to those we found at our case study units, including pay system input errors associated with amended orders, delays and errors in coding pay and allowance transactions, and slow customer service response. For example, available documentation and interviews indicate that while several soldiers submitted required supporting documentation to start certain pays and allowances at the time of their initial mobilization in January 2003, over 20 soldiers were still not receiving these pays in August 2003. This unit remained deployed in Iraq as of January 2004. Deficiencies in three key areas—process, human capital, and systems— were at the heart of the pay problems we identified. Processes were not well understood or consistently applied and were outdated in several instances. Insufficient resources, inadequate training, and poor customer service impaired the human capital operations in this area. Further, the automated systems supporting pays to mobilized Army Guard soldiers were ineffective because they were (1) not integrated and (2) constrained by limited processing capabilities and ineffective system edits. 
A substantial number of payment errors we found were caused, at least in part, by unclear procedural requirements for processing active duty pay and allowance entitlements to mobilized Army Guard soldiers. Complex, cumbersome processes, developed in piecemeal fashion over a number of years, provide numerous opportunities for control breakdowns. The DOD Financial Management Regulation guidance on pay and allowance entitlements alone covered 65 chapters. Procedural requirements, particularly in light of the numerous organizations issuing guidance applicable to this area, and potentially hundreds of organizations and thousands of personnel involved in implementing this guidance, were not well understood or consistently applied with respect to determining (1) the actions required to make timely, accurate active duty pays to mobilized Army Guard soldiers and (2) the component responsible, among Army Guard, active Army, and DFAS, for taking the required actions. For example, within the Army Guard, 54 state-level personnel offices and another 54 state-level pay offices—United States Property and Fiscal Offices (USPFOs)—are integrally involved in the process to pay mobilized Army Guard soldiers. Further, we found instances in which existing guidance was out of date—some of which still reflected practices in place in 1991 during Operation Desert Storm. Unclear procedural requirements for processing active duty pays contributed to erroneous and late pay and allowances to mobilized Army Guard soldiers. We found existing policies and procedural guidance were unclear with respect to amending active duty orders, stopping active duty pays for early returning soldiers, and extending active duty pays to injured soldiers. At two of our case study locations, military pay technicians using vague guidance made errors in amending existing orders. One of these errors resulted in 34 soldiers being billed a largely erroneous total debt of about $1.6 million. 
Procedural guidance was not clear regarding how to carry out assigned responsibilities for soldiers returning from active duty earlier than their unit. DFAS-IN guidance provides only that “the supporting USPFO will be responsible for validating the status of any soldier who does not return to a demobilized status with a unit.” The guidance did not state how the USPFO should be informed of soldiers not returning with their unit, or what means the USPFO should use to validate the status of any such soldiers. One USPFO informed us that they became aware that a soldier had returned early from a deployment when the soldier appeared at a weekend drill while his unit was still deployed. In four of six case study units, we found instances in which Army Guard soldiers’ active duty pays were not stopped at the end of their active duty tour when they were released from active duty earlier than their units. One Mississippi Army Guard soldier was paid $9,400 in active duty pay during the 3 months following an early discharge for drug-related offenses. We also found a lack of specific procedures to ensure timely processing of active duty medical extensions for injured Army Guard soldiers. Even though Army regulations provide that Army Guard soldiers with active duty medical extension status are entitled to continue to receive active duty pays, allowances, and medical benefits, we found that four soldiers in that status from B Company, 3rd Battalion, Virginia 20th Special Forces experienced significant pay problems, as well as related problems in obtaining needed medical services to treat injuries or illnesses incurred while on active duty, in part because implementing procedures in this area were not clearly defined. 
Individual Case Illustration: Unclear Regulations for Active Duty Medical Extension

Four soldiers who were injured while mobilized in Afghanistan for Operation Enduring Freedom told us that customer service was poor and no one was really looking after their interest or even cared about them. These problems resulted in numerous personal and financial difficulties for these soldiers.

· “Not having this resolved means that my family has had to make greater sacrifices and it leaves them in an unstable environment. This has caused great stress on my family that may lead to divorce.”

· “My orders ran out while awaiting surgery and the care center tried to deny me care. My savings account was reduced to nearly 0 because I was also not getting paid while I waited. I called the Inspector General at Walter Reed and my congressman. My orders were finally cut. In the end, I was discharged 2 weeks before my care should have been completed because the second amendment to my orders never came and I couldn’t afford to wait for them before I went back to work. The whole mess was blamed on the ‘state’ and nothing was ever done to fix it.”

· One sergeant was required to stay at Womack, the medical facility at Fort Bragg, North Carolina, while on medical extension. His home was in New Jersey. He had not been home for about 20 months, since his call to active duty. While he was recovering from his injuries, his wife was experiencing a high-risk pregnancy and depended upon her husband’s medical coverage, which was available while he remained in active duty status. Even though she lived in New Jersey, she scheduled her medical appointments near Fort Bragg to be with her husband. The sergeant submitted multiple requests to extend his active duty medical extension status because the paperwork kept getting lost. Lapses in obtaining approvals for continued active duty medical extension status caused the sergeant’s military medical benefits and his active duty pay to be stopped several times. 
He told us that because of gaps in his medical extension orders, he was denied medical coverage, resulting in three delays in scheduling a surgery. He also told us he received medical bills associated with his wife’s hospitalization for the delivery of their premature baby as a result of these gaps in coverage. We also found that existing policies and procedures were vague with respect to organizational responsibilities. Confusion centered principally on the lack of clear guidance with respect to responsibility and accountability for Army Guard personnel as they move from state control to federal control and back again. To be effective, current processes rely on close coordination and communication between state (Army Guard unit and state-level command organizations) and federal (active Army finance locations at mobilization/demobilization stations and at area servicing finance offices) organizations. However, we found a significant number of instances in which critical coordination requirements were not clearly defined. For example, at one of our case study locations, we found that, in part because of confusion over responsibility for starting location-based pays, a soldier was required to carry out a dangerous multiday mission to fix these pays.

Individual Case Illustration: Difficulty in Starting In-Theatre Pays

A sergeant with the West Virginia National Guard Special Forces unit was stationed in Uzbekistan with the rest of his unit, which was experiencing numerous pay problems. The sergeant told us that the local finance office in Uzbekistan did not have the systems up and ready or personnel available who were familiar with DJMS-RC. According to the sergeant, the active Army finance personnel were only taking care of the active Army soldiers’ pay issues. 
When pay technicians at the West Virginia USPFO attempted to help take care of some of the West Virginia National Guard soldiers’ pay problems, they were told by personnel at DFAS-Indianapolis not to get involved because the active Army finance offices had primary responsibility for correcting the unit’s pay issues. Eventually, the sergeant was ordered to travel to the finance office at Camp Doha, Kuwait, to get its assistance in fixing the pay problems. As illustrated in the map below, this trip, during which a soldier had to set aside his in-theatre duties to attempt to resolve Army Guard pay issues, proved to be not only a major inconvenience to the sergeant, but also life-threatening. At Camp Doha (an established finance office), a reserve pay finance unit had been sent from the United States to deal with the reserve component soldiers’ pay issues. The sergeant left Uzbekistan for the 4-day trip to Kuwait. He first flew from Uzbekistan to Oman in a C-130 ambulatory aircraft (carrying wounded soldiers). From Oman, he flew to Masirah Island. From Masirah Island he flew to Kuwait International Airport, and from the airport he had a 45-minute drive to Camp Doha. The total travel time was 16 hours. The sergeant delivered a box of supporting documents used to input data into the system. He worked with the finance office personnel at Camp Doha to enter the pertinent data on each member of his battalion into DJMS-RC. After 2 days working at Camp Doha, the sergeant returned to the Kuwait International Airport, flew to Camp Snoopy in Qatar, and from there to Oman. On his flight between Oman and Uzbekistan, the sergeant’s plane took enemy fire and was forced to return to Oman. No injuries were reported. The next day, he left Oman and returned safely to Uzbekistan.

[Map: the sergeant’s route from Uzbekistan to Camp Doha, Kuwait, via Oman and Masirah Island, and his return via Camp Snoopy, Qatar, and Oman.]

We found several instances in which existing DOD and Army regulations and guidance in the pay and allowance area were outdated and conflicted with more current legislation and DOD regulations. Some existing guidance reflected pay policies and procedures dating back to Operations Desert Shield and Desert Storm in 1991. While we were able to associate pay problems with only one of these outdated requirements, there is a risk that they may also have caused as yet unidentified pay problems. Further, having out-of-date requirements in current regulations may contribute to confusion and customer service issues. With respect to human capital, we found weaknesses including (1) insufficient resources allocated to pay processing, (2) inadequate training related to existing policies and procedures, and (3) poor customer service. The lack of sufficient numbers of well-trained, competent military pay professionals can undermine the effectiveness of even a world-class integrated pay and personnel system. A sufficient number of well-trained military pay staff is particularly crucial given the extensive, cumbersome, and labor-intensive process requirements that have evolved to support active duty pay to Army Guard soldiers. 
GAO’s Standards for Internal Control in the Federal Government state that management should take steps to ensure that its organization has the appropriate number of employees, and that appropriate human capital practices, including hiring, training, and retention, are in place and effectively operating. Our audit identified a lack of knowledgeable personnel dedicated to entering and processing active duty pays and allowances to mobilized Army Guard soldiers. As discussed previously, both active Army and Army Guard military pay personnel play key roles in this area. Army Guard operating procedures provide that the primary responsibility for administering mobilized Army Guard soldiers’ pay rests with the 54 USPFOs. These USPFOs are responsible for processing pay for drilling reservists along with the additional surge of processing required for initiating active duty pays for mobilized soldiers. Our audit work identified concerns that USPFO military pay sections were operating at less than authorized staffing levels and faced recruiting and retention challenges because the positions were at a lower pay grade level. In addition, few of the military pay technicians on board at the six locations we audited had received formal training on pay eligibility and pay processing requirements for mobilized Army Guard personnel. Although the Army and DFAS have established an agreement that in part seeks to ensure that resources are available to provide appropriately skilled pay personnel at mobilization stations to support surge processing, no such contingency staffing plan exists for the USPFOs. As discussed previously, pay problems at the case study units were caused in part by USPFO military pay sections attempting to process large numbers of pay transactions without sufficient numbers of knowledgeable personnel. Lacking sufficient numbers of personnel undermines the ability of the USPFO pay functions to carry out established control procedures. 
For example, our audits at the six case study units showed that, for the most part, proposed pay transactions were not independently reviewed as required by DJMS-RC operating procedures before they were submitted for processing. USPFO officials told us that because of the limited number of available pay technicians, this requirement was often not followed. For example, one Chief of Payroll told us that because they were understaffed, the current staff worked 12 to 14 hours a day and still had backlogs of pay start transactions to be processed. We identified instances in which the personnel at military pay offices at both the USPFOs and the active Army finance offices did not appear to be knowledgeable about the various aspects of the extensive pay eligibility or payroll processing requirements. There are no DOD or Army requirements for military pay personnel to receive training on pay entitlements and processing requirements associated with mobilized Army Guard soldiers or for monitoring the extent to which personnel have taken either of the recently established training courses in the area. Such training is critical given that military pay personnel must be knowledgeable with respect to the existing extensive and complex pay eligibility and processing requirements. We also found that such training is particularly important for active Army pay personnel who may lack knowledge in the unique procedures and pay transaction entry requirements to pay Army Guard soldiers. As a result, we identified numerous instances in which military pay technicians at both the USPFOs and active Army finance office locations made data coding errors when entering transaction codes into the pay systems. Correcting these erroneous transactions required additional labor-intensive research and data entry by other more skilled pay technicians. 
While the Army Guard began offering training for their military pay technicians in fiscal year 2002, we found that there was no overall monitoring of training the Army Guard pay personnel had taken and no requirement for USPFO pay technicians to attend these training courses. At several of the case study locations we audited, we found that Army Guard pay technicians relied primarily on on-the-job-training and phone calls to the Army Guard Financial Services Center in Indianapolis or to other military pay technicians at other locations to determine how to process active duty pays. In addition, unit commanders have significant responsibilities for establishing and maintaining the accuracy of soldiers’ pay records. U.S. Army Forces Command Regulation 500-3-3, Reserve Component Unit Commander’s Handbook (July 15, 1999), requires unit commanders to (1) annually review and update pay records for all soldiers under their command as part of an annual soldier readiness review and (2) obtain and submit supporting documentation needed to start entitled active duty pay and allowances based on mobilization orders. However, we saw little evidence that commanders for our case study units carried out these requirements. We were told that this was primarily because unit commanders have many administrative duties and without additional training on the importance of these actions, they may not receive sufficient priority attention. The lack of unit commander training on the importance of these requirements may have contributed to pay problems we identified at our case study units. For example, at our Virginia case study location, we found that when the unit was first mobilized, USPFO pay personnel were required to spend considerable time and effort to correct hundreds of errors in the unit’s pay records dating back to 1996. Such errors could have been identified and corrected during the preceding years’ readiness reviews. 
Further, we observed many cases in which active duty pays were not started until more than 30 days after the entitled start date because soldiers did not submit the paperwork necessary to start these pays. We found indications that many Army Guard soldiers were displeased with the customer service they received. None of the DOD, Army, or Army Guard policies and procedures we examined addressed the level or quality of customer service that mobilized Army Guard soldiers should receive concerning questions or problems with their active duty pays. We found that not all Army Guard soldiers and their families were informed at the beginning of their mobilization of the pays and allowances they should receive while on active duty. This information is critical to enable soldiers to determine if they were not receiving such pays and therefore require customer service. We also found that the documentation provided to Army Guard soldiers—primarily in the form of leave and earnings statements— concerning the pays and allowances they received did not facilitate customer service. Consistent with the confusion we found among Army Guard and active Army finance components concerning responsibility for processing pay transactions for mobilized Army Guard soldiers, we found indications that the soldiers themselves were similarly confused. Many of the complaints we identified concerned confusion over whether mobilized Army Guard personnel should be serviced by the USPFO because they were Army Guard soldiers or by the active Army because they were mobilized to federal service. Individual Case Illustration: Poor Customer Service One soldier told us that he submitted documentation on three separate occasions to support the housing allowance he should have received as of the beginning of his October 2001 mobilization. Each time he was told to resubmit the documentation because his previously submitted documents were lost. 
Subsequently, while he was deployed, he made additional repeated inquiries as to when he would receive his housing allowance pay. He was told that it would be taken care of when he returned from his deployment. However, when he returned from his deployment, he was told that he should have taken care of this issue while he was deployed and that it was now too late to receive this allowance. Data collected from Army Guard units mobilized to active duty indicated that some members of the units had concerns with the pay support customer service they received associated with their mobilization—particularly with respect to pay issues associated with their demobilization. Specifically, of the 43 soldiers responding to our question on satisfaction with customer support at mobilization, 10 indicated satisfaction, while 15 reported dissatisfaction. Similarly, of the 45 soldiers responding to our question on customer support following demobilization, 5 indicated satisfaction while 29 indicated dissatisfaction. Of the soldiers who provided written comments about customer service, none offered positive comments, and several described the service they received in terms such as “non-existent,” “hostile,” or “poor.” A company commander for one of our case study units characterized the customer service his unit received at initial mobilization as time-consuming and frustrating. In addition, procedures used to notify soldiers of large payroll-related debts did not facilitate customer service. Under current procedures, if a soldier is determined to owe the government money while on active duty, he is assessed a debt and informed of this assessment with a notation for an “Unpaid Debt Balance” in the remarks section of his leave and earnings statement. One such assessment showing a $39,489.28 debt is shown in figure 1. 
Several systems issues were significant factors impeding accurate and timely payroll payments to mobilized Army Guard soldiers, including the lack of an integrated or effectively interfaced pay system with both the personnel and order-writing systems; limitations in DJMS-RC processing capabilities; and ineffective system edits for large payments and debts. Our systems findings were consistent with issues raised by DOD in its June 2002 report to the Congress on its efforts to implement an integrated military pay and personnel system. Specifically, DOD’s report acknowledged that major deficiencies with the delivery of military personnel and pay services were the direct result of the inability of a myriad of current systems with multiple, complex interfaces to fully support current business process requirements. DOD has a significant system enhancement project underway, but it is likely that the department will operate with many of its existing system constraints for a number of years. Figure 2 provides an overview of the five systems currently involved in processing Army Guard pay and personnel information. The five key DOD systems (see fig. 2) involved in authorizing, entering, processing, and paying mobilized Army Guard soldiers were not integrated. Lacking either an integrated or effectively interfaced set of personnel and pay systems, DOD must rely on manual entry of data from the same source documents into multiple systems. This error-prone, labor-intensive manual data entry caused various pay problems—particularly late payments. In our case studies, we found instances in which mobilization order data that were entered into SIDPERS were either not entered into DJMS-RC for several months after the personnel action or were entered inconsistently. Consequently, these soldiers either received active duty pays they were not entitled to receive—some for several months—or did not timely receive active duty pays to which they were entitled. 
Individual Case Illustration: Overpayment due to Lack of Integrated Pay and Personnel Systems A soldier with the Mississippi Army National Guard was mobilized in January 2002 with his unit and traveled to the mobilization station at Fort Campbell. The unit stayed at Fort Campbell to perform post security duties until June 2002. On June 14, 2002, the E-4 specialist received a “general” discharge order from the personnel office at Fort Campbell for a drug-related offense. However, he continued to receive active duty pay, totaling approximately $9,400, until September 2002. Although the discharge information was promptly entered into the soldier’s personnel records, it was not entered into the pay system for almost 4 months. This problem was caused by weaknesses in the processes designed to work around the lack of integrated pay and personnel systems. Further, the problem was not detected because reconciliations of pay and personnel data were not performed timely. Specifically, it was not until over 3 months after the soldier’s discharge, through its September 2002 end-of-month reconciliation, that the Mississippi Army National Guard USPFO identified the overpayment and took action on October 2, 2002, to stop the individual’s pay. However, collection efforts on the $9,400 overpayment did not begin until July 2003, when we pointed out this situation to USPFO officials. 
DOD has acknowledged that DJMS-RC was not designed to process payroll payments to mobilized Army Guard soldiers for extended periods of active duty. Consequently, it is not surprising that we found a number of “workarounds”—procedures intended to compensate for existing DJMS-RC processing limitations with respect to Army Guard active duty pays. Such manual workarounds are inefficient and create additional labor-intensive, error-prone transaction processing. Because of limited DJMS-RC processing capabilities, the Army Guard USPFO and in-theatre active Army area servicing finance office pay technicians are required to manually enter transactions for nonautomated pay and allowances every month. DJMS-RC was originally designed to process payroll payments to Army Reserve and Army Guard personnel on weekend drills, or on short periods of annual active duty (periods of less than 30 days in duration) or for training. With Army Guard personnel now being paid from DJMS-RC for extended periods of active duty (as long as 2 years at a time), DFAS officials told us that the COBOL/mainframe-based system was now being stretched to the limits of its functionality. 
In several of the case study units we audited, we found a number of instances in which soldiers were underpaid entitled pays that must be entered manually each month (such as foreign language proficiency, special duty assignment, or hardship duty pays) because pay technicians did not make the required monthly transaction inputs. In addition, we found a significant number of soldiers were overpaid when they were demobilized from active duty before the stop date specified in their original mobilization orders. This occurred because pay technicians did not update the stop date in DJMS-RC necessary to terminate the automated active duty pays when soldiers left active duty early. For example, the military finance office in Kuwait, responsible for paying Virginia 20th Special Forces soldiers in the fall of 2002, did not stop hostile fire and hardship duty pays as required when these soldiers left Afghanistan in October 2002. We found that 55 of 64 soldiers eligible for hostile fire pay were overpaid for at least 1 month beyond their departure from Afghanistan. Further, these month-to-month pays and allowances were not separately itemized on the soldiers’ leave and earnings statements in a user-friendly format. Instead, many of these pays appeared as lump sum payments under “other credits,” often with little explanation. As a result, Army Guard soldiers could not readily use their leave and earnings statements to determine whether they had received all entitled active duty pays and allowances. 
As shown in the example leave and earnings statement extract included in figure 3, an Army Guard soldier who received a series of corrections to special duty assignment pay along with his or her current special duty assignment payment of $110 is likely to have difficulty discerning whether he or she received all and only entitled active duty pays and allowances. In yet another example, one sergeant, apparently having difficulty deciphering his leave and earnings statement, wrote a letter to a fellow service member asking, “Are they really fixing pay issues or are they putting them off till we return? If they are waiting, then what happens to those who (god forbid) don’t make it back?” This sergeant was killed in action in Afghanistan on April 15, 2002, before he knew if his pay problems were resolved. While DJMS-RC has several effective edits to prevent certain overpayments, it lacks effective edits to reject large proposed net pays over $4,000 at midmonth and over $7,000 at end of month before their final processing. We found several instances in our case studies where soldiers received large lump sum payments, possibly related to previous underpayments or other pay errors, with no explanation. Further, the lack of preventive controls over large payments poses an increased risk of fraudulent payments. Individual Case Illustration: System Edits Do Not Prevent Large Payments and Debts A sergeant with the Colorado Army National Guard, Special Forces, encountered numerous severe pay problems associated with his mobilization to active duty, including his deployment to Afghanistan in support of Operation Enduring Freedom. The sergeant’s active duty pay and other pay and allowances should have been stopped on December 4, 2002, when he was released from active duty. 
However, because the sergeant’s mobilization orders called him to active duty for 730 days rather than the 365 days he was actually mobilized, and because the Army area servicing finance office at the demobilization station, Fort Campbell, did not enter the release from active duty date into DJMS-RC, the sergeant continued to improperly receive payments as if he were still on active duty, totaling over $8,000 over the 2 and a half months after he was released. The sergeant was one of 34 soldiers in the company whose pay continued after their release from active duty. In an attempt to stop the erroneous payments, in February 2003, pay personnel at the Colorado USPFO created a transaction to cancel the tour instead of processing an adjustment to amend the stop date consistent with the date on the Release from Active Duty Order. When this occurred, DJMS-RC automatically processed a reversal of 11 months of the sergeant’s pay and allowances that he had earned while mobilized from March 1, 2002, through February 4, 2003, which created a debt of $39,699 on the soldier’s pay record; however, the reversal should have covered only December 5, 2002, through February 4, 2003. In April 2003, at our request, DFAS-Indianapolis personnel intervened in an attempt to correct the large debt and to determine the actual amount the sergeant owed. In May 2003, DFAS-Indianapolis erroneously processed a payment transaction instead of a debt correction transaction in DJMS-RC. This created a payment of $20,111, which was electronically deposited to the sergeant’s bank account without explanation, while a debt of $30,454 still appeared on his Leave and Earnings Statement. About 9 months after his demobilization, the sergeant’s unpaid debt balance was reportedly $26,559, but the actual amount of his debt had not yet been determined as of September 2003. 
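The kind of preventive edit described above, holding unusually large proposed net pays for approval before final processing, can be illustrated with a minimal sketch. This is hypothetical illustration only, not DJMS-RC logic (DJMS-RC is a COBOL/mainframe system, and these function and variable names are invented); only the $4,000 midmonth and $7,000 end-of-month thresholds come from the testimony.

```python
# Illustrative sketch of a preventive payment edit. The thresholds are the
# ones cited in the testimony; everything else here is hypothetical.

MIDMONTH_LIMIT = 4_000.00       # proposed net pay limit at midmonth
END_OF_MONTH_LIMIT = 7_000.00   # proposed net pay limit at end of month

def requires_manual_approval(net_pay: float, midmonth: bool) -> bool:
    """Return True if a proposed net pay should be held for supervisory
    review instead of being processed automatically."""
    limit = MIDMONTH_LIMIT if midmonth else END_OF_MONTH_LIMIT
    return net_pay > limit
```

Under such an edit, an unexplained end-of-month deposit like the $20,111 payment described above would have been flagged for approval rather than released automatically.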
DOD has a system enhancement project underway for which one of the major expected benefits is the improvement of military pay accuracy and timeliness. However, the effort to replace over 80 legacy personnel, pay, training, and manpower systems (including DJMS-RC) has been underway for over 5 years and DOD has encountered challenges fielding the system. In the nearer term, the department reported that it expected to field a system to replace the current DFAS system used to process pays to mobilized Army Guard soldiers by March 2005. However, given that the pay system is only one of several non-integrated systems the department currently relies on to authorize and pay mobilized Army Guard soldiers, it is likely that the department will continue to operate with many of the existing system constraints for at least several more years. While it is likely that DOD will be required to rely on existing systems for a number of years, a complete and lasting solution to the pay problems we identified will only be achieved through a complete reengineering, not only of the automated systems, but also of the supporting processes and human capital practices in this area. However, our related report (GAO-04-89) detailed immediate actions that can be taken in these areas to improve the timeliness and accuracy of pay and allowance payments to activated Army Guard soldiers. The need for such actions is increasingly imperative in light of the current extended deployment of Army Guard soldiers in their crucial role in Operation Iraqi Freedom and anticipated additional mobilizations in support of this operation. To help ensure that the Army Guard can continue to successfully fulfill its vital role in our national defense, immediate steps are needed to at least mitigate the most serious problems we identified. 
Accordingly, we made the following short-term recommendations to the Secretary of Defense to address the issues we identified with respect to the existing processes, human capital, and automated systems relied on to pay activated Army Guard personnel:

- Establish a unified set of policies and procedures for all Army Guard, Army, and DFAS personnel to follow for ensuring active duty pays for Army Guard personnel mobilized to active duty.
- Establish performance measures for obtaining supporting documentation and processing pay transactions (for example, no more than 5 days would seem reasonable).
- Establish who is accountable for stopping active duty pays for soldiers who return home earlier than their units.
- Clarify the policies and procedures for how to properly amend active duty orders, including medical extensions.
- Require Army Guard commands and unit commanders to carry out complete monthly pay and personnel records reconciliations and take necessary actions to correct any pay and personnel record mismatches found each month.
- Update policies and procedures to reflect current legal and DOD administrative requirements with respect to active duty pays and allowances and transaction processing requirements for mobilized Army Guard soldiers.
- Consider expanding the scope of the existing memorandum of understanding between DFAS and the Army concerning the provision of resources to support surge processing at mobilization and demobilization sites to include providing additional resources to support surge processing for pay start and stop transaction requirements at Army Guard home stations during initial soldier readiness programs.
- Determine whether issues concerning resource allocations for the military pay operations identified at our case study units exist at all 54 USPFOs, and if so, take appropriate actions to address these issues.
- Determine whether issues concerning relatively low-graded military pay technicians identified at our case study units exist at all 54 USPFOs, and if so, take appropriate actions to address these issues.
- Modify existing training policies and procedures to require all USPFO and active Army pay and finance personnel responsible for entering pay transactions for mobilized Army Guard soldiers to receive appropriate training upon assuming such duties.
- Require unit commanders to receive training on the importance of adhering to requirements to conduct annual pay support documentation reviews and carry out monthly reconciliations.
- Establish an ongoing mechanism to monitor the quality and completion of training for both pay and finance personnel and unit commanders.
- Identify and evaluate options for improving customer service provided to mobilized Army Guard soldiers by providing improved procedures for informing soldiers of their pay and allowance entitlements throughout their active duty mobilizations.
- Identify and evaluate options for improving customer service provided to mobilized Army Guard soldiers to ensure a single, well-advertised source for soldiers and their families to access for customer service for any pay problems.
- Review the pay problems we identified at our six case study units to identify and resolve any outstanding pay issues for the affected soldiers.
- Evaluate the feasibility of using the personnel-to-pay interface as a means to proactively alert pay personnel of actions needed to start entitled active duty pays and allowances.
- Evaluate the feasibility of automating some or all of the current manual monthly pays, including special duty assignment pay, foreign language proficiency pay, hardship duty pay, and HALO pay.
- Evaluate the feasibility of eliminating the use of the “other credits” for processing hardship duty (designated areas), HALO pay, and special duty assignment pay, and instead establish a separate component of pay for each type of pay.
- Evaluate the feasibility of using the JUSTIS warning screen to help eliminate inadvertent omissions of required monthly manual pay inputs.
- Evaluate the feasibility of redesigning Leave and Earnings Statements to provide soldiers with a clear explanation of all pay and allowances received so that they can readily determine if they received all and only entitled pays.
- Evaluate the feasibility of establishing an edit check and requiring approval before processing any debt assessments above a specified dollar amount.
- Evaluate the feasibility of establishing an edit check and requiring approval before processing any payments above a specified dollar amount.

With regard to a complete and lasting solution to the pay problems we identified, our related report included the following long-term recommendations:

- As part of the effort currently under way to reform DOD’s pay and personnel systems—referred to as DIMHRS—incorporate a complete understanding of the Army Guard pay problems as documented in this report into the requirements development for this system.
- In developing DIMHRS, consider a complete reengineering of the processes and controls and ensure that this reengineering effort deals not only with the systems aspect of the problems we identified, but also with the human capital and process aspects.

The extensive problems we identified at the case study units vividly demonstrate that the controls currently relied on to pay mobilized Army Guard personnel are not working and cannot provide reasonable assurance that such pays are accurate or timely. The personal toll that these pay problems have had on mobilized soldiers and their families cannot be readily measured, but at least with two of our case study units there are already indications that these pay problems have begun to have an adverse effect on reenlistment and retention. 
It is not surprising that cumbersome and complex processes and ineffective human capital strategies, combined with the use of a system that was not designed to handle the intricacies of active duty pay and allowances, would result in significant pay problems. To its credit, DOD concurred with the recommendations included in our companion report and outlined some actions already taken, others that are underway, and further planned actions with respect to our recommendations. We did not assess the completeness and adequacy of DOD’s actions directed at improving controls over pays to mobilized Army Guard soldiers. However, pays to mobilized Army Reserve soldiers rely on many of the same processes and automated systems used to pay mobilized Army Guard soldiers. At your request, we will be reviewing the pay experiences of mobilized Army Reserve soldiers, and we will be assessing the effectiveness of any relevant DOD actions taken as part of that review. Finally, I commend the Chairman and Vice Chairman for holding an oversight hearing on this important issue. Your Committee’s continuing interest and diligence in overseeing efforts to effectively and efficiently support our Army Guard and Reserve forces will be essential in bringing about comprehensive and lasting improvements to many decades-old, entrenched problems. For example, in addition to our ongoing review of the pay experiences of mobilized Army Reserve soldiers, we now have related engagements ongoing that you requested concerning controls over pays and related medical benefits for mobilized Army Guard soldiers who elect to have their active duty tours extended to address injuries or illnesses incurred while on active duty, controls over travel reimbursements to mobilized Army Guard soldiers, utilization of Army Guard forces since September 11, 2001, and the impact of deployments on DOD’s ability to carry out homeland security missions. 
We are committed to continuing to work with you and DOD to identify and monitor actions needed to bring about comprehensive and lasting solutions to long-standing problems in its business and financial management operations. Mr. Chairman, this concludes my statement. I would be pleased to answer any questions you or other members of the Committee may have at this time. For further information about this testimony, please contact Gregory D. Kutz at (202) 512-9095 or [email protected]. Individuals making key contributions to this testimony include Paul S. Begnaud, Amy C. Chang, Mary Ellen Chervenic, Francine M. DelVecchio, Dennis B. Fauber, Geoffrey B. Frank, Jennifer L. Hall, Charles R. Hodge, Julia C. Matta, Jonathan T. Meyer, Sheila D. Miller, John J. Ryan, and Patrick S. Tobo. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
In light of the recent mobilizations associated with the war on terrorism, GAO was asked to determine if controls used to pay mobilized Army Guard personnel provided assurance that such pays were accurate and timely. This testimony focuses on the pay experiences of Army Guard soldiers at selected case study units and deficiencies with respect to controls over processes, human capital, and automated systems. The existing processes and controls used to provide pay and allowances to mobilized Army Guard personnel are so cumbersome and complex that neither DOD nor, more importantly, the mobilized Army Guard soldiers could be reasonably assured of timely and accurate payroll payments. Weaknesses in these processes and controls resulted in over- and underpayments and late active duty payments and, in some cases, large erroneously assessed debts, to mobilized Army Guard personnel. The end result of these weaknesses is to severely constrain DOD’s ability to provide accurate and timely active duty pay to these personnel, many of whom were risking their lives in combat in Iraq and Afghanistan. In addition, these pay problems have had a profound impact on individual soldiers and their families and may adversely affect decisions to stay in the Army Guard. For example, many soldiers and their families were required to spend considerable time, sometimes while the soldiers were deployed in remote, hostile environments overseas, seeking corrections to active duty pays and allowances. The pay process, involving potentially hundreds of DOD, Army, and Army Guard organizations and thousands of personnel, was not well understood or consistently applied with respect to determining (1) the actions required to make timely, accurate pays to mobilized soldiers, and (2) the organization responsible for taking the required actions. 
With respect to human capital, we found weaknesses including (1) insufficient resources allocated to pay processing, (2) inadequate training related to existing policies and procedures, and (3) poor customer service. Several systems issues were also significant factors impeding accurate and timely payroll payments to mobilized Army Guard soldiers, including (1) nonintegrated systems, (2) limitations in system processing capabilities, and (3) ineffective system edits.
Connected vehicles offer services and features to consumers through wireless communication systems. Technologies, such as in-vehicle sensors and global positioning systems, generate data that are transmitted through two-way communication between a vehicle and a central computer system or a call center. As shown in figure 1, automakers use third parties to provide connectivity (e.g., enable a vehicle to transmit and receive voice and data communications) and typically contract with third parties to provide support for the services offered in connected vehicles (“connected vehicle services”). For example, automakers may contract with:

- telecommunication companies to connect a vehicle to the internet or a wireless network,
- telematics service providers to provide connected vehicle services by staffing call centers and processing data, and
- content providers to provide optional applications, similar to those available on a smartphone, that consumers can access through their vehicle’s console.

Connected vehicles can offer consumers a range of safety, security, and convenience services. For example, roadside assistance and automatic crash notification services allow for voice and data communication between a vehicle and a person at a call center. In providing these services, connected vehicles generate, transmit, and receive various types of data, such as a car’s location. In this report, we categorized data that are collected or that could be collected by connected vehicles into six categories as defined and described in figure 2. Personal information includes identifying information such as a consumer’s name, address, e-mail, or other information that directly links back to an individual, as well as other information that can be reasonably linked to a specific consumer, computer, or device. For example, in the connected vehicle context, data (e.g., GPS coordinates or airbag status) that can be linked to a vehicle owner are often treated as personal information. 
Personal information collected from vehicles varies in its sensitivity. As reported by FTC staff, some personal information, such as data on precise geolocation, is considered sensitive due to what it can reveal about someone (e.g., routines). There is no federal comprehensive privacy law governing the collection, use, and sale of personal information by private sector companies. Rather, federal statutes and regulations addressing privacy issues in the private sector are generally tailored to specific purposes, situations, types of information, or sectors or entities. For example, the Health Insurance Portability and Accountability Act governs the use and disclosure of an individual’s health information by certain entities. Federal law also does not require all companies to have a privacy policy or to notify consumers of their privacy practices. While no single federal agency oversees data privacy issues, FTC is a law enforcement agency with a mission to promote consumer protection and prevent business practices that are anticompetitive, deceptive, or unfair to consumers. The National Highway Traffic Safety Administration (NHTSA) within DOT is responsible for vehicle safety. Many organizations and governments have used the Fair Information Practice Principles (FIPPs) to guide their privacy practices. The Organisation for Economic Co-Operation and Development developed a version of the FIPPs—a set of internationally recognized principles for protecting the privacy and security of personal information—in 1980 that has been widely adopted and was updated in 2013. While the FIPPs are principles, not legal requirements, they provide a framework for balancing privacy protections with other interests. Like other industries, the automobile industry recently developed a set of privacy principles—the Consumer Privacy Protection Principles: Privacy Principles for Vehicle Technologies and Services (“Consumer Privacy Protection Principles”). 
These principles are a self-regulatory framework influenced by the FIPPs and were adopted by most automakers with vehicle sales in the United States. The Consumer Privacy Protection Principles went into effect January 2, 2016. Nearly all selected automakers offer connected vehicles or plan to offer them in the next 5 or more years. Specifically, 13 of the 16 automakers we interviewed sell new vehicles that met our definition of a connected vehicle—ones that come equipped with technologies and services that transmit and receive data wirelessly (see fig. 3). Four of these automakers currently provide connected services in all of their vehicles; most of the remaining automakers reported increases in their production of connected vehicles over the last 5 years. Of the 12 automakers we interviewed that do not currently offer connectivity in all of their new vehicles, all but 1 told us they plan to do so. Three plan to offer all connected vehicles in the next 3 to 4 years, and 8 plan to offer all connected vehicles in 5 or more years. Several automakers noted that consumer demand will influence when they will offer all connected vehicles. While most automakers offer connected vehicles and services, not all consumers with access to these services use them. Based on our interviews with automakers, the percentage of customers who use these services ranges from less than 50 percent up to 100 percent. Selected automakers differed in how they offer these services to consumers. For example, some automakers offer free trials for 3 months, 6 months, or 1 year, while other automakers offer these services at no additional cost. The 3 automakers that do not offer connected vehicles or connected vehicles services told us they do not collect any data from their vehicles. Our discussion below focuses on the 13 automakers that offer connected vehicles and services. 
Based on interviews with the 13 selected automakers that offer connected vehicles, the types of data they collect from those vehicles vary. All 13 reported collecting vehicle health and location data (see fig. 4). However, fewer automakers reported collecting driver behavior and infotainment data, such as music selections and mobile applications used. These 13 automakers did not report collecting data related to vehicle occupants’ health or personal communications. According to the 13 selected automakers that offer connected vehicles, they currently use data collected from connected vehicles to provide connected vehicle services, and some use the data for research and development and marketing. Connected Vehicle Services: According to all of these 13 automakers, collected data are used to provide services, such as automatic crash notification or roadside assistance, to customers. Providing these services typically requires location, vehicle health, and for some services, driver behavior data. For example, sensors in a connected vehicle may detect that an airbag deployed. The connected vehicle transmits this information to the automaker or the provider operating the automatic crash notification service. The provider uses a vehicle’s location, status of airbag deployment, rollover status, and other pertinent information to inform and request assistance from emergency responders. Research and Development: All but 1 of these 13 automakers told us they also use collected data for research and development, specifically to improve their vehicles’ safety and performance. For example, automakers may use vehicle health data (e.g., diagnostic trouble codes) to identify an issue with a certain model or component within their vehicles. Twelve of these 13 automakers also told us they use these data to improve connected vehicle services they offer. For example, these collected data allow them to see which services or features consumers actually use and how they use them. 
Marketing: Five of these 13 automakers reported using collected data to market products and services to consumers, for example, by using vehicle health data to target advertisements to specific consumers for specific vehicle service or maintenance offers.

These 13 selected automakers differed on who owns data collected from their connected vehicles. Specifically, 7 told us ownership of these data is legally unclear or they do not yet have a position. Of the remaining automakers, 3 said the vehicle owner owns the data, but the automaker has a license to use them; 2 said the automaker owns the data; and 1 said the automaker owns anonymized data and the customer owns personal data (e.g., data tied to a vehicle identification number). As we reported in 2016, data ownership is a potential challenge to achieving gains in transportation safety and efficiency. In the context of connected vehicles, data ownership can determine who or what entity controls access to the data and how they can be used.

All 13 selected automakers that offer connected vehicles said they typically do not share collected data with unaffiliated third parties. For example, none of these automakers reported sharing collected data with firms that collect and sell information (data brokers). When automakers do share collected data, they said they typically do so with explicit consumer consent, at the request of a consumer, or to comply with a valid court order. Specifically, all of these automakers said they would share collected data with law enforcement in response to a valid court order or in exigent circumstances. Seven automakers said they share collected data, specifically vehicle health data, with dealerships to aid in vehicle servicing. Two automakers reported sharing collected data with insurance companies to enable consumers to participate in insurance plans that base premiums on driving behavior.
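To make the automatic-crash-notification flow described earlier more concrete, the sketch below shows the kind of payload a connected vehicle might assemble and transmit to a service provider. This is a minimal, hypothetical illustration: the function name, field names, and message structure are our assumptions, not any automaker's or provider's actual telematics format.

```python
import json

def build_crash_notification(lat, lon, airbag_deployed, rollover):
    """Assemble the data a vehicle might transmit to a
    crash-notification provider after detecting a crash.
    (Hypothetical schema for illustration only.)"""
    return {
        "event": "crash_detected",
        "location": {"lat": lat, "lon": lon},  # precise geolocation (sensitive)
        "airbag_deployed": airbag_deployed,
        "rollover": rollover,
    }

# A vehicle sensor detects airbag deployment; the telematics unit
# packages location, airbag status, and rollover status for the provider.
payload = build_crash_notification(38.8977, -77.0365, True, False)
print(json.dumps(payload, indent=2))
```

As the report notes, the provider would use the location, airbag deployment status, rollover status, and other pertinent information to inform and request assistance from emergency responders; the sketch simply illustrates why providing such a service necessarily entails collecting precise location and vehicle health data.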
Several automakers reported sharing and using collected data that have been de-identified more widely than they do data that can be linked back to a vehicle or vehicle owner. For example, one automaker discussed sharing de-identified vehicle health data with university-based researchers to examine vehicle structural integrity after crashes. Other automakers mentioned sharing de-identified location data with traffic services to improve their accuracy. Some automakers we spoke with emphasized that their current use and sharing of data may change as the industry evolves and data collection expands.

As mentioned above, for the purposes of this report we identified six leading practices most relevant to connected vehicle data privacy, based on our analysis of the FIPPs and other policy frameworks (see table 1). To assess the extent to which selected automakers’ reported privacy policies reflected each leading practice, we identified multiple elements for each practice. We then reviewed selected automakers’ written privacy notice(s) and their responses to interview questions about their privacy practices. We did not evaluate the extent to which selected automakers follow their reported privacy policies. Although the leading practices are interrelated, we focused our assessments on how fully the automakers’ practices met each leading practice.

We found that most selected automakers’ reported privacy policies at least partially reflected each of the six identified leading practices. Although we saw some variation among automakers, they tended to reflect and not reflect the same leading practice elements. For example, most automakers reported limiting the sharing of data and using safeguards to protect data security. However, none of the automakers’ written notices were in plain language, and their reported data collection, use, and sharing practices generally were more limited than suggested in their notices.
Automakers reported obtaining consent before collecting data from vehicles, but they offered few options besides opting in and opting out of sharing data.

Transparency: All 13 selected automakers’ written privacy notices were readily accessible from their public websites, but based on our analysis, none of the notices was clearly written. As we have previously reported, FTC and others have recommended that privacy notices should be readily accessible, clearly written, and describe all the purposes for which personal data are collected and shared. All of the automakers’ notices discussed the types of data collected; the potential purposes for collecting the data, such as to provide the connected services; and some conditions when data might be shared with third parties. However, none of the notices was written in plain language, which could make them difficult for consumers to understand. In addition, most notices did not describe all of the types and purposes of the connected vehicle data that were being collected, but instead used broad language to describe this process. For example, of the 15 notices, only 2 included a list with all of the actual purposes for which the automaker collects data, and only 1 included a list with all of the types of personal data collected. Two notices clearly stated that the purposes identified for data collection were not exhaustive. Although the use of broad language is common for privacy notices and is not specific to the auto industry, it does not promote transparency.

Several automakers discussed their other efforts to increase transparency. For example, one automaker reported revising its website to offer a consumer-friendly privacy portal after signing onto the Consumer Privacy Protection Principles. Three other automakers told us that they display their privacy policies on in-vehicle displays. Two automakers told us that a recently issued consumer guide could help promote understanding about this issue.
In January 2017, the National Automobile Dealers Association and Future of Privacy Forum issued a consumer guide outlining types of vehicle data, practices governing their collection and use, and potential consumer options; the guide is to be available at auto dealerships. Focused Data Use: Most selected automakers reported limiting their data collection, use, retention and sharing, but their policies varied. Most written notices did not clearly identify data sharing and use practices. In interviews, all 13 selected automakers reported limiting the data collected from connected vehicles, with some (4 of 13) noting that they only collect the specific data they need to provide the consumer with services. With regard to data use, as discussed previously, all 13 automakers reported they use collected data to provide direct consumer services, such as roadside assistance, but some automakers (5 of 13) reported they use such data for marketing other services. Also, all 13 automakers told us that they use de-identified data when possible, such as when the data are being used for research and development purposes. Most automakers (12 of 13) also told us that they limit data retention, but policies varied. For example, of the 12 automakers that limit retention, 8 told us that their retention time frames depend on the type of data, while one of them specified that it retains all connected vehicle data for 6 years regardless of data type. Two automakers also told us that they are still developing their policies, so their retention time frames may change. With regard to data sharing, as mentioned previously, all 13 automakers told us that they do not typically share collected data with unaffiliated third parties. However, most automakers’ written privacy notices used vague language and did not consistently reflect their relatively limited data collection, use, retention, and sharing. 
For example, none of the notices stated that the automaker would not use data for reasons other than those listed in the policy, and only one notice specified the automaker’s data retention time frame. In another example, less than half of the notices (6 of 15) stated that data would not be shared with or sold to non-affiliated third parties, such as data brokers. Similarly, less than half of the notices (7 of 15) stated that location and driving behavior data would not be shared with any parties besides service providers without first obtaining the consumer’s consent. Only 2 notices included both of these statements about sharing.

Data Security: Selected automakers reported using various methods to safeguard data, including methods that we have reported could be applied to increase vehicle data security. As we reported in 2016, automakers can identify and mitigate cybersecurity vulnerabilities by using practices such as conducting risk assessments and by employing technological measures. In interviews, all 13 automakers reported using policy and technological measures to protect data, such as limiting data access to certain company staff, using firewalls and encryption, and using “penetration testing” and “code reviews.” In addition, 12 automakers told us that they participate in the Automotive Information Sharing and Analysis Center (Auto-ISAC), an industry-operated forum that includes automakers and parts suppliers. The center seeks to heighten awareness and increase security by allowing industry stakeholders to share threat information and potential mitigation strategies. Most automakers (9 of 13) also reported conducting privacy risk assessments, which would involve determining, among other things, the sensitivity of the collected data and the potential risks if the data were improperly lost, accessed, or disclosed.
In addition, almost all of the notices (14 of 15) explained safeguards used to protect data, and some (7 of 15) also included examples of industry standard practices used for data security. Data Access and Accuracy: Selected automakers reported offering consumers various methods to access their personal account information, but most of their notices were unclear about methods used to ensure data accuracy. The majority (9 of 13) of automakers told us that consumers can access and correct information, such as name and address, related to the driver or subscriber. For example, 5 of these automakers noted that such subscriber information can be accessed through websites or mobile applications. One other automaker told us that consumers can access information on the website or mobile application but must e-mail or call the automaker to correct it. For other types of vehicle data—such as location, vehicle health, and driver behavior—consumers’ access varied among automakers. For example, some (4 of 13) automakers reported that consumers can access their vehicle health data—such as tire pressure information—through websites or mobile applications. On the other hand, 2 automakers told us it is not possible for consumers to access any data other than subscription-related data. With regard to data accuracy, the majority (9 of 13) of automakers told us they take steps, such as validation tests and other quality control measures, to ensure the accuracy of data collected from connected vehicles. Although most (12 of 15) notices explained how to access and correct one’s data, only a few notices (4 of 15) discussed actions the company takes to ensure data accuracy. Individual Control: Selected automakers reported that they obtain explicit consent before collecting data and most seek consent again, but they offered few options to consumers besides opting into sharing data and receiving the connected services or opting out of the service entirely if they do not wish to share data. 
In interviews, all 13 automakers told us that they obtain explicit consent before initiating services that require data to be collected and transmitted, typically through the consumer signing a service agreement or activating the service. Also, the majority of automakers (8 of 13) seek consent again in certain circumstances (e.g., when updating a service subscription or if the company’s data use practices will change significantly). In addition, most of the notices (13 of 15) discussed consumer choices, including how to opt out of sharing data with the automaker. However, all 13 automakers told us that while consumers can opt out of sharing data, doing so would typically mean losing all connected vehicle functionality, in part because connected vehicle services are often bundled.

Accountability: Selected automakers reported using various methods to ensure that their staff and third parties receiving personal data handle them properly, and most automakers’ notices discussed the methods used. In interviews, almost all automakers (12 of 13) reported that they work to ensure that third parties receiving data meet certain requirements, such as following the automaker’s privacy policies, and most reported including data-handling requirements in their contractual agreements. Several also reported imposing additional requirements, such as asking third parties to conduct privacy risk assessments. Nine automakers also reported that they conduct risk assessments related to third parties’ use of data collected from connected vehicles. Most automakers’ notices included descriptions of the methods used to promote accountability and designated which entity is ultimately responsible for properly handling data. For example, almost all notices (14 of 15) named the company responsible for handling personal data and provided contact information. In addition, the majority of notices (9 of 15) outlined requirements that third parties must meet before receiving data.
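Several of the practices above depend on de-identifying data before wider use; as discussed earlier, all 13 automakers told us they use de-identified data when possible. We have previously reported, however, that some de-identification methods still allow an individual to be re-identified. The sketch below is a hypothetical illustration of that risk: replacing a vehicle identification number (VIN) with its hash is pseudonymization, not true de-identification, because VINs come from a small, structured space that an attacker can enumerate. The VIN values and record layout are illustrative assumptions, not any automaker's actual practice.

```python
import hashlib

def pseudonymize(vin: str) -> str:
    """Replace a VIN with its SHA-256 hash (pseudonymization)."""
    return hashlib.sha256(vin.encode("utf-8")).hexdigest()

# A "de-identified" record: the VIN is hashed, but precise
# location data are retained alongside the pseudonym.
record = {
    "vin_hash": pseudonymize("1HGCM82633A004352"),
    "lat": 38.8977,
    "lon": -77.0365,
}

# Re-identification by enumeration: an attacker who can list
# candidate VINs hashes each one and compares it to the stored value.
candidates = ["1HGCM82633A004351", "1HGCM82633A004352", "1HGCM82633A004353"]
recovered = [v for v in candidates if pseudonymize(v) == record["vin_hash"]]
print(recovered)  # the original VIN is recovered
```

This is one reason different de-identification methods and retention practices, as we reported in 2013, can lead to varying levels of consumer protection: techniques such as keyed hashing with a secret, aggregation, or adding noise reduce this enumeration risk, while a plain hash does not.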
Views differ on the importance and effectiveness of privacy notices in providing privacy protections for consumers. For example, FTC’s 2012 report recommended that companies should provide easy-to-use choice mechanisms that allow consumers to control whether their data are collected and how they are used; such mechanisms could include privacy notices. In the report, FTC also recommended that privacy notices should be made clearer, shorter, and more standardized to increase consumers’ ability to comprehend and compare various companies’ data practices. Most (14 of 16) selected experts in our review agreed with that FTC recommendation. However, FTC, we, and others have acknowledged that improved notices alone cannot guarantee consumer protections. Specifically, FTC has argued that clearer notices and improved consumer choices would need to be combined with other privacy practices, such as focused data collection and data security, to provide substantive privacy protections for consumers. Furthermore, FTC stated that when combined, such practices would help accomplish a broader goal of shifting the burden for protection away from consumers and to the companies handling consumer data. We have also reported that notices alone do not guarantee consumer protections. For example, as we reported in 2016, some consumers do not take the time to read notices, decreasing their ability to provide fully informed consent. In another example, four experts in our review mentioned the multiple decisions and corresponding large amount of paperwork required for buying a vehicle as factors that would make it less likely for a consumer to thoroughly read the privacy notice. 
In interviewing selected experts on privacy issues related to connected vehicle data, we presented them with general privacy concerns about the commercial collection and use of data. We identified these concerns from our and other federal agencies’ reports and from interviews with organizations that advocate protecting the privacy of consumers’ data, and we asked the experts whether the concerns applied to data collected through connected vehicles. A majority of the experts generally agreed that these general data privacy issues, as described in table 2, apply to connected vehicle data.

All selected experts agreed that tracking, loss of consumer control over personal information, and potentially insecure data were relevant privacy concerns. They emphasized that using location data to track individuals is particularly relevant in the context of vehicles. For example, one expert said location data could paint a picture of an individual’s life, revealing with whom they associate, the doctors they see, and the places they frequent. Experts also raised concerns about potentially inappropriate or illegal uses of location data, such as stalking. All experts expressed concern about potential data access through a security breach; however, no one we interviewed, including automakers, selected experts, industry groups, or government officials, was aware of an incident where a database storing connected vehicle data had been compromised maliciously. Regarding loss of consumer control over personal information, one expert explained that it is not possible for a consumer to know exactly what is collected, when, and how the data are used. Another expert noted that other technologies face this same challenge; however, consumers may be less aware of what their vehicle is doing than their computer or smartphone. In addition, vehicles may be used by multiple individuals, and one expert expressed concern about how multiple drivers of a car would be informed about data collection.
Some experts thought data sharing and third-party use could become a greater issue as the auto industry evolves. These issues mirror concerns we reported on in 2013 and 2015 about the collection, use, and sharing of personal data by commercial entities. The majority of experts we interviewed also agreed that the lack of sufficiently informed consent (due to low consumer awareness and lack of company transparency), disparate treatment, and little or no consumer choice were relevant privacy concerns. Several experts said that, as in other industries, informed, meaningful consent is difficult to obtain, as consumers may not read notices and automakers may not present privacy information clearly. Regarding disparate treatment, two experts raised the example that data from connected vehicles could potentially be used to treat consumers differently, and unfairly, in the provision of auto insurance. Finally, experts raised concerns about consumer choice. For example, as described above, several experts noted that consumers must provide consent to all data collection and use or not receive any services. Another expert said that consumers have limited choice because vehicles are essential to people’s lives. Similarly, another expert noted that it is difficult to compare privacy practices across automakers and connected vehicle platforms and that consumers cannot easily change their minds after buying a car, as it is a large financial investment. While we did not ask selected experts to comment on individual automaker policies, some were concerned about automakers’ efforts to protect consumer privacy. As previously discussed, automakers signed onto the Consumer Privacy Protection Principles to demonstrate their commitment to protect consumers’ privacy. However, the majority of experts we interviewed (13 of 16) did not think that these principles provide sufficient guidance to inform automakers’ actions or protect consumers’ privacy. 
Some experts noted that other industries, such as the credit card industry, have developed more specific self-regulatory guidelines that include enforcement measures and better protect consumers. Most selected experts said the Consumer Privacy Protection Principles lacked specificity about, for example, data use and consumers’ right to protection. For example, they said the principles used “vague language” and allowed for data use without consumers’ affirmative consent for “legitimate business purposes,” which are not clearly defined. To remedy these issues, six experts said the Consumer Privacy Protection Principles should be more specific, and four said the principles should define restrictions on data use.

Most experts we interviewed agreed automakers should limit data retention (13 of 16) and limit data collection (12 of 16). While the majority of experts thought that automakers should de-identify data (11 of 15) for focused data use, four experts expressed skepticism about this practice, including whether it is possible to completely de-identify data. In 2013, we reported concerns about de-identifying location data. Specifically, we found that some methods of de-identification can allow for an individual to be re-identified, and that different de-identification methods and data retention practices may lead to varying levels of consumer protection. Other suggestions to improve the Consumer Privacy Protection Principles included making the principles enforceable, making privacy information accessible and transparent, explaining the rationale or risks and benefits of data use, and laying out the trade-offs for consumers. All selected experts agreed that automakers should be required to obtain explicit consent for the use of sensitive data or data used in a manner beyond a consumer’s expectations.
In addition, the majority of experts we interviewed (13 of 16) agreed that automakers should obtain consumer consent at the time and in the context in which consumers are making a decision about their data. Several experts said that consumers should have access to personal data collected about them. However, auto industry trade association officials said that the Consumer Privacy Protection Principles provide automakers with a sufficient framework to address privacy issues and allow automakers the flexibility to tailor implementation. While no federal law expressly confers broad privacy protections for consumers’ data and no single federal agency oversees data privacy issues, the FTC Act gives FTC the authority to bring actions against companies or individuals that engage in unfair or deceptive acts or practices in or affecting commerce. According to FTC officials, the FTC Act applies to privacy and data security issues for connected vehicles. For example, FTC officials said they could use this authority to bring an action against an automaker that uses a consumer’s data without his or her consent or in a way that violates the manufacturer’s stated privacy policy. To date, FTC has not brought such a public enforcement action against a connected vehicle manufacturer or its affiliates, but it has brought such actions against other companies offering services in the Internet of Things. For example, in 2016, FTC settled a case alleging that critical security flaws in a company’s routers put the home networks of hundreds of thousands of consumers at risk. In addition, as the primary agency with authority over consumer privacy, FTC has ongoing efforts related to protecting the privacy of consumers that use connected devices in the Internet of Things, which includes connected vehicles. 
FTC and FTC staff have issued guidance, including two reports—Protecting Consumer Privacy in an Era of Rapid Change: Recommendations for Businesses and Policymakers and Internet of Things: Privacy and Security in a Connected World—both of which outline best practices for companies. Prior to issuing these reports, FTC held outreach forums to gather views from a variety of stakeholders on these issues. One of these forums included a panel specifically focused on consumer-facing technology in connected vehicles, which covered, among other things, an overview of these technologies, security issues, and the diversity of auto industry practices. NHTSA, according to agency officials, has broad authority over the safety of passenger vehicles and may issue voluntary guidance or mandate standards through a rulemaking process to address safety, but it does not have the authority to regulate consumer privacy as it relates to motor vehicles or motor vehicle data. However, according to NHTSA officials, the agency is required to consider the privacy impacts of its regulatory activities. Specifically, NHTSA is required to conduct privacy impact assessments and inform the public about any consumer privacy impacts that may stem from its activities and the motor vehicle safety standards issued by the agency. Also as part of the rulemaking process, NHTSA examines privacy as a component of public acceptance (i.e., will the public accept and use the mandated technology). According to NHTSA officials, this is an aspect of “practicability” the agency is required to consider when it proposes a motor vehicle safety standard under the Motor Vehicle Safety Act. NHTSA may also address privacy in voluntary guidance it issues for technologies that it expects will have safety benefits and that it may regulate in the future. Recent efforts by NHTSA to address emerging safety technologies align with NHTSA’s goals of ensuring safety on the roadways and keeping pace with trends impacting consumers. 
These efforts also illustrate how NHTSA has addressed privacy issues related to emerging technologies. For example, in December 2016, NHTSA proposed a rulemaking on vehicle-to-vehicle technology that, according to NHTSA, is expected to provide safety benefits. The proposed vehicle-to-vehicle rule involves the broadcast, collection, and storage of data that include location and other information about passenger vehicles. As described in the Notice of Proposed Rulemaking, consumer acceptance of vehicle-to-vehicle technologies—which will depend on addressing consumer privacy concerns, among other concerns—is crucial to achieving the expected safety benefits of this technology. As a result, NHTSA expects manufacturers to take steps to minimize consumer privacy risks by providing a clear and transparent vehicle-to-vehicle privacy notice to consumers.

Similarly, in September 2016, NHTSA issued the Federal Automated Vehicles Policy to speed the delivery of an initial regulatory framework and best practices to guide manufacturers and other entities in the safe design, development, testing, and deployment of highly automated vehicles (e.g., vehicle systems capable of monitoring the driving environment). As described in this voluntary guidance, NHTSA views automated vehicle technology as capable of bringing significant safety benefits. The guidance, among other things, outlined a set of privacy principles and recommended that automakers manufacturing automated vehicle technologies adopt these or similar principles. In addition, manufacturers and other entities are asked to voluntarily provide information on how the guidance is being followed, including how they are addressing privacy.

According to officials from FTC and NHTSA, in recent years the agencies have collaborated on vehicle data privacy and coordinated their respective efforts in this area.
For example, FTC and NHTSA staff have recently met monthly to discuss cybersecurity and privacy issues related to passenger vehicles. FTC and NHTSA also hosted a workshop in June 2017 on consumer privacy and security issues posed by automated and connected vehicles. Specifically, the workshop aimed to bring together multiple industry stakeholders, consumer advocates, academics, and government regulators to discuss various issues, including potential benefits and challenges posed by the collection of connected and automated vehicle data.

In addition to FTC and NHTSA, other federal agencies doing work on privacy issues include the National Institute of Standards and Technology and the National Telecommunications and Information Administration within the Department of Commerce. For example, in January 2017, the National Institute of Standards and Technology issued a report recommending practices for including privacy risk assessments when designing federal systems; according to agency officials, the report could also serve as guidance for private companies, including automakers. In addition, the National Telecommunications and Information Administration has convened multistakeholder processes with industry and other stakeholders focused on developing voluntary codes of conduct for the commercial use of emerging technologies, such as facial recognition technology. According to National Telecommunications and Information Administration officials, no additional privacy-focused multistakeholder processes are planned at this time.

The auto industry is currently undergoing a rapid evolution as vehicles become more connected and automated. For example, most automakers and industry associations in our review agreed that auto technology is rapidly evolving, and some of these stakeholders noted that the auto industry more broadly is evolving.
As several of these stakeholders told us, this evolution will result in more data, including more sensitive data, being collected, used, and shared. NHTSA’s recent actions on emerging vehicle technologies align with the agency’s safety mission, authority, and goals. However, NHTSA has not clearly defined and communicated its roles and responsibilities related to the privacy of connected vehicle data to stakeholders. Specifically, according to some automakers and all industry associations we spoke with, NHTSA’s recent actions on emerging vehicle technologies have left stakeholders without a clear understanding of the agency’s role with respect to privacy. For example, three automakers noted that NHTSA appears to be more involved in data privacy, as reflected by the Federal Automated Vehicles Policy. One automaker also questioned whether NHTSA was coordinating on data privacy issues with other relevant federal agencies. Five industry associations told us that NHTSA appears to have an interest in the area of privacy. Four of these associations also told us that NHTSA might have a role in monitoring its members’ use of connected vehicle data, but they were not sure. In addition, in public comments filed on the Federal Automated Vehicles Policy, one auto industry trade group questioned whether considering privacy issues is consistent with NHTSA’s safety mission and suggested that certain privacy provisions in the policy exceed NHTSA’s current statutory authority.

In contrast, all 11 automakers that discussed this topic and all industry associations in our review told us that FTC’s role in this area is clear. For example, representatives of both auto industry associations told us that they had chosen to notify the FTC about their members’ implementation of the Consumer Privacy Protection Principles because FTC would be the federal agency to enforce and hold automakers accountable to these principles.
NHTSA officials acknowledged that some stakeholders may be uncertain whether and, if so, to what extent it has authority to address any privacy issues with respect to motor vehicles. The Standards for Internal Control in the Federal Government call for agencies to identify, analyze, and respond to significant changes, and in response to such changes, to periodically reevaluate and further define key agency roles and responsibilities. These standards also direct agencies to clearly communicate relevant information—such as the agencies’ roles, responsibilities, and important changes to these—to external parties. These standards are intended, among other things, to help agencies manage change associated with shifting environments and evolving demands and priorities. We have also previously found that interagency collaboration is enhanced when agencies, among other things, ensure that their roles and responsibilities are clearly defined. By agreeing on and clearly defining roles and responsibilities, agencies can clarify which agency will do what, organize their joint and individual efforts, and facilitate better decision making. As previously described, NHTSA and FTC have collaborated on the potential privacy risks posed by new vehicle technologies; clarifying NHTSA’s role could therefore further enhance collaboration with FTC and other federal counterparts. Furthermore, if NHTSA more clearly defined its roles and responsibilities for protecting the privacy of connected vehicle data, industry stakeholders would have a better understanding of how the agency intends to oversee the privacy of data generated by emerging vehicle safety technologies and which agency is responsible for privacy as the connected vehicle landscape continues to evolve. In recent years, connected vehicles have become more common, offering consumers a number of benefits but also increasing the potential for privacy risks. 
Currently, automakers are reportedly collecting, using, and sharing connected vehicle data on a fairly limited basis and are, at least partially, using leading privacy practices to protect that data. However, experts and others have raised a number of consumer privacy concerns, including whether such data—including sensitive information such as a driver’s location and behavior—are being adequately protected. No single federal agency oversees data privacy issues. However, FTC has the primary role—one that is clearly defined and reinforced through its enforcement actions against companies that have engaged in unfair and deceptive practices, such as violating their own privacy practices or failing to implement reasonable security practices. Although NHTSA has a clear role in overseeing the safety of the estimated 265 million passenger vehicles on U.S. roads today, industry stakeholders are unclear about NHTSA’s role with respect to vehicle data privacy, due to recent agency actions on automated vehicles and vehicle-to-vehicle technology. With the anticipated increase in the number of connected vehicles on U.S. roads in the near future—and with it, the financial incentives for automakers and others to more widely collect, use, and share vehicle and driver data—it is important for NHTSA to define and communicate its privacy roles and responsibilities. If NHTSA makes its roles and responsibilities clearer, industry stakeholders will likely have a better understanding of its oversight role for emerging vehicle technologies, and NHTSA will likely be more effective in collaborating with FTC and other federal agencies. However, if that opportunity is missed, consumers may not fully embrace emerging technologies with potential safety benefits, such as vehicle-to-vehicle technology and automated vehicles. 
With a forward-looking approach to identify changes in the environment and clarifying its roles and responsibilities as needed in response, NHTSA can anticipate and—in collaboration with FTC—better plan for the anticipated changes that could impact the privacy of many Americans. The Secretary of Transportation should direct NHTSA to define, document, and externally communicate the agency’s roles and responsibilities in relation to connected vehicle data privacy. We provided a draft of this report to the Departments of Transportation (DOT), Commerce, and Justice, FTC, and the Federal Communications Commission for review and comment. We received written comments from DOT, which are reprinted in appendix IV. We also received technical comments from DOT and FTC, which we have incorporated, as appropriate. DOT concurred with our recommendation to define, document, and externally communicate the agency’s roles and responsibilities in relation to connected vehicle data privacy. Among other things, DOT reiterated the importance of consumer privacy and how it considers the privacy implications of its regulations and voluntary guidance, such as in the proposed vehicle-to-vehicle rulemaking. The Departments of Commerce and Justice and the Federal Communications Commission reviewed our report, but did not have any comments. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretaries of Transportation, Commerce, and Justice and the Chairs of the Federal Trade Commission and Federal Communications Commission, and other interested parties. In addition, the report will be available at no charge on the GAO website at https://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or [email protected]. 
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V. This report addresses the following objectives: (1) the types of data collected by connected vehicles and transmitted to automakers and how, if at all, selected automakers use and share these data, (2) the extent to which selected automakers’ privacy policies for connected vehicles align with leading practices, (3) selected experts’ views on privacy issues related to the commercial use of data collected by connected vehicles, and (4) federal roles and efforts related to the privacy of data collected by connected vehicles. To address all of our objectives, we reviewed applicable federal statutes and regulations, our prior work, and reports by other federal agencies, academics, and research organizations on privacy and connected technologies. We interviewed representatives of three organizations that advocate protecting the privacy of consumers’ data, identified and selected based on their contributions to our prior work. We also interviewed eight industry associations representing automakers, automotive suppliers, application developers, and telecommunication companies. The industry associations we interviewed were: Association of Global Automakers, Alliance of Automobile Manufacturers, Application Developers Alliance, Connected Vehicle Trade Association, Consumer Technology Association, CTIA-The Wireless Association, Motor and Equipment Manufacturers Association, and the National Automobile Dealers Association. To determine types of data collected from connected vehicles and how, if at all, these data are used, we conducted semi-structured interviews with representatives of 16 automakers and 3 other industry stakeholders. 
We attempted to interview most of the automakers selling passenger vehicles in the United States and identified them using the membership lists of two automotive industry trade associations, the Alliance of Automobile Manufacturers and the Association of Global Automakers. We selected automakers to interview based on their 2015 U.S. market share, specifically, a market share that was larger than zero. Based on our discussions with the automotive trade associations, we excluded two automakers due to their very small U.S. market share, and two other automakers we contacted told us they no longer sell passenger vehicles in the U.S. We identified and selected one additional automaker, which is not a member of these automotive industry trade associations, due to its growing market share and high profile in the connected vehicle market. Of the 19 automakers we contacted, three did not respond to our interview request. For the complete list of automakers we interviewed, see table 3. We identified and selected other industry stakeholders based on their industry roles, specifically their roles as telecommunication companies, telematics service providers, and application developers. Of the 12 other industry stakeholders we contacted, we interviewed three stakeholders representing two telecommunications companies and one telematics service provider. Given the small number of other industry stakeholders interviewed, the names of these companies are not included in this report. We used a semi-structured format in our interviews with automakers and other industry stakeholders and asked each type of stakeholder the same set of questions. For example, we asked each automaker what types of data it collects and how it uses and shares these data. We asked each of the 13 automakers that offer connected vehicles a set of additional questions to clarify its use and sharing of data and its privacy practices. 
After interviewing selected automakers, we summarized and analyzed their responses to identify themes relevant to our research objectives, such as the types of data collected. The views and information gathered through our interviews with selected automakers and industry stakeholders cannot be generalized to the industry as a whole. However, the 16 selected automakers we interviewed produce over 25 vehicle brands and represented around 90 percent of the U.S. passenger vehicle sales market share in 2015. To determine the extent to which selected automakers’ reported privacy practices and written privacy notices (collectively we refer to these as “privacy policies”) for connected vehicles reflect leading practices, we identified leading practices related to privacy using several widely recognized sources. The privacy frameworks and reports we used for this analysis are: (1) the Organisation for Economic Co-operation and Development’s The OECD Privacy Framework (known as the “Fair Information Practice Principles,” or FIPPs); (2) the Federal Trade Commission’s (FTC) report, Protecting Consumer Privacy in an Era of Rapid Change: Recommendations for Businesses and Policymakers (2012); (3) FTC Staff Report, Internet of Things: Privacy and Security in a Connected World (2015); (4) the National Highway Traffic Safety Administration’s (NHTSA) Federal Automated Vehicles Policy (2016); and (5) the National Institute of Standards and Technology’s NISTIR 8062: An Introduction to Privacy Engineering and Risk Management in Federal Systems (2017). We compared the leading practices identified in each source and grouped practices into similar categories. We did not deem any practices from the sources to be irrelevant for our review on connected vehicles, but we did determine that similar identified practices could be combined into one leading practice. 
The privacy leading practices we used for our analysis are transparency, focused data use, data security, data accuracy and access, individual control, and accountability. To determine the extent to which automakers’ privacy policies reflect these leading practices, we analyzed selected automakers’ written privacy notices through a document review and reported privacy practices related to connected vehicles based on automakers’ responses to semi-structured interview questions. Of the 16 automakers we interviewed, 13 offered connected vehicles at the time we spoke to them. While many of these automakers offer several brands, one automaker had different written privacy notices for its three vehicle brands sold in the U.S. As such, we analyzed a total of 15 sets of privacy policies (their written notices and reported privacy practices). To analyze the written privacy notices, we developed questions to determine whether an automaker’s written privacy notice specifically addressed the various elements of a privacy leading practice. We conducted a test case with one automaker’s written privacy notice and made needed revisions to the questions. Then, one analyst coded each of the 15 written privacy notices using NVivo software. A second analyst reviewed the coding and confirmed the accuracy of the results. To analyze the reported privacy practices, we conducted a content analysis of automakers’ interview responses to questions related to their privacy practices. All automakers were asked the same set of questions, and we conducted follow-up with each automaker offering connected vehicles to ensure consistency in responses and ensure that we asked questions related to each of the leading privacy practices identified as relevant for connected vehicles for the purposes of this report. As part of our follow up work, we asked selected automakers questions directly related to the leading privacy practices. 
One analyst coded the interview responses and a second analyst reviewed and confirmed the accuracy of the coding. To assess whether an automaker’s policies reflected each of these leading privacy practices, we used the following scale: Substantially reflected: a company met most (70 percent or more) of the elements of this leading practice. Partially reflected: a company met about half of the elements of this leading practice. Minimally reflected: a company met less than half of the elements of this leading practice. For each of the six leading practices, there were between 5 and 10 questions of equal weight that we used to determine the extent to which a practice was reflected. As noted above, our analysis of automakers’ privacy policies is based on written notices and reported practices obtained through interviews. We did not conduct a compliance review, as the leading practices used in this report are not legally binding. We also did not evaluate the extent to which selected automakers follow their reported privacy policies. To determine selected experts’ views on privacy issues related to the commercial use of data collected by connected vehicles, we interviewed 16 subject matter experts in connected vehicles or privacy. We identified a prospective pool of subject matter experts through reviewing our prior reports, related National Academy of Sciences panels, relevant literature, and recommendations from other interviewees. We selected subject matter experts using eight criteria: relevant background, selected academic publications and technical reports, selected presentations at conferences, selected popular source articles, selected testimonies, selected instances of interviews on prior engagements, selected professional service and appointments, and recommendations from other interviewees. 
Those organizations and individuals that had relevant experience in at least four of our eight criteria were deemed experts for the purposes of our review (see table 4 for the full list of experts we interviewed). As part of these semi-structured interviews, we presented each selected expert with general privacy concerns about the commercial use of data we identified from our prior reports, FTC reports, and preliminary interviews with organizations that advocate protecting consumers’ data privacy. We asked each selected expert to what extent these privacy concerns were relevant to data collected through connected vehicles. After interviewing selected experts, we summarized and analyzed their responses to identify themes relevant to our research objective. The views and information gathered through our interviews with subject matter experts cannot be generalized to all such experts, but they do provide insight into relevant privacy concerns and solutions. To examine federal roles and efforts related to the privacy of data collected from connected vehicles, we reviewed relevant documents and interviewed officials from four federal agencies—FTC, Department of Transportation, Department of Commerce, and Federal Communications Commission—that we identified as having privacy and consumer protection responsibilities potentially related to connected vehicles. We also discussed with selected experts, automakers, other industry stakeholders, and industry associations the federal laws that may apply in this context and related federal efforts and roles. Because of DOT’s role in overseeing motor vehicles, we compared DOT’s efforts to reevaluate and define key agency roles and responsibilities as new vehicle technologies emerge with pertinent Standards for Internal Control in the Federal Government and practices identified in our prior work on agency collaboration. 
We conducted this performance audit from April 2016 to July 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. We assessed the extent to which selected automakers’ reported policies reflected each of the leading privacy practices we identified. We included in this analysis those 13 automakers offering connected vehicle services. However, one of these 13 automakers has different written privacy notices and reported privacy practices for its three affiliated brands. As a result, our analysis of automakers’ privacy policies included 15 sets of privacy policies. We reviewed the automakers’ written privacy notices and responses to specific interview questions focused on their use of these practices. We also asked automakers to confirm their answers to the interview questions directly related to the leading privacy practices. We used the following scale to categorize the extent to which each automaker’s policies reflected each leading practice: Substantially reflected: automaker met most (70 percent or more) of the elements of this leading practice. Partially reflected: automaker met about half (50 to 69 percent) of the elements of this leading practice. Minimally reflected: automaker met less than half or none of the elements of this leading practice. For this assessment, we used a total of 43 questions, each of which was given equal weight, across the six identified leading practices: 1) transparency, 2) focused data use, 3) data security, 4) data access and accuracy, 5) individual control, and 6) accountability. 
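The three-tier scale above amounts to a simple threshold classification over the share of equal-weight assessment questions an automaker met for a given practice. The following is a minimal sketch of that logic; the function name and example inputs are illustrative only and do not represent GAO's actual scoring tool:

```python
def rate_practice(elements_met: int, total_elements: int) -> str:
    """Classify how fully a policy reflects a leading privacy practice,
    using the report's thresholds: 70 percent or more of elements met is
    'substantially', 50 to 69 percent is 'partially', and less than
    50 percent is 'minimally' reflected."""
    share = elements_met / total_elements
    if share >= 0.70:
        return "substantially reflected"
    if share >= 0.50:
        return "partially reflected"
    return "minimally reflected"

# Illustrative example: an automaker meeting 6 of 8 equal-weight
# questions for one practice (6/8 = 0.75, at or above the 0.70 cutoff)
print(rate_practice(6, 8))  # prints "substantially reflected"
```

Because every question carries equal weight, only the fraction met matters; the cutoffs at 50 and 70 percent fully determine the rating.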
Some assessment questions relate to automakers’ written privacy notices and others relate to interview questions we asked about the automaker’s use of the leading practices. The 6 tables below include, for each leading practice: the assessment questions, the results for each automaker, and an overall summary of how fully all the selected automakers’ policies reflected the practice. In addition to the individual named above, the following individuals made important contributions to this report: Nancy Lueke (Assistant Director); Sarah Arnett (Analyst-in-Charge); Jessica Bryant-Bertail; Camilo Flores; Pamela Davidson; Delwen Jones; Josh Ormond; Eleni Orphanides; and John Villecco.
The prevalence of connected vehicles—those with technology that wirelessly transmits and receives data—has raised questions about how the collection, use, and sharing of these data affect consumer privacy. GAO was asked to review consumer privacy issues related to connected vehicles. This report: (1) examines the types, use, and sharing of data collected by connected vehicles; (2) determines the extent to which selected automakers' privacy policies for these data align with leading practices; and (3) evaluates related federal roles and efforts, among other objectives. GAO interviewed relevant industry associations, organizations that work on consumer privacy issues, and a non-generalizable sample of 16 automakers selected based on their U.S. passenger vehicle sales. In addition, GAO analyzed selected automakers' privacy policies (written notices and reported practices) against a set of leading privacy practices determined to be relevant to connected vehicles. To identify these practices, GAO reviewed a variety of privacy frameworks developed by federal agencies and others. GAO reviewed relevant federal statutes, regulations, and reports, and interviewed agency officials, including those from DOT, the Department of Commerce, and FTC. Thirteen of the 16 selected automakers in GAO's review offer connected vehicles, and those 13 reported collecting, using, and sharing data from connected vehicles, such as data on a car's location and its operations (e.g., tire pressure). All 13 automakers described doing so on a relatively limited basis. For example, they reported using data to provide requested services to consumers and for research and development. None of the 13 reported sharing or selling data that could be linked to a consumer for unaffiliated third parties' use. However, as connected vehicles become more commonplace, the extent of data collection, use, and sharing will likely grow. 
Automakers have taken steps, including signing onto a set of privacy principles, to address privacy issues. In comparing selected automakers' reported privacy policies to leading privacy practices, GAO found that these automakers' policies at least partially reflected each of the leading privacy practices, for example: Transparency: All 13 selected automakers' written privacy notices were easily accessible, but none was written clearly. Focused data use: Most selected automakers reported limiting their data collection, use, and sharing, but their written notices did not clearly identify data sharing and use practices. Individual control: All 13 selected automakers reported obtaining explicit consumer consent before collecting data, but offered few options besides opting out of all connected vehicle services to consumers who did not want to share their data. The Federal Trade Commission (FTC) and the Department of Transportation's (DOT) National Highway Traffic Safety Administration (NHTSA) are primarily responsible for protecting consumers and ensuring passenger vehicles' safety, respectively. FTC has the authority to protect consumer privacy and has issued reports and guidance and conducted workshops on the topic generally as well as on connected vehicles specifically. NHTSA has broad authority over the safety of passenger vehicles and considers the privacy effects and implications of its regulations and guidance. FTC and NHTSA have coordinated on privacy issues related to connected vehicles. However, NHTSA has not clearly defined its roles and responsibilities as they relate to the privacy of vehicle data. In response to emerging vehicle technologies, NHTSA included privacy requirements in a related rulemaking and included privacy expectations in voluntary guidance. Because of these actions, selected automakers and others said NHTSA's role in data privacy was unclear. 
NHTSA officials acknowledged that some stakeholders may be uncertain about its authority to address privacy issues. Federal standards for internal control require, among other things, that agencies define and communicate key roles and responsibilities. By clearly defining, documenting, and communicating NHTSA's roles and responsibilities in vehicle data privacy, NHTSA would be better positioned to coordinate with other federal agencies and to effectively oversee emerging vehicle technologies. GAO recommends that NHTSA define, document, and externally communicate its roles and responsibilities related to the privacy of data generated by and collected from vehicles. NHTSA concurred with our recommendation.
In 2004, the United States consumed about 20.5 million barrels per day of crude oil, accounting for roughly 25 percent of world oil production. A great deal of the crude oil consumed in this country goes into production of gasoline and, as a nation, we use about 45 percent of all gasoline produced in the world. California alone presently consumes almost 44 million gallons of gasoline per day. To put this in perspective, in 1997 (the last year for which we found available data for international comparisons), only the rest of the United States and Japan consumed more gasoline than California. Products made from crude oil—petroleum products, including gasoline—have been instrumental in the development of our modern lifestyle. In particular, gasoline, diesel, and jet fuel have provided the nation with affordable fuel for automobiles, trucks, airplanes, and other forms of public and goods transportation. Together, these fuels account for over 98 percent of the U.S. transportation sector’s fuel consumption. In addition, petroleum products are used as raw materials in manufacturing and industry; for heating homes and businesses; and, in small amounts, for generating electric power. Gasoline use alone constitutes about 44 percent of our consumption of petroleum products in the United States, so when gasoline prices rise, as they have in recent months, the effects are felt throughout the country, increasing the costs of producing and delivering basic retail goods and making it more expensive to commute to work. It is often the case that prices of other petroleum products also increase at the same time and for the same reasons that gasoline prices rise. For example, today’s high gasoline prices are mirrored by high jet fuel prices, which have put pressure on airline companies, some of which are currently in the midst of financial difficulties. Gasoline prices vary a great deal over time. 
For example, in the 10-year period April 1995 through April 2005, the national average price for a gallon of regular grade gasoline has been as low as $0.89 and as high as $2.25 without adjusting for inflation. In addition, gasoline prices vary by location and, in recent years, California has consistently had among the highest prices in the nation. The future path of gasoline prices is difficult to predict, but it is clear that the use of petroleum products worldwide is going to increase for the near term and maybe beyond. Some analysts have predicted much higher crude oil prices—and as a result, higher prices of petroleum products—while others expect prices to moderate as producers respond to high prices by producing more crude oil and consumers respond by conserving more, and investing in more energy-efficient cars and other products. In either case, the price of gasoline will continue to be an important part of the household budgets of Americans for the foreseeable future and therefore, it is important to understand how prices are determined so that consumers can make wise choices. Crude oil prices feed directly into the price of gasoline, because crude oil is the primary raw material from which gasoline is produced. For example, according to our analysis of EIA data, crude oil accounted for about 48 percent of the price of a gallon of gasoline on average in 2004 in the United States. When crude oil prices rise, as they have in recent months, refiners find their cost of producing gasoline also rises, and in general, these higher costs are passed on to consumers in the form of higher gasoline prices at the pump. Figure 2 illustrates the importance of crude oil in the price of gasoline. The figure also shows that taxes, refining, and distribution and marketing also play important roles. 
Because of the prominent role of crude oil as a raw material of gasoline production, in order to understand what determines gasoline prices it is necessary to examine how crude oil prices are set. Overall, the price of crude oil is determined by the balance between world demand and supply. A major cause of rising crude oil prices in recent months has been rapid growth in world demand, without a similar growth in available supplies. In particular, the economy of China has grown rapidly in recent years, leading to increases in their demand for crude oil. In contrast, oil production capacity has grown more slowly, leading to a reduction in the surplus capacity—the amount of crude oil that is left in the ground, but could be extracted on short notice in the event of a supply shortfall. EIA has stated that the world’s surplus crude oil production capacity has fallen to about one million barrels per day, or just over one percent of the world’s current daily consumption, making the balance between world demand and supply of crude oil very tight. This tight balance between world crude oil demand and supply means that any significant supply disruptions will likely cause prices to rise. For example, a workers’ strike in Nigeria’s oil sector in October 2004 forced world crude oil prices to record highs (Nigeria is the world’s seventh largest oil producer, supplying an average 2.5 million barrels per day in 2004). Another important factor affecting crude oil prices is the behavior of the Organization of Petroleum Exporting Countries (OPEC)—members of which include Algeria, Indonesia, Iran, Iraq, Kuwait, Libya, Nigeria, Qatar, Saudi Arabia, United Arab Emirates, and Venezuela. OPEC members produce almost 40 percent of the world’s crude oil and control almost 70 percent of the world’s proven oil reserves. 
In the recent past and on numerous other occasions, OPEC members have collectively agreed to restrict production of crude oil in order to increase world prices for that commodity. In addition to the cost of crude oil, gasoline prices are influenced by a variety of other factors, including refining capacity constraints, low inventories, unexpected refinery or pipeline outages, environmental and other regulations, and mergers and market power in the oil industry. First, domestic refining capacity has not kept pace with growing demand for gasoline. As demand has grown faster than domestic refining capacity, the United States has imported larger and larger volumes of gasoline and other petroleum products from refiners in Europe, Canada, and other countries. EIA officials told us that, in general, this increase in imports has reflected the availability of gasoline from foreign sources at lower cost than building and operating additional refining capacity in the United States would entail. However, the American Petroleum Institute (API) has recently reported that capacity utilization has been high in the U.S. refinery sector. Capacity has typically averaged over 90 percent, and has recently increased to 92 percent—much higher than the rate in many other industries, which API reports are more typically operating at around 80 percent of capacity. As a result, domestic refineries have little room to expand production in the event of a temporary supply shortfall. Further, the fact that imported gasoline comes from farther away than domestically produced gasoline means that when supply disruptions occur in the United States, it might take longer to get replacement gasoline than if we had excess refining capacity in the United States, and this could cause gasoline prices to rise and stay high until these new supplies can reach the market. 
Gasoline prices may also be affected by unexpected refinery outages or accidents that significantly disrupt the delivery of gasoline supply. For example, in a recent report, we found that unexpected refinery outages had been a factor in a number of price spikes in California in the 1990s. More recently, the tragic explosion and subsequent fire at a BP refinery in Houston, which killed 15 people, temporarily shut down about 3 percent of the nation’s refining capacity. While we have not analyzed the potential impact on gasoline prices of this specific event, similar events in the past have caused temporary increases in prices until alternative sources of supply can be brought to market. Pipeline disruptions can have a similar effect, as was seen when Arizona’s Kinder Morgan pipeline broke in July 2003 and average gasoline prices jumped 56 cents in a month in Arizona. In addition, tanker spills and other similar events can all have an impact on gasoline prices at various points in time because they cause interruption in the supply of crude oil or petroleum products, such as gasoline. The level of gasoline inventories can also play an important role in determining gasoline prices over time because inventories represent the most accessible and available source of supply in the event of a production shortfall or increase in demand. Similar to trends in other industries, the level of inventories of gasoline has been falling for a number of years. In part, this reflects a trend in business to more closely balance production with demand in order to reduce the cost of holding large inventories. However, reduced inventories may contribute to increased price volatility, because when unexpected supply disruptions or increases in demand occur, there are lower stocks of readily available gasoline to draw from. This puts upward pressure on gasoline prices until new supplies can be refined and delivered domestically, or imported from abroad. 
Regulatory steps to reduce air pollution have also influenced gasoline markets and consequently gasoline prices. For example, since the 1990 amendments to the Clean Air Act, the use of various blends of cleaner-burning gasoline—so-called "boutique fuels"—has grown. A number of reports by government agencies, academics, and private entities have concluded that the proliferation of these special gasoline blends has put stress on the gasoline supply infrastructure and may have led to increased price volatility because areas that use special blends cannot as easily find suitable replacement gasoline in the event of a local supply disruption. However, these special gasoline blends provide environmental and health benefits because they reduce emissions of a number of pollutants. GAO is currently working on a report on special gasoline blends that will examine these issues and discuss the effects of these special blends on emissions and on the supply system. Finally, we recently reported that industry mergers increased market concentration and in some cases caused higher wholesale gasoline prices in the United States from the mid-1990s through 2000. Overall, the report found that the mergers led to price increases averaging about 2 cents per gallon. For conventional gasoline, the predominant type used in the country, the change in the wholesale price due to specific mergers ranged from a decrease of about 1 cent per gallon, due to efficiency gains associated with the merger, to an increase of about 5 cents per gallon, attributed to increased market power after the merger. For special blends of gasoline, wholesale prices increased by between 1 and 7 cents per gallon, depending on location. California, and the West Coast states more generally, have consistently had among the highest gasoline prices in the nation. 
For example, California’s gasoline prices averaged about 21 cents more per gallon than national gasoline prices over the last ten years. In addition, California has at times had more volatile gasoline prices than the rest of the country. For example, in an earlier report on California gasoline prices, we noted that, while gasoline prices did not spike more frequently than in the rest of the United States, California’s gasoline price spikes were generally higher. Many of the factors influencing gasoline prices nationwide have had an even more dramatic effect on California prices. For example, California’s high gasoline prices have been attributed, in part, to its cleaner-burning gasoline. In response to air quality problems and in order to meet air quality standards resulting from the Clean Air Act and its amendments, California adopted a unique blend of gasoline in 1996 that increased refining costs and likely caused prices of gasoline in the state to rise. California’s blend of gasoline is unique in the United States and, according to EPA models, is the cleanest burning of all the widely used special gasoline blends in the country. This gasoline blend is also very difficult to make, and the refineries that chose to make it had to install expensive new equipment and refining processes in order to meet the gasoline's specifications. Some studies have suggested that the current blend of California gasoline costs between 5 and 15 cents more per gallon to make than conventional gasoline. It is likely that these costs are passed on, at least in part, to consumers. In addition, in recent years, California has developed a tight balance between supply and demand, which has at times led to sharper or longer price spikes when supply disruptions have occurred. Expansion of the gasoline supply infrastructure has not kept pace with growing demand, and as a result, the California refinery system has run at near capacity. 
For example, according to EIA testimony before the Congress, demand for gasoline in California has grown at roughly two to four times the rate of production capacity growth. California Energy Commission staff told us that the tight supply and demand balance has led to large price movements in response to even small supply disruptions caused by refinery outages and other events. Moreover, supply disruptions may have a larger impact on California than on other states. First, only a few refineries outside of the state can produce California’s special blend of gasoline. In addition, there are no major pipelines connecting the state with other major refining areas. Therefore, if supply is disrupted in California, gasoline must be brought in from the few refineries outside the state that make California’s blend of gasoline, often from as far away as the Gulf Coast or beyond. And because of the lack of pipeline access to the state, tankers and other means must be used, and the process is slow. For example, we recently reported that gasoline shipped into California by tanker from such places as the Gulf Coast, the U.S. Virgin Islands, Europe, and Asia can take between 11 and 40 days to arrive and can add 3 to 12 cents per gallon to the retail price. Another factor contributing to the prices Californians pay at the gasoline pump is that residents of California pay comparatively higher gasoline taxes than residents of many other states. For example, at about 57 cents per gallon on average, California’s total gasoline tax rate is among the highest in the nation, behind only New York and Hawaii, and is 30 percent higher than the national average of 44 cents per gallon, according to a November 2004 survey by the American Petroleum Institute. In our recent report on oil industry mergers discussed earlier in this testimony, we found that the highest price impact of mergers—over 7 cents per gallon of gasoline—was in California. 
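The tax comparison cited above can be checked with one line of arithmetic; the snippet below is a simple illustration using only the per-gallon figures quoted from the API survey.

```python
# Verify that California's 57-cent-per-gallon gasoline tax is roughly
# 30 percent above the 44-cent national average cited in the text.
ca_tax = 0.57   # California total gasoline tax, $/gallon (API, Nov. 2004)
us_avg = 0.44   # national average gasoline tax, $/gallon

premium = (ca_tax / us_avg - 1) * 100
print(f"California tax premium over the national average: {premium:.0f}%")
```

With these rounded inputs the premium computes to just under 30 percent, consistent with the figure in the text.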
In addition, the California Attorney General recently reported that California’s gasoline industry is more concentrated than that of the rest of the United States, with California’s six largest refiners controlling more than 90 percent of refining capacity. The California Attorney General further noted that these six refiners control a majority of the terminal facilities and 85 percent of the retail locations in the state. To the extent that these factors lead to greater market power on the part of refiners or gasoline marketers, prices may be higher as a result. However, we have not analyzed this directly. Looking into the future, daunting challenges lie ahead in finding, developing, and providing sufficient quantities of oil to meet projected global demand. For example, according to EIA, world oil demand is expected to grow to nearly 103 million barrels per day in 2025 under low-growth assumptions, and may reach as high as 142 million barrels per day in 2025, increases of between 25 and 71 percent from the 2004 consumption level of 83 million barrels per day. For the United States alone, EIA estimates that oil consumption will increase by between 1.2 and 1.9 percent annually through 2025, depending on assumptions about economic growth and other factors. Looking further ahead, the rapid pace of economic growth in China and India, two of the world’s most populous and fastest growing countries, may lead to a similarly rapid increase in their demand for crude oil and petroleum products. While these countries currently consume only a small fraction of world crude oil, the pace of their demand growth could have far-reaching implications if recent trends continue. For example, consumption of oil by China and India is currently far below that of the United States, but is projected to grow at a more rapid rate. 
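The projected increases quoted above follow from simple growth arithmetic; the sketch below recomputes them from the EIA figures cited in the text. The 21-year horizon (2004 through 2025) is an assumption made for the compound-growth calculation.

```python
# Recompute the projected world oil demand increases from the text's figures.
base = 83.0               # 2004 world consumption, million barrels/day
low, high = 103.0, 142.0  # projected 2025 range, million barrels/day

print(f"Low-growth case:  {(low / base - 1) * 100:.0f}% increase")
print(f"High-growth case: {(high / base - 1) * 100:.0f}% increase")

# Cumulative effect of U.S. consumption growing 1.2-1.9 percent per year
years = 21  # assumed 2004-2025 horizon
for rate in (0.012, 0.019):
    multiple = (1 + rate) ** years
    print(f"At {rate:.1%} per year, 2025 U.S. demand is {multiple:.2f}x the 2004 level")
```

Note that the low case computes to about 24 percent with these rounded inputs; the 25 percent figure in the text presumably reflects unrounded EIA data.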
EIA’s medium-growth projections estimate that oil consumption for China and India will each grow by about 4 percent annually through 2025, while consumption in the U.S. is projected to grow at an annual rate of 1.5 percent over the same period. To meet the rising demand for gasoline and other petroleum products, new oil deposits will likely be developed and new production facilities built. Currently, many of the world’s known and easily accessible crude oil deposits have already been developed, and many of these are experiencing declining volumes as fields become depleted. For example, the existing oil fields in California and Alaska have long since reached their peak production, necessitating an increasing volume of imported crude oil to West Coast refineries. Developing new oil deposits may be more costly than in the past, which could put upward pressure on crude oil prices and the prices of petroleum products derived from it. For example, some large potential new sources, such as oil shales, tar sands, and deep-water oil wells, require different and more costly extraction methods than are typically needed to extract oil from existing fields. In addition, the remaining oil in the ground may be heavier and more difficult to refine, necessitating investment in additional refinery processes to make gasoline and other petroleum products out of this oil. If developing, extracting, and refining new sources of crude oil are more costly than extracting and refining oil from existing fields, crude oil and petroleum product prices will rise to make these activities economically feasible. On the other hand, technological advances in oil exploration, extraction, and refining could mitigate future price increases. In the past, advances in seismic technology significantly improved the ability of oil exploration companies to map oil deposits, which enabled them to ultimately extract the oil more efficiently, thereby getting more out of a given oil field. 
In addition, improvements in technology have enabled oil companies to drill in multiple directions from a single platform and to pinpoint specific oil deposits more accurately, which has led to increases in the supply of crude oil. Further, refining advances over the years have enabled U.S. refiners to increase the yield of gasoline from a given barrel of oil: while the total volume of petroleum products has remained relatively constant, refiners have been able to extract more of the more valuable components, such as gasoline, from each barrel, thereby increasing the supply of these components. Further technological improvements that lower costs or increase the supply of crude oil or refined products would likely lead to lower prices for these commodities. Similarly, innovations that reduce the costs of alternative sources of energy could also reduce the demand for crude oil and petroleum products and thereby ease price pressures. For example, hydrogen is the simplest element and the most plentiful gas in the universe, and its use in fuel cells produces almost no pollution. In addition, hydrogen fuel cell cars are expected to be roughly three times more fuel-efficient than cars powered by typical internal combustion engines. Currently, enormous technical problems stand in the way of converting America’s fleet of automobiles from gasoline to hydrogen, including how to produce, store, and distribute the flammable gas safely and efficiently, and how to build hydrogen cars that people can afford and will want to buy. However, federal and state initiatives, as well as many private efforts, are under way to solve these technical problems, and if they can be solved economically, the implications for gasoline use could be profound. Greater conservation or improved fuel efficiency could also reduce future demand for crude oil and petroleum products, thereby leading to lower prices. 
The amount of oil and petroleum products we will consume in the future is, ultimately, a matter of choice. Reducing our consumption of gasoline by driving smaller, more fuel-efficient cars, as occurred in the 1980s in response to high gasoline prices, would reduce future demand for gasoline and put downward pressure on prices. For example, the National Academy of Sciences recently reported that if fuel-efficiency standards for cars and light trucks had been raised by an additional 15 percent in 2000, consumption of gasoline in the year 2015 would be 10 billion gallons lower than it is expected to be under current standards. The Congress established fuel economy standards for passenger cars and light trucks in 1975 with the passage of the Energy Policy and Conservation Act. While these standards have led to increased fuel efficiency for cars and light trucks, in recent years the switch to light trucks has eroded gains in the overall fuel efficiency of the passenger fleet. Future reductions in demand for gasoline could be achieved either if fuel-efficiency standards for cars and light trucks are increased or if consumers switch to driving smaller or more fuel-efficient cars. The effect of future environmental regulations and international initiatives on oil and petroleum product prices is uncertain. On one hand, regulations that increase the cost or otherwise limit the building of refining and storage capacity may put pressure on prices in some localities. For example, the California Energy Commission told us that the lack of storage capacity for imported crude oil and petroleum products may be a severe problem in the future, potentially leading to supply disruptions and price volatility. 
Alternatively, international efforts to reduce the generation of greenhouse gas emissions could reduce the demand for crude oil and petroleum products through the development and use of more fuel-efficient processes and as cleaner, lower-emissions fuels are developed and used. Moreover, geopolitical factors will likely continue to have an impact on crude oil and petroleum product prices in the future. Because crude oil is a global commodity, the price we pay for it can be affected by any events that affect world demand or supply. For example, Venezuela, which produces around 2.6 million barrels of crude oil per day and supplies about 12 percent of total U.S. oil imports, is currently experiencing considerable social, economic, and political difficulties that have, in the past, affected oil production. In April 2002, the oil flow from Venezuela was stemmed during 3 consecutive days of general strikes, affecting oil production, refining, and exports. Further, instability in the Middle East, and particularly the Persian Gulf, has, in the past, caused major disruptions in oil supplies, such as occurred toward the end of the first Gulf War, when Kuwaiti oil wells were destroyed by Iraq. Finally, the value of the U.S. dollar on open currency markets could also affect crude oil prices in the future. For example, because crude oil is typically denominated in U.S. dollars, the payments that oil-producing countries receive for their oil are also denominated in U.S. dollars. As a result, a weak U.S. dollar decreases the value of the oil sold at a given price. Some analysts have recently reported in the popular press that this devaluation can influence long-term prices in two ways. First, oil-producing countries may wish to increase prices for their crude oil in order to maintain their purchasing power in the face of a weakening dollar. 
Second, because the dollars that these countries have accumulated, and that they use, in part, to finance additional oil exploration and extraction, are worth less, the costs these countries pay to purchase technology and equipment from other countries whose currencies have gained relative to the dollar will increase. These higher costs may deter further expansion of oil production, leading to even higher oil prices. In closing, clearly none of the options for meeting the nation’s energy needs is without tradeoffs. Current U.S. energy supplies remain highly dependent on fossil energy sources that are costly, imported, potentially harmful to the environment, or some combination of these three, while many renewable energy options are currently more costly than traditional options. Striking a balance between efforts to boost supplies from alternative energy sources and policies and technologies focused on improving the efficiency of petroleum-burning vehicles or on overall energy conservation presents challenges as well as opportunities. How we choose to meet the challenges and seize the opportunities will help determine our quality of life and economic prosperity in the future. What is true for the nation as a whole is even more dramatically so in California. California is one of the most populous and steadily growing states in the nation, and its need for gasoline, as well as other energy sources, will grow. However, California’s unique problems with respect to developing the right amount and type of infrastructure necessary to ensure a sufficient supply of gasoline, other petroleum products, or alternative fuels must be resolved, or viable alternatives developed, if California is to continue to enjoy the prosperity and high quality of life it is known for. 
We are currently studying gasoline prices in particular, and the petroleum industry more generally, including a primer on gasoline prices, a forthcoming report on special gasoline blends, an analysis of the viability of the Strategic Petroleum Reserve, an evaluation of world oil reserves, and an assessment of U.S. contingency plans should oil imports from a major oil-producing country, such as Venezuela, be disrupted. With this body of work, we will continue to provide Congress and the American people the information needed to make informed decisions on energy that will have far-reaching effects on our economy and our way of life. Mr. Chairman, this completes my prepared statement. I would be happy to respond to any questions you or the other Members of the Subcommittee may have at this time. For further information about this testimony, please contact me at (202) 512-3841 (or at [email protected]). Godwin Agbara, Nancy Crothers, Randy Jones, Mary Denigan-Macauley, Samantha Gross, Mark Metcalfe, Michelle Munn, Melissa Arzaga Roye, and Frank Rusco made key contributions to this testimony.

[Figures: U.S. Retail Price of Gasoline (not adjusted for inflation); U.S. Gasoline Consumption, 1970-2004; Refining Capacity and Number of Refineries, 1970-2004]

This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Gasoline prices have increased dramatically in recent weeks, and California currently has the highest gasoline prices in the nation. Consequently, consumers are expected to spend significantly more on gasoline this year than last. Specifically, EIA recently projected that, because of higher expected gasoline prices, the average American household will spend about $350 more on gasoline in 2005 than it did in 2004. Understandably, the public and the press have focused on these higher gasoline prices, and some have questioned why this is happening. Moreover, people are concerned about the future, with some analysts projecting prices of crude oil--the primary raw material from which gasoline is produced--to remain at current high levels or even increase. Other analysts expect prices to fall as new oil supplies are developed and as consumers adjust to the current high prices and adopt more energy-efficient practices. This testimony, as requested, addresses factors that help explain today's high gasoline prices in the nation as a whole and specifically in California. In addition, it addresses potential trends that may affect future prices of crude oil and gasoline. Crude oil prices and gasoline prices are linked because gasoline is derived from the refining of crude oil. As a result, crude oil prices and gasoline prices generally follow a similar, albeit not identical, pattern over time. For example, from January 2004 to the present (April 25, 2005), the price of West Texas Intermediate crude oil rose by almost $20 per barrel, an increase of almost 60 percent, while over the same period average gasoline prices rose nationally from $1.49 to $2.20 per gallon, an increase of 48 percent. 
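The percentage changes in the summary can be reproduced directly from the quoted prices; below is a minimal check using only figures stated in the text. The implied crude starting price is an inference from the quoted change and percent, not a number given in the text.

```python
# Check the price changes quoted above (January 2004 to April 25, 2005).
gas_start, gas_end = 1.49, 2.20  # national average gasoline, $/gallon
gas_pct = (gas_end / gas_start - 1) * 100
print(f"Gasoline: {gas_pct:.0f}% increase")  # matches the 48 percent cited

# The text gives crude's change (~$20/barrel) and percent change (~60%);
# together these imply a starting WTI price of roughly $33 per barrel.
crude_change, crude_pct = 20.0, 60.0
implied_start = crude_change / (crude_pct / 100.0)
print(f"Implied WTI starting price: about ${implied_start:.0f} per barrel")
```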
Explanations for this large increase in crude oil and gasoline prices include rapid growth of world demand for crude oil and petroleum products, instability in the Persian Gulf region, and actions by the Organization of Petroleum Exporting Countries (OPEC) to restrict the production of crude oil and thereby increase its price on the world market. In addition to the cost of crude oil, gasoline prices are influenced by a variety of other factors, including refining capacity constraints, low inventories, unexpected refinery or pipeline outages, environmental and other regulations, and mergers and market power in the oil industry. Gasoline prices in California, and in other West Coast states, have consistently been among the highest in the nation, and recent experience is no different. For the last week in April, the price of regular grade gasoline in California was $2.63 per gallon, about 43 cents above the national average. Explanations for California's higher than average gasoline prices include (1) California's unique gasoline blend, which is cleaner burning and more expensive to produce than any of the other commonly used gasoline blends; (2) a tight balance between supply and demand on the West Coast, and the long distance to any viable sources of replacement gasoline in the event of local supply disruptions; and (3) California's higher level of gasoline taxes--California currently taxes gasoline at 30 cents per gallon more than the state with the lowest taxes, Alaska. Some sources have also attributed high gasoline prices, in part, to the fact that California's refining sector is more concentrated in the hands of fewer companies than other refining areas, such as the Gulf Coast. Future gasoline prices will, in large part, be determined by the supply of and demand for crude oil and its price on the world market. World crude oil demand is projected to rise, so new sources will have to be developed or prices will rise. 
Technological innovations that reduce the cost of finding or extracting crude oil could reduce prices, other things remaining constant. Greater conservation or improvements in energy-efficient technologies could also mitigate rising demand and reduce upward pressure on prices. In addition, alternative fuel sources may become more economical, thereby supplanting some of the demand for crude oil and gasoline in the future. America faces daunting challenges in meeting future energy demands, and policy makers must choose wisely to ensure that the country can meet these demands while balancing environmental and quality of life concerns.
Individuals seeking to establish a regional center under the EB-5 Program must submit an initial application and supporting documentation, as well as an update for each fiscal year (or as otherwise requested by USCIS) showing that the regional center continues to meet the program requirements to maintain its regional center designation. Prospective regional center sponsors apply to the program by submitting Form I-924, Application for Regional Center under the Immigrant Investor Pilot Program. On this form, applicants are to provide a proposal, supported by economically or statistically valid forecasting tools, that describes, among other things, (1) how the regional center focuses on a geographic area of the United States; (2) how it will promote economic growth through increased export sales and improved regional productivity, job creation, and increased domestic capital investment; and (3) how investors will create jobs directly or indirectly. Applicants must also include a detailed statement regarding the amount and source of capital committed to the regional center, as well as a description of the promotional efforts they have taken and planned. Once a regional center has been approved to participate in the program, a designated representative of the regional center must file a Form I-924A, Supplement to Form I-924, for each fiscal year to provide USCIS with updated information demonstrating that the regional center continues to promote economic growth, improved regional productivity, job creation, or increased domestic capital investment in the approved geographic area. USCIS is to issue a notice of intent to terminate the participation of a regional center if it fails to submit the required information or upon a determination that the regional center no longer serves the purpose of promoting economic growth. As of July 2015, USCIS had approved approximately 689 regional centers spread across 49 states, the District of Columbia, and 4 U.S. 
territories; and USCIS had terminated the participation of 34 regional centers for not filing a Form I-924A or not promoting economic growth. Prospective immigrant investors seeking to participate in the EB-5 Program must complete three forms and provide supporting documentation that USCIS or State officials, as appropriate, assess to ensure that they have met (1) the terms of participation for the program, (2) conditions for lawful admission for permanent residence on a conditional basis, either through adjustment of status if already in the United States under other lawful immigration status or through the immigrant visa process if abroad, and (3) requirements of the program to have lawful permanent resident conditions removed. (See fig. 1.) USCIS has identified fraud and national security risks in the EB-5 Program in various assessments it conducted over time and in collaboration with its interagency partners. For example, in 2012, USCIS met with its interagency partners and National Security Staff to assess fraud and national security risks in the EB-5 Program. An internal memo discussing this effort also highlighted steps that USCIS was undertaking to mitigate fraud risks to the program, such as improving collaboration with law enforcement agencies such as SEC and FBI. In response to this assessment, later in 2012, USCIS worked with FBI and the Department of the Treasury’s Financial Crimes Enforcement Network (FinCEN), among others, to better understand the scope of EB-5 Program fraud risks and to assess the benefits of incorporating enhanced security screenings to improve its vetting of EB-5 Program petitioners. FDNS officials told us that one key determination of the study was the need to provide dedicated fraud personnel to the EB-5 Program, which, as discussed later, USCIS implemented. 
Most recently, in early 2015, DHS’s Office of Intelligence and Analysis prepared a classified report updating the program’s 2012 risk assessment, in response to congressional and USCIS requests, which assessed the fraud risks to the EB-5 Program. In addition to conducting these risk assessments, USCIS officials told us that they identify potential fraud stemming from the EB-5 Program through regular oversight work, such as producing reports on investors’ sources of funds. Additionally, law enforcement agencies such as ICE HSI, SEC, and FBI may also uncover fraud through their own investigative efforts and, as appropriate, will share this information with USCIS. FDNS officials noted that fraud risks and schemes in the EB-5 Program were constantly evolving and that the 2015 update of the risk assessment helped them better understand the nature and scope of fraud risks to the program. Further, FDNS officials stated that the office constantly identifies new fraud schemes and that they must work to stay on top of emerging issues. SEC training materials for EB-5 Program staff on securities fraud also stated that fraud scams are creative and constantly changing, and may make use of new distribution channels such as social media. Moreover, as noted previously, the program has grown substantially over time—the total number of EB-5 visas issued increased from almost 3,000 in fiscal year 2011 to over 9,000 in fiscal year 2014, according to State data; this growth creates additional opportunities for fraud. Although the risk assessments conducted by USCIS and other agencies have helped provide information to USCIS to better understand and manage risks to the EB-5 Program, these assessments were onetime exercises, and USCIS does not have documented plans to conduct regular future risk assessments of the program because, according to USCIS officials, the agency would perform them on an “as needed” basis. 
Standards for Internal Control in the Federal Government provides guidance on the importance of identifying and analyzing risks, and using that information to make decisions. These standards address various aspects of internal control that should be continuous, built-in components of organizational operations. One internal control standard, risk assessment, calls for identifying and analyzing risks that agencies face from internal and external sources and deciding what actions should be taken to manage these risks. The standards indicate that conditions governing risk continually change and mechanisms are required to ensure that risk information, such as vulnerabilities in the program, remains current and relevant. Such mechanisms could include periodic risk assessment updates. Moreover, our executive guide for helping agencies identify effective strategies to manage improper payments notes the importance of periodically updating risk assessments because of constant changes in governmental, economic, industry, regulatory, and operating conditions that can affect program risks in their programs. Information collected through periodic reviews, as well as daily operations, can inform the analysis and assessment of risk. Furthermore, DHS’s Risk Management Fundamentals states that DHS and its component agencies should use a risk-based approach when managing programs that includes, among other things, identifying potential risks, assessing and analyzing identified risks, and using risk information and analysis to inform decision making. Planned regular or updated future risk assessments could help better position USCIS to identify, evaluate, and address fraud risks given the potential for changing conditions. 
Unlawful source of petitioner funds. As one example, Fraud Detection and National Security (FDNS) Directorate officials told us about a case in which a petitioner did not report potential financial ties to a number of brothels in China, which would have raised questions about the legitimacy of the petitioner’s source of funds. FDNS’s fraud detection efforts ultimately identified this connection, and U.S. Citizenship and Immigration Services (USCIS) denied the petition. See 8 C.F.R. § 204.6(e), (f), (g)(1), (j); 8 C.F.R. § 216.6(c)(2). The Senate Judiciary Committee report accompanying the Immigration Act of 1990 states that “the committee intends that processing of an individual visa not continue under this section if it becomes known to the Government that the money invested was obtained by the alien through other than legal means (such as money received through the sale of illegal drugs).” S. Rep. No. 101-55, at 21. This committee report was cited as a basis for changing the definition of capital to exclude assets directly or indirectly acquired by unlawful means. See 56 Fed. Reg. at 60,902. Petitions may be referred by EB-5 Program adjudicators to FDNS for fraud concerns. In 2015, on the basis of detailed reviews by FDNS staff located in headquarters and overseas, FDNS determined that the sources of funds in many of these petitions contained a high risk for fraud. In addition, ICE HSI headquarters officials provided us with cases of immigrant investors using overseas preparers to submit counterfeit documentation to fraudulently show that funds were lawfully obtained, which can make determining the legitimacy of the source of funds challenging. Further, ICE HSI officials stated that they are concerned that overseas document preparers and recruiters may try to use increasingly sophisticated methods to circumvent program controls. USCIS officials said that IPO and FDNS did not have a means to verify self-reported immigrant financial information with many foreign banks. 
In addition, both USCIS and State officials noted that they did not have authority to verify banking information with many foreign countries. For example, State officials said that because the U.S. government lacks access to many foreign financial systems, there is no reliable method to verify the source of the funds of petitioners. Legitimacy of investment entity. The amount of investment required to participate in the EB-5 Program, coupled with the fact that EB-5 investors are making an investment in order to obtain an immigration benefit, can create fraud risks tied to unscrupulous regional center operators and intermediaries. According to SEC officials, they have identified instances of fraudulent investment schemes, including securities fraud, related to EB-5 investments. From January 2013 through January 2015, SEC officials reported receiving over 100 tips, complaints, and referrals related to possible securities fraud violations and the EB-5 Program. Just over half of these tips, complaints, and referrals resulted in further investigation by SEC staff or were referred to other state, local, or federal law enforcement agencies for further review. According to an SEC official, as of July 2015, SEC has initiated four civil enforcement actions alleging securities law violations by EB-5 Program participants. Moreover, according to FDNS documentation, as of May 2015, over half (35) of the 59 open investigations tracked by the program primarily involved securities fraud issues. In addition, SEC officials noted that immigrant investors may be vulnerable to fraud schemes because they may be primarily focused on obtaining their visas. SEC officials noted that, anecdotally, immigrant investors often accepted lower rates of return on their investment relative to other non-EB-5 Program investors in the same project as well as non-EB-5 Program investment opportunities. 
A 2015 academic study reported that EB-5 Program loans bear a lower overall interest rate than conventional loans because the immigrant investors are motivated by the visa rather than the maximization of financial returns. SEC officials said that investors sometimes did not exercise due diligence about their investment decisions, thus increasing the likelihood that immigrant investors could be taken advantage of by unscrupulous regional centers through fraud schemes or by being steered toward poor investments. Moreover, SEC officials told us that the U.S. government is limited in its ability to investigate foreign-based sales and marketing practices of EB-5 Program investment opportunities and that unrealistic or patently false promises are sometimes made to investors. For example, SEC cases have uncovered incidents of regional center principals defrauding prospective immigrant investors by misrepresenting the business investment. Additionally, USCIS and ICE HSI officials all reported that it can be difficult to verify whether funds are being invested in projects and commercial enterprises as reported in immigrant investor petitions and regional center applications, and that immigrant investors may also be involved in schemes to fraudulently portray job creation or economic activity. For example, ICE HSI officials reported on a 2014 investigation related to a business enterprise that did not provide employees any work and told the employees to sit in an office during business hours. In another example cited by ICE HSI, in 2013, an alleged future EB-5 Program hotel project site was actually a vacant lot; the owner of the location was not aware of any plans to build a hotel there. Appearance of favoritism and special access.
The DHS OIG reported in March 2015 that a previous USCIS director had created an appearance of favoritism by providing certain petitioners and stakeholders with special access to DHS leadership and preferential treatment for their EB-5 Program applications or petitions. The OIG report also stated that according to USCIS whistleblower allegations, which the OIG corroborated in some cases, the former director created special processes and revised existing policies in the EB-5 Program to accommodate specific parties. According to the OIG, if not for the intervention of the then director, the career staff at USCIS would have decided adjudication matters differently. According to the OIG report, some USCIS employees felt uncomfortable and pressured to comply with managers’ instructions that appeared to have come from the former director or those working directly for him. Not consistently following standard processes designed to identify potential fraud and other risks in adjudicating applications can increase the likelihood that those with criminal ties or those making fraudulent investments will go unnoticed. It may also create a control environment tolerant of not adhering to risk mitigation processes and could reduce trust and transparency in the overall adjudication process. Although the OIG report did not make specific recommendations, following its issuance, the DHS Secretary expressed further concerns about the program and asked Congress for help to strengthen the security and integrity of the program, stating that the EB-5 Program was frequently contacted by outsiders on behalf of those with an interest in the outcome of a particular EB-5 Program case. The DHS Secretary also announced the creation of a new protocol to help prevent the reality or perception of improper outside influence in the EB-5 Program. As of June 2015, USCIS officials had developed the protocol and anticipated that it would be fully implemented by August 2015.
Given these identified fraud risks, and the constantly evolving nature of risks to the program, planning and conducting regular fraud risk assessments of the EB-5 Program could better position USCIS to identify and evaluate emerging fraud risks to the program and address and mitigate these risks. USCIS has taken some steps to enhance its fraud risk management efforts, including creating an organizational structure conducive to fraud risk management, establishing a dedicated entity to design and oversee its fraud risk management activities, conducting fraud awareness training, and establishing collaborative relationships with external stakeholders, including law enforcement agencies. USCIS established an organizational structure to better address fraud risks. In 2013, USCIS restructured the organization of its EB-5 Program operations to help better detect fraud, moving EB-5 Program activities from its California Service Center office and centralizing these operations in USCIS headquarters’ new Immigrant Investor Program Office in Washington, D.C. As of June 2015, nearly all EB-5 adjudication operations are now colocated in Washington, D.C., with the exception of adjudication for Form I-485 applications from immigrant investors who are already in the United States under other lawful immigration status and who are applying to adjust their status to conditional permanent residency under the EB-5 visa category. USCIS officials indicated they plan to move adjudication of the Form I-485 applications from the California Service Center to the National Benefits Center in Lee’s Summit, Missouri, by the end of 2015. These officials stated that because USCIS is primarily paper-driven, colocation also allows for relatively more efficient handling and examination of files for fraud and other risks. In November 2013, USCIS also established a fraud specialist unit for the EB-5 Program within FDNS. 
FDNS officials said that they increased the number of fraud specialists and hired individuals with specialized skill sets in areas that they consider critical to fraud prevention, including economics, finance, immigration, and national security, as well as relevant language skills. As of May 2015, FDNS was in the process of hiring an additional 8 dedicated staff with specialized fraud expertise to enhance its EB-5 Program fraud detection capabilities and oversight, which will bring the total authorized FDNS EB-5 staff from 13 to 21. USCIS established fraud awareness training. GAO’s fraud control framework states that providing training on fraud awareness and potential fraud schemes to all key government staff is important in stopping fraud. FDNS’s training of its employees includes specialized fraud training. For instance, as of May 2015, FDNS had sent 8 of its 12 staff to Federal Law Enforcement Training Centers for specific training on detecting money laundering following an introductory course provided in headquarters in May 2014. FDNS has also developed an “EB-5 University” to provide staff with monthly presentations on specific fraud-related topics believed to be immediately relevant to adjudication of EB-5 Program petitions and applications. USCIS held six sessions from August 2014 through January 2015, each of which addressed a different issue, including an overview of FinCEN and the use of external agency data for investigating potential fraud. USCIS took steps to improve law enforcement collaboration. USCIS took steps to improve its level of coordination related to EB-5 fraud risk with SEC, ICE HSI, and FBI. USCIS does not generally conduct enforcement actions and therefore coordinates with, and also makes referrals to, law enforcement when it detects potential fraud, criminal activity, or national security threats. 
According to SEC, ICE HSI, FBI, and USCIS officials, USCIS has increased its level of coordination with law enforcement agencies to cross-train staff with additional expertise. For example, in September 2014 USCIS held an interagency symposium to encourage collaboration among the government partners that have a stake in the EB-5 Program. These officials also said that USCIS has established more reliable avenues of communication among the agencies, which has led to increased communication and collaboration on referrals, investigations, and enforcement actions that can be taken when potential threats and fraud are detected in the EB-5 Program. As of May 2015, USCIS was also finalizing a memorandum of understanding with the Department of the Treasury’s FinCEN to improve USCIS’s ease of access to information related to financial fraud and related criminal activity. Moreover, since consolidating operations in Washington, D.C., USCIS officials stated that they have expanded the scope of their background checks to include a greater number of individuals associated with EB-5 investments and have increased the number of databases used to examine individuals considered high risk. These officials said that they are currently working with stakeholders to further enhance and automate checks across law enforcement databases. USCIS faces significant challenges in its efforts to detect and mitigate fraud risks. Specifically, USCIS’s information systems and processes limit its ability to collect and use data on the EB-5 Program to identify fraud related to individual investors or investments or to determine any fraud trends across the program. While improvements to USCIS information systems are delayed, USCIS has taken alternative steps to gather information to mitigate fraud risk, such as expanding its site visits program to include random checks of the operation of EB-5 Program projects. 
However, opportunities remain to expand information collection through interviews with immigrant investors and expanded EB-5 Program petition and application forms. Limitations in electronic data on EB-5 Program regional center applicants and immigrant investors. USCIS relies heavily on paper-based documentation. While USCIS contractors and employees are to enter certain information from these paper documents into various electronic databases, these databases have limitations that reduce their usefulness for conducting fraud-mitigating activities. For example, information that could be useful in identifying program participants linked to potential fraud is not required to be entered into USCIS’s database, such as the applicant’s name, address, and date of birth on the Form I-924 used to apply for regional center participation in the EB-5 Program. Moreover, FDNS officials told us that some data fields are also not standardized, a fact that presents significant barriers to conducting basic fraud-related searches. For example, the “geographic location” field, which USCIS personnel use to record where a regional center intends to operate, variously contains counties, parishes, cities, states, ZIP codes, census tracts, and other abbreviations. USCIS’s rules guiding data entry leave many form fields “optional” in USCIS data systems because, according to USCIS officials, the adjudication is completed from the paper application forms, so USCIS considers entering these data unnecessary. However, including such information in USCIS databases could better position USCIS to use information on investors to assess whether any potential fraud may exist with individual investors or across the program and initiate appropriate mitigating actions.
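The barrier FDNS describes can be illustrated with a toy normalization: free-text location entries defeat simple matching unless mapped to a standardized key. This is only a sketch under stated assumptions; the entries and the canonical mapping below are hypothetical, and USCIS's actual data model is not described in this report.

```python
# Hypothetical free-text "geographic location" entries of the kind FDNS describes:
raw_locations = ["Los Angeles, CA", "los angeles", "LA County", "90012"]

# An illustrative standardized lookup mapping each variant to one canonical key;
# with such a key, "find all regional centers in a county" becomes a simple filter.
CANONICAL = {
    "los angeles, ca": "CA-Los Angeles County",
    "los angeles": "CA-Los Angeles County",
    "la county": "CA-Los Angeles County",
    "90012": "CA-Los Angeles County",
}

def normalize(entry: str) -> str:
    """Map a free-text location entry to a canonical key, or flag it as unmapped."""
    return CANONICAL.get(entry.strip().lower(), "UNMAPPED")

print({normalize(e) for e in raw_locations})  # {'CA-Los Angeles County'}
```

Without such a canonical key, the four variants above would count as four different places in any basic fraud-related search.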
For example, including in USCIS databases information on regional center principals and other Regional Center Program participants that is not consistently recorded in those databases, such as name and date of birth, could help USCIS better identify specific individuals who may be targeted for or are under investigation. Further, more standardized information in USCIS databases, such as for the geographic locations of regional centers, could help the agency better identify and assess any potential regional center fraud trends, for example, within and across geographic areas. USCIS officials stated that the agency will be able to collect and maintain more readily available data on EB-5 Program petitioners and applicants through the deployment of electronic forms in its new system, the Electronic Immigration System (USCIS ELIS). USCIS officials told us in May 2015 that they expect USCIS ELIS capabilities for the EB-5 Program to become functional in 2017. However, USCIS has faced long-standing challenges in implementing USCIS ELIS, a fact that raises questions about its eventual deployment and thus the extent to which it will position USCIS to collect and maintain more readily available data. As we reported in May 2015, USCIS ELIS is nearly 4 years delayed and program costs increased by over $1 billion. In March 2012, USCIS began to significantly change its acquisition strategy to address various technical challenges with the system, and these changes have significantly delayed the program’s planned schedule. Changes made to the program’s acquisition strategy were intended to help mitigate past technical and programmatic challenges; however, at the time of our review, the plans had not yet been approved and USCIS was operating without a current and approved acquisition program baseline. USCIS subsequently approved the plans for an acquisition program re-baseline in May 2015. 
However, as we reported in May 2015, USCIS’s ability to effectively monitor USCIS ELIS program performance and make informed decisions about its implementation has been limited because department-level governance and oversight bodies were not using reliable program information to inform their program evaluations. While USCIS ELIS is under development, other actions could help USCIS mitigate fraud, as discussed below. FDNS’s project site visits are limited in number and scope, but FDNS has taken steps to expand them. FDNS presently conducts EB-5 Program site visits when IPO staff have identified a material concern, such as indicators that a project is behind schedule or nonexistent, that cannot be verified through other means, such as database searches or requests for evidence from the petitioner or applicant. FDNS officials told us that during a site visit, they typically look for evidence to corroborate petitioner and applicant information, such as loan documentation and invoices showing that a business project’s management staff’s use of investment funds is consistent with the approved business plan. USCIS, SEC, and ICE HSI officials and members of the national industry association representing regional centers said that additional site visits could enhance program integrity. In one example, USCIS officials stated that an EB-5 Program site visit was conducted because three stand-alone businesses claimed they were operating at the same address on their EB-5 petition materials. The businesses had placards on the door, but the owner of the property did not know the petitioners were using the space to run businesses. As a result, USCIS rejected these EB-5 Program petitions.
GAO’s fraud control framework states that inspections and physical validations are important tools to help mitigate fraud. Further, according to SEC and ICE HSI officials, even relatively simple site examinations, limited to a physical visit of the investment site, may catch indicators of fraud risk when the site is obviously unsuitable for the stated business purpose or when the petition or application includes falsified information. According to these officials, more comprehensive site examinations are staff intensive but sometimes necessary for detecting fraud. ICE HSI officials said that this includes cases when a business has not invested in physical property or is inactive even though the EB-5 documents show that spending is taking place. These more comprehensive examinations include gathering sources of information related to the project site such as mortgage documents and local city records. Recognizing the potential benefits of site visits, USCIS plans to expand the EB-5 Program site visits, which could enhance fraud detection and deterrence. FDNS officials stated that they would like to conduct additional scrutiny of cases based on indications of fraud risk, which may include site visits; however, because of the EB-5 Program data limitations described above, FDNS has been unable to develop risk indicators and therefore cannot conduct risk-targeted site visits. However, officials plan to pilot random site visits, which may also help to identify and deter fraud. According to FDNS officials, USCIS approved their request for EB-5 Program random site visits in 2015, but they were not granted the staff positions required to administer these site visits. As of May 2015, FDNS had received authorization to hire 8 additional EB-5 Program staff, a level that FDNS officials stated is sufficient to begin administering a random site visit program. FDNS requested an expanded site visit budget for fiscal year 2016, which is now pending approval.
FDNS officials stated that if the request is approved, a pilot random site visit program will begin sometime in fiscal year 2016. While improvements to USCIS information systems are delayed, piloting a random site visit program is a step that could provide USCIS valuable information in its efforts to mitigate fraud. USCIS does not interview immigrant investors seeking removal of permanent residency conditions. USCIS is statutorily required to conduct interviews of immigrant investors within 90 days after they submit the Form I-829 petition to remove conditions on their permanent residency. However, USCIS also has the statutory authority to waive the requirement for such interviews. As of April 2015, USCIS officials stated that USCIS IPO has not conducted an interview at the I-829 stage. Conducting interviews at this stage to gather additional corroborating or contextual information could help establish whether an immigrant investor is a victim of or complicit in fraud—a concern shared by both ICE HSI and SEC officials, who noted that gathering additional information and context about individual investors could help to inform investigative work. For example, interviews could present an opportunity to gather additional information on the extent to which the initial investment proposal offered to potential immigrant investors differed from the actual investments made and interest returned on investments. Further, these interviews could gather additional information from immigrant investors in cases where their associated regional center or commercial enterprise is suspected of fraud, such as whether investors were asked to recruit other investors as a condition of receiving a return on their investment. Thus, USCIS’s use of its authority to conduct interviews under the program could help collect information that would otherwise be difficult to obtain from investors. 
USCIS officials agreed that conducting interviews at this stage could be a source of relevant information and said they anticipate conducting these interviews in the near future. However, USCIS officials explained that they have not developed plans or a strategy for conducting interviews at this stage primarily because IPO is relatively new and began adjudicating I-829 petitions in September 2014. These officials added that IPO is in the process of determining whether or not to schedule an interview with a current immigrant investor but does not have a general strategy for conducting these interviews. While we recognize the establishment of IPO is relatively new, developing a strategy for conducting interviews on investors at the I-829 stage could, for example, help corroborate information those investors originally submitted to demonstrate that the investors have met program requirements before having their conditions for lawful permanent residency removed. Given that IPO is relatively new, this strategy could include an approach to focus on those investors at the I-829 stage who may be at higher risk for fraud. USCIS does not collect certain applicant information that could help mitigate fraud. In fiscal year 2011, USCIS expanded reporting requirements to gather information about ongoing regional center activities such as information on the active projects managed by each regional center. According to USCIS and SEC officials, this information has helped identify potential incidents of fraud. However, USCIS does not require information on the Form I-924 about the businesses supported by the regional center and program investments coordinated by the regional center, such as the names of principals or key officers associated with the business, or information on advisers to investors such as foreign brokers, marketers, attorneys, and other advisers receiving fees from investors. 
According to USCIS officials, USCIS is drafting revised Forms I-924 and I-924A that will seek to address many of these concerns. However, as these revisions have not been completed, it is too early to tell the extent to which they will position USCIS to collect additional applicant information. SEC and FDNS stakeholders with whom we spoke emphasized that collecting additional information could be useful for USCIS to combat fraud. For example, according to these officials, the absence of information about businesses supported by regional centers limits USCIS’s ability to identify potential fraud such as misrepresentation of a new commercial enterprise. USCIS officials agreed that some additional information collection would enhance program integrity but have not done so because the process to add questions to application forms to capture information requires USCIS to document the rationale for such changes by directly connecting new questions to statutory eligibility criteria, and USCIS has dedicated its regulatory group to other priorities pending potential new legislation or expiration of the Regional Center Program in September 2015. We recognize these competing priorities currently exist; while these priorities are being addressed by USCIS’s regulatory group, the agency could also develop a strategy for identifying and collecting additional information on its petition and application forms to help mitigate fraud risks to the program, such as information on the businesses supported by regional centers. GAO’s fraud control framework states that fraud prevention can be achieved by requiring registrants to provide information that is sufficient to provide reasonable assurance against fraud risks. Further, Standards for Internal Control in the Federal Government states that identified program risks, including fraud risk, should guide management’s planning and development of internal controls. 
Given that information system improvements with the potential to expand USCIS’s fraud mitigation efforts will not take effect until 2017 at the earliest and that gaps exist in USCIS’s other information collection efforts, developing a strategy to capitalize on existing opportunities for collecting additional information would better position USCIS to identify and mitigate potential fraud. USCIS has recognized that the connection between national security concerns and specific EB-5 Program eligibility criteria may, at times, be tenuous. Specifically, USCIS has determined that it cannot terminate participation of regional centers, or deny immigrant investor petitions or regional center applications, solely on the basis of national security concerns unless such concerns lead an adjudicator to determine that the petitioner or applicant does not meet one or more EB-5 Program eligibility criteria by a preponderance of evidence. The preponderance of evidence standard requires petitioners or applicants to establish eligibility by demonstrating that it is more likely than not that they meet all EB-5 Program requirements. USCIS’s authority with respect to fraud or misrepresentation identified by an adjudicator in the petition or application process is clearer than its authority with respect to national security concerns, in that petitioners or applicants must show that their claims for EB-5 Program eligibility are more likely true than not (i.e., probably true), and potential fraud would generally bear on the truthfulness of petitioner or applicant claims. USCIS officials noted that USCIS has authority to deny a Form I-485 application based on fraud, misrepresentation, and national security concerns, as these constitute grounds of inadmissibility that would render an immigrant investor ineligible for adjustment to conditional permanent residency. According to FDNS officials, some regional centers continue to operate despite concerns of fraud or associations with criminal activity.
For example, FDNS officials cited a case involving a regional center principal against whom a federal grand jury returned a multiple-count wire fraud indictment, and who was, at the time, in custody in a foreign country. According to FDNS officials, USCIS terminated this regional center because the principal failed to file the Form I-924A application supplement as required by regulation rather than, for example, on grounds related to the charges upon which the regional center principal was indicted. USCIS officials noted that if fraud or national security concerns, either alone or in combination with other factors, lead an adjudicator to determine, based on a preponderance of the evidence, that a regional center is failing to fulfill the statutory requirement of promoting economic growth, adjudicators can under those circumstances terminate the regional center or deny an application for regional center designation. However, USCIS believes that unless a connection can be made that the regional center is failing to promote economic growth, it does not have the authority to terminate a regional center. According to USCIS officials, the lack of authority to terminate a regional center or deny an immigrant investor petition or regional center application based solely on national security or fraud concerns is a major challenge and requires a significant amount of time to link findings to the statutory criteria. In addition to S. 1501, in January 2015 the American Entrepreneurship and Investment Act of 2015, H.R. 616, was introduced in the House of Representatives, and would provide a permanent authorization of the Regional Center Program.
The bill would also explicitly require that fraud, misrepresentation, criminal misuse, and threats to public safety or national security be considered in establishing eligibility criteria for regional centers, and would provide that the Secretary of DHS shall deny or revoke approval of a regional center business plan application with any particular investment or business arrangement that, in his or her unreviewable discretion, presents a public safety or national security threat or significant risk of criminal misuse, fraud, or abuse. USCIS has taken action to increase its capacity to verify job creation in response to past GAO and DHS OIG reports that found that USCIS did not have staff with the expertise to verify job creation estimates and that the agency’s methodologies for verifying such estimates were not rigorous. Specifically, in April 2005, GAO reported that USCIS adjudicators lacked the expertise to adjudicate EB-5 Program petitions and were not sufficiently trained to properly adjudicate them because of the complex business and tax issues involved. More recently, in December 2013, the DHS OIG reported that USCIS lacked meaningful economic expertise to conduct independent and thorough reviews of economic models used by investors to estimate indirect job creation for regional center projects, and recommended that USCIS coordinate with other federal agencies to provide expertise in the adjudication process. USCIS took action over time to increase the size and expertise of its workforce, provide clarifying guidance and training, and revise its process for assigning petitions and applications for adjudication.
For example, in 2014, USCIS began increasing its staffing from 9 adjudicators to 58 adjudication officers and 22 economists as of June 2015, and in May 2013, issued a policy memorandum clarifying existing guidance to help ensure consistency in the adjudication of petitions and to provide greater transparency for the EB-5 Program stakeholder community, according to IPO officials. In addition, USCIS improved its training curriculum to better ensure consistency and compliance with applicable statutes, regulations, and agency policy, including an update in 2014 of the new employee EB-5 training program and the establishment of an ongoing training focusing on recurring issues and petition cases that are novel in nature. IPO program managers stated that USCIS revised its application assignment process in 2015 to help improve the consistency and efficiency of its adjudication of large-scale, multi-investor regional center projects. Under the new approach, the same economist is assigned to review the business plan, economic analysis, and organizational documents for each project involving multiple regional center investors. We interviewed 8 EB-5 Program economists who reported that they were satisfied with the guidance and that the training provided them with a high degree of confidence in adjudicating EB-5 petitions and applications. Further, IPO program managers reported that USCIS has provided its economists with access to data from the RIMS II economic model since fiscal year 2013 that increased their capacity to verify job creation estimates reported by immigrant investors for investments in regional center projects. IPO program managers estimated that as of fiscal year 2015, about 95 percent of EB-5 Program petitioners used economic models to estimate job creation, with about 90 percent of those petitioners using RIMS II.
The RIMS II model is widely used across the public and private sectors and is considered to be valid to verify estimates of indirect and induced jobs reported for investments in regional center projects, according to USCIS and Commerce economists, as well as industry and academic experts. Indirect jobs include jobs that are not directly created by the new commercial enterprise but may result from increased employment in other businesses that supply goods and services to the regional center business, as well as induced jobs created from workers’ spending of increased earnings on consumer goods and services. Under the law establishing the Regional Center Program, regional center investors are permitted to meet the job creation requirement using reasonable methodologies to estimate the number of jobs created, including jobs estimated to have been created indirectly through revenues generated from increased exports, improved regional productivity, job creation, or increased domestic capital investment. Further, the EB-5 Program regulation permits regional center investors to estimate direct and indirect jobs for regional center projects using reasonable methodologies, including multiplier tables that are based on input-output economic models—coefficients that, when used in conjunction with inputs such as a specified investment amount, can estimate economic outputs, such as job creation. USCIS economists said that the use of the RIMS II multipliers in combination with other information, including the eligible project investment amount, the code that identifies the project industry, and project location, has provided them with the necessary capacity to better ensure investors meet program requirements for job creation.
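As a rough illustration of how a multiplier table is applied, an economist's check amounts to multiplying eligible project spending by an employment multiplier for the project's industry and region. This is a simplified sketch, not the actual RIMS II procedure; the multiplier values, industry names, and region labels below are invented for illustration and are not actual BEA coefficients.

```python
# Hypothetical final-demand employment multipliers: total jobs (direct,
# indirect, and induced) supported per $1 million of spending in an industry
# and region. These values are invented for illustration only.
MULTIPLIERS = {
    ("construction", "region-A"): 12.3,
    ("hotels", "region-A"): 15.1,
}

def estimated_jobs(eligible_spending_usd: float, industry: str, region: str) -> float:
    """Apply an employment multiplier to eligible project spending."""
    per_million = MULTIPLIERS[(industry, region)]
    return (eligible_spending_usd / 1_000_000) * per_million

# A hypothetical $10 million construction project in region-A:
print(round(estimated_jobs(10_000_000, "construction", "region-A"), 1))
```

The inputs mirror those the text says USCIS economists combine with the multipliers: the eligible investment amount, the industry code, and the project location. The same mechanism also shows the model's limitation noted later in the report: the output is a job count, with no information about where or when the indirect jobs occur.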
We conducted a technical review of articles and other documents on the model, and interviewed subject matter experts, including industry and academic researchers who published studies of the EB-5 Program structure, Commerce officials with the Bureau of Economic Analysis who administer the RIMS II model, and USCIS IPO officials who review the various economic models used by EB-5 investors. On the basis of our reviews and interviews, we determined that IPO’s use of RIMS II data is a reasonable methodology to verify job creation as permitted in law and program regulation. A targeted employment area is defined as a rural area or an area that has experienced unemployment of at least 150 percent of the national average rate. A rural area is defined as any area not within either a metropolitan statistical area (as defined by the Office of Management and Budget) or the outer boundary of any city or town having a population of 20,000 or more (based on the most recent decennial census of the United States). See 8 U.S.C. § 1153(b)(5)(B)(ii), (iii); 8 C.F.R. § 204.6(e), (j)(6)(ii). A technical limitation of input-output models as a whole is that they cannot predict when and where indirect jobs will be created. About 90 percent of immigrant investors pay a lower investment amount—$500,000 instead of $1 million—to participate in the EB-5 Program because they are claiming investment in a commercial enterprise that will create employment in a targeted employment area. The remaining 10 percent of immigrant investors pay twice that amount to participate in projects that are not limited to these locations. 
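The 150-percent test for a high-unemployment targeted employment area reduces to a single comparison; a minimal sketch (the example rates below are hypothetical):

```python
def qualifies_as_high_unemployment(area_rate: float, national_rate: float) -> bool:
    """An area passes the high-unemployment test if its unemployment rate is
    at least 150 percent of the national average rate
    (8 U.S.C. § 1153(b)(5)(B)(ii))."""
    return area_rate >= 1.5 * national_rate

# Example: with a hypothetical national rate of 6.0 percent, an area
# needs an unemployment rate of at least 9.0 percent to qualify.
print(qualifies_as_high_unemployment(9.2, 6.0))  # True
print(qualifies_as_high_unemployment(8.5, 6.0))  # False
```

The rural-area test is separate and turns on geography (outside a metropolitan statistical area and outside any city or town of 20,000 or more), not on an unemployment rate.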
The IPO Economics Division Chief said that USCIS has not identified a need to verify the creation of jobs in a targeted employment area because the law permits regional center investors to use reasonable methodologies such as input-output models that do not have this capacity, and because program regulation and policy address the issue by requiring that capital be invested in a job-creating enterprise that is principally doing business in a targeted employment area. He also said that given the relative ease of proving job creation through economic modeling compared with documentation requirements to prove creation of direct jobs, immigrant investors generally claim indirect jobs, rather than direct jobs, to qualify for the program. USCIS’s methodology for reporting EB-5 Program outcomes and economic benefits is not valid and reliable because it may overstate or understate results in certain instances, as it is based on the minimum program requirements for job creation and investment instead of the number of jobs and actual investment amounts investors report on EB-5 Program forms. To estimate job creation, USCIS multiplies the number of immigrant investors who have successfully completed the program with an approved Form I-829 by 10—the minimum job creation requirement per immigrant investor. To estimate overall investment in the economy, the agency multiplies the number of immigrant investors approved to participate in the program with an approved Form I-526 by $500,000—the minimum investment amount, assuming all investments were made for projects in a targeted employment area. Accordingly, USCIS reported that from program inception in fiscal year 1990 through fiscal year 2014, the EB-5 Program has created a minimum of 73,730 jobs and more than $11.2 billion in investments. 
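USCIS's minimum-requirement arithmetic can be written out directly. In the sketch below, the approval counts are back-solved from the totals USCIS reported, so treat them as implied rather than sourced figures:

```python
# Sketch of USCIS's reporting methodology: program outcomes are computed
# from the statutory minimums, not from the amounts investors actually
# report on Forms I-526 and I-829.
MIN_JOBS_PER_INVESTOR = 10   # minimum job creation requirement per investor
MIN_INVESTMENT = 500_000     # minimum investment (targeted employment area), dollars

def minimum_jobs(approved_i829: int) -> int:
    """Jobs reported = approved Form I-829 petitions x 10."""
    return approved_i829 * MIN_JOBS_PER_INVESTOR

def minimum_investment(approved_i526: int) -> int:
    """Investment reported = approved Form I-526 petitions x $500,000."""
    return approved_i526 * MIN_INVESTMENT

# Approval counts implied by the reported FY1990-FY2014 totals of
# 73,730 jobs and more than $11.2 billion invested (back-solved, not sourced):
print(minimum_jobs(7_373))         # 73730
print(minimum_investment(22_400))  # 11200000000 ($11.2 billion)
```

Because every investor is counted at exactly 10 jobs and $500,000, the method undercounts investors who create more jobs or pay the $1 million amount, and overcounts when approved investors never complete the program.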
Our review and past GAO and DHS OIG audits of the program have pointed out the limitations of this methodology to report reliable program outcomes in that the data can be understated or overstated in certain instances. For example, USCIS officials stated that the majority of immigrant investors reported creating more than the 10-job minimum, and 10 percent of immigrant investors pay $1 million instead of $500,000 because they invest in projects outside of a targeted employment area. Estimating economic outcomes using the minimum program requirements in these circumstances would lead to an underestimate of the program’s benefits. For example, we reviewed one project with about 450 immigrant investors that created over 10,500 jobs, or about 23 jobs per immigrant investor, while USCIS counted only the 10-job minimum per immigrant investor (totaling 4,500), a difference of 6,000 jobs. Additionally, according to DHS’s 2013 Yearbook of Immigration Statistics, about 32 immigrant investors paid $1 million instead of $500,000 into the EB-5 Program in fiscal year 2013, a total difference of $16 million not counted by USCIS. Conversely, USCIS’s methodology may, in certain instances, overstate some economic benefits derived from the EB-5 Program. For example, the methodology assumes that all immigrant investors approved for the program will invest the required amount of funds, and that these funds will be fully spent on the project. According to our analysis of EB-5 Program data, there are fewer immigrant investors who successfully complete the program than were approved for program participation, and the actual amount invested and spent in these circumstances is unknown. For example, our analysis showed that approximately 26 percent of all EB-5 Program immigrant investors who entered the program from its inception year through fiscal year 2011 may not have completed the process to show funds spent and jobs created with an approved I-829 as of the fiscal year ending in 2014. 
USCIS collects more complete information on EB-5 Program forms, but does not track or analyze this information to more accurately report program outcomes. Specifically, immigrant investors are required to report (and USCIS staff are to verify) the amount of their initial investment on the Form I-526, and to report the number of new jobs created (or expected to be created within a reasonable time) by their investment on the Form I-829. However, USCIS officials said that they report EB-5 Program outcomes using minimum program requirements because these are the required economic benefits stated in law, and that they are not statutorily required to develop a more comprehensive assessment of overall program benefits. The Project Management Institute’s The Standard for Program Management states that programs need to establish monitoring and controlling activities to report on program performance. This includes collecting, measuring, and disseminating performance information so program management has the data necessary to report on the program’s state and identify areas in need of improvement. Additionally, GAO’s Standards for Internal Control in the Federal Government states that activities need to be established to monitor performance, managers need to compare actual performance against planned or expected results, and controls should aim at validating the propriety and integrity of performance measures. Further, transactions should be promptly recorded to maintain their relevance and value to management in controlling operations and making decisions throughout the entire process of an event from initiation through final classification. 
Tracking and reporting the investment and job creation data it collects on the Forms I-526 and I-829 would better position USCIS to more accurately assess and report on the EB-5 Program’s outcomes, in line with the program’s mission to bring new investment capital and jobs into the country, and to help Congress and others better evaluate the benefits of the program. Views differ on whether USCIS methodology, as defined in EB-5 Program regulations, should allow immigrant investors to claim all jobs created by projects with EB-5 and non-EB-5 investors. We and the DHS OIG have previously raised questions about this practice because immigrant investors are to create 10 jobs based on their investment in the new commercial enterprise, and therefore including non-EB-5 Program investments in the enterprise can inflate the job creation benefit of the immigrant investment. The IPO Economics Division Chief and IPO program managers said that while they do not have resources to verify this fact for each project, it is possible that a regional center project would not occur or be viable without EB-5 Program investment funds, which provide an alternative source of capital for projects that might not be able to attract or afford investments from other foreign or U.S. sources. In the final rule implementing section 121 of the Immigration Act of 1990, the legacy Immigration and Naturalization Service (INS) contemplated multiple-investor scenarios in promulgating EB-5 regulations and, on the basis of comments in response to the proposed rule, permitted the practice of allocating only to immigrant investors the jobs created as a result of the establishment of a new commercial enterprise by multiple investors, some of whom may not be seeking EB-5 visas. 
Additionally, according to the IPO Economics Division Chief, his analysis showed that projects in many industries could not generate the required number of jobs based on the minimum EB-5 investment alone, and otherwise would not be able to use and benefit from the EB-5 Program. Specifically, his analysis showed that about 160 industries, including manufacturing, are unable to create the required 10 jobs per investor based solely on the EB-5 Program minimum investment of $500,000. According to IPO officials, without the practice of allowing immigrant investors to claim jobs generated by investments from other sources, a higher investment amount would be required for investors to meet the job creation requirements in these industries and qualify for removal of their permanent residency conditions. GAO did not independently corroborate the outcomes of this analysis. Economic modeling for the project discussed earlier showed that 10,500 jobs were created by the total project spending. All 10,500 jobs were distributed on a pro rata basis such that each of the 450 investors was allocated 23 jobs. This represents a “job cushion” of approximately 130 percent over the USCIS-required 10 jobs per investor. According to an IPO economist we interviewed, most projects build in a job cushion to ensure that all investors meet the job creation requirements and to increase the likelihood that investors achieve approval for lawful permanent residency. USCIS has commissioned Commerce’s Economics and Statistics Administration (ESA) to conduct a study of the economic impact of the EB-5 Program. According to the IPO Economics Division Chief, USCIS undertook this action in response to a December 2013 DHS OIG recommendation that USCIS conduct a comprehensive review of the EB-5 Program to demonstrate how investor funds have stimulated the U.S. economy. 
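The pro rata allocation and job cushion in the 450-investor example reduce to a short calculation using the figures given above:

```python
# Worked arithmetic for the multi-investor project described above:
# 10,500 modeled jobs shared pro rata among 450 immigrant investors,
# against a USCIS requirement of 10 jobs per investor.
total_jobs = 10_500
investors = 450
required_per_investor = 10

allocated = total_jobs / investors  # pro rata jobs per investor
cushion = (allocated - required_per_investor) / required_per_investor

print(round(allocated, 1))   # 23.3 jobs per investor (reported as "23")
print(round(cushion * 100))  # 133 percent over the 10-job requirement
```

The exact figure is about 133 percent, consistent with the report's "approximately 130 percent" cushion.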
As of June 2015, USCIS and ESA had not yet finalized the methodology for the new study; however, ESA and USCIS approved a statement of work in November 2014 that outlines a preliminary methodology and study steps that would address some, but not all, shortcomings of prior studies of the overall EB-5 Program benefits. Past studies, for example, included small sample sizes that were not representative of the total population and may have overstated economic impact because of the use of national, instead of regional, multipliers in the analysis. ESA’s study is to assess the value of the EB-5 Program beginning at the EB-5 project level for all projects completed (or at least lasting 2 years) for fiscal years 2012 and 2013. According to Commerce officials, the study findings will include (1) the immigrant investor investments as well as the non-EB-5 investments used in each job creation estimate; (2) the number of jobs created as well as the value of the jobs from each project, citing the geographic area for which the job creation was claimed in the economic impact assessment; and (3) the likely household spending of immigrant investor families while living in the United States. Commerce officials indicated that for the study, all projects within a state will be added to derive a state total and then the state totals will be aggregated to determine a national total. ESA will review a majority of the economic impact assessments that led to the job creation estimates for each of the projects to determine whether the models used for estimating job creation were applied correctly. ESA also plans to use information submitted by immigrant investors on EB-5 Program forms and entered into the Intranet Computer Linked Application Information Management System (iCLAIMS) to more specifically and reliably report program benefits. 
USCIS has provided ESA with data from the iCLAIMS database, including information from forms immigrant investors and regional centers use to meet program participation requirements—Forms I-526, I-829, I-924, and I-924A—which, according to an ESA official, ESA is beginning to examine in greater detail. ESA will review DHS’s data collection and reporting system prior to using the iCLAIMS data to determine the value of the program. USCIS officials said that ESA plans to finalize the study methodology once it completes a review of the program data submitted by IPO, and to issue a final report in November 2015. Additionally, the level and mechanisms of the most prevalent forms of social support (the transfers made through Social Security payments and various forms of income assistance) in the United States are inversely related to income, and since incomes for an accredited investor are, at minimum, in the top 5 percent of incomes in the United States, it is unlikely that such investors receive any form of social assistance. USCIS officials said that for these reasons, the costs to gather the information may not justify the investment. The Office of Management and Budget (OMB) Circular A-94 Revised, Guidelines and Discount Rates for Benefit-Cost Analysis of Federal Programs, which applies to all analyses used to support government decisions to initiate, renew, or expand programs or projects that would result in a series of measurable benefits or costs extending for 3 or more years into the future, identifies actions agencies can take in cases where costs cannot be quantified when measuring the impact of a program. Specifically, OMB Circular A-94 provides that in analyses where not all benefits or costs can be assigned a monetary value, a comprehensive enumeration of the different types of benefits and costs can help identify the full range of program effects. 
For example, DHS costs to track and remove immigrant investors (and their families) from the United States who do not successfully complete the program, and costs to social programs such as Medicaid, Medicare, and Social Security may be associated with the program but difficult to quantify. Ensuring that the ESA study includes a discussion of costs that should be considered but cannot be quantified for the program would provide Congress and other stakeholders with more information on the overall value of the program. The EB-5 Program seeks to stimulate the economy by promoting job creation and encouraging capital investment by foreign investors in the United States. However, these features of the program that can provide economic benefit to the United States can also create unique fraud and national security risks that must be identified and addressed. Planning to conduct risk assessments on a more regular basis would better position USCIS to identify, evaluate, and address future and changing risks to the program. This may be of particular importance as USCIS is unable to comprehensively identify and address fraud trends across the program because of its reliance on paper-based documentation and because it faces certain limitations with using available data and with collecting additional data on EB-5 immigrant investors or investments. Developing a strategy to expand its data collection efforts, such as interviewing investors who apply to remove conditions on their permanent resident status and requesting additional information on applicant and petitioner forms, could better position USCIS to address these limitations. USCIS’s ability to apply a valid and reliable methodology for reporting EB- 5 Program outcomes and economic benefits is important for program accountability and to provide the public and Congress with more complete information to evaluate the program and make reauthorization decisions. 
Tracking and using more comprehensive information it collects on project investments and job creation on the Forms I-526 and I-829 submitted by immigrant investors and verified by USCIS would enable USCIS to more reliably report on EB-5 Program outcomes and economic benefits. Additionally, taking steps to ensure that the study it commissioned Commerce to conduct includes a discussion of the types of costs that should be considered, but could not be quantified by the study, would provide Congress and other stakeholders with more comprehensive information on the overall economic benefits of the program. To strengthen USCIS’s EB-5 Program fraud prevention, detection, and mitigation capabilities, and to more accurately and comprehensively assess and report program outcomes and the overall economic benefits of the program, we recommend that the Director of USCIS take the following four actions: plan and conduct regular future fraud risk assessments of the EB-5 Program; develop a strategy to expand information collection, including considering the increased use of interviews at the I-829 phase as well as requiring the additional reporting of information in applicant and petitioner forms; track and report data that immigrant investors report, and the agency verifies, on its program forms for total investments and jobs created through the EB-5 Program; and include a discussion of the types of, and reasons for, any relevant program costs excluded from the Commerce study of the EB-5 Program. We provided a draft of this report to Commerce, DHS, DOJ, SEC, and State for their review and comment. DHS provided written comments, which are reproduced in appendix I, and Commerce, DOJ, SEC, and State did not provide written comments. In its comments, DHS concurred with the four recommendations and described actions under way or planned to address them. Commerce and DHS provided technical comments, which we incorporated as appropriate. 
With regard to the first recommendation, that USCIS plan and conduct regular future fraud risk assessments of the EB-5 Program, DHS concurred, stating that the EB-5 Branch of USCIS’s FDNS will continue to conduct a minimum of one fraud, national security, or intelligence assessment on an aspect of the program annually, as it has done since 2012. DHS further requested that GAO consider this recommendation resolved and closed. While we believe that planning to continue conducting a minimum of one assessment on an aspect of the program annually is a positive step, to fully address the intent of our recommendation, USCIS needs to conduct at least one review, as planned. Thus, we continue to consider this recommendation open. With regard to the second recommendation, that USCIS develop a strategy to expand information collection, including considering the increased use of interviews at the I-829 phase as well as requiring the additional reporting of information in applicant and petitioner forms, DHS concurred and estimated that actions to develop such a strategy would be completed by September 30, 2016. Upon completion of the strategy, these actions should address the intent of the recommendation to strengthen USCIS’s ability to prevent, detect, and mitigate fraud in the EB-5 Program. With regard to the third recommendation, that USCIS track and report data that immigrant investors report, and the agency verifies on its program forms for total investments and jobs created through the EB-5 Program, DHS concurred and estimated that a plan to collect and aggregate additional data, including revisions to USCIS data systems and processes, would be completed by September 30, 2016. When USCIS implements this plan, this action should address the intent of the recommendation to more comprehensively assess and report program outcomes of the EB-5 Program. 
With regard to the fourth recommendation, that USCIS include a discussion of the types and reasons any relevant program costs were excluded from the Commerce study of the EB-5 Program, DHS concurred and said that USCIS IPO will recommend to Commerce that a description of potential costs not assessed as a part of the study be included when the study is published later this year. Should Commerce include such a discussion of relevant program costs in its study that USCIS estimates will be completed November 30, 2015, this action should address the intent of our recommendation to more comprehensively assess and report the overall economic benefits of the EB-5 Program. We are sending copies of this report to interested congressional committees; the Secretaries of Commerce, Homeland Security, and State; the Attorney General of the United States; as well as the U.S. Securities and Exchange Commission Chair. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact Rebecca Gambler at (202) 512-8777 or [email protected] or Seto Bagdoyan at (202) 512-6722 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix II. In addition to the contacts named above, Cindy Ayers, Joah Iannotta, David Alexander, Christopher Hayes, John Karikari, Andrew Kurtzman, Natalie Maddox, Jan Montgomery, Jon Najmi, Brynn Rovito, Edith Sohna, and Nick Weeks made key contributions to this report.
|
Congress created the EB-5 visa category to promote job creation by immigrant investors in exchange for visas providing lawful permanent residency. Participants are required to invest $1 million in a business that is to create at least 10 jobs--or $500,000 for businesses located in an area that is rural or has experienced unemployment of at least 150 percent of the national average rate. Upon meeting program requirements, immigrant investors are eligible for conditional status to live and work in the United States and can apply to remove the conditions for lawful permanent residency after 2 years. GAO was asked to review fraud risks and economic benefits for the EB-5 Program. This report examines USCIS efforts under the EB-5 Program to (1) work with interagency partners to assess fraud and other related risks, (2) address any identified fraud risks, and (3) increase its capacity to verify job creation and use a valid and reliable methodology to report economic benefits. GAO reviewed risk assessments and processes to address fraud risks, verify job creation, and report economic benefits. The Department of Homeland Security's (DHS) U.S. Citizenship and Immigration Services (USCIS) administers the Employment-Based Fifth Preference Immigrant Investor Program (EB-5 Program) and collaborated with its interagency partners to assess fraud and national security risks in the program in fiscal years 2012 and 2015. Unique fraud risks identified in the program included uncertainties in verifying that the funds invested were obtained lawfully and various investment-related schemes to defraud investors. These assessments were onetime efforts; however, USCIS officials noted that fraud risks in the EB-5 Program are constantly evolving, and they continually identify new fraud schemes. 
USCIS does not have documented plans to conduct regular future risk assessments, in accordance with fraud prevention practices, which could help inform efforts to identify and address evolving program risks. USCIS has taken steps to address the fraud risks it identified by enhancing its fraud risk management efforts, including establishing a dedicated entity to oversee these efforts. However, USCIS's information systems and processes limit its ability to collect and use data on EB-5 Program participants to address fraud risks in the program. For example, USCIS does not consistently enter some information it collects on participants in its information systems, such as name and date of birth, a fact that presents barriers to conducting basic electronic searches that could be analyzed for potential fraud, such as schemes to defraud investors. USCIS plans to collect and maintain more complete data in its new information system; however, GAO reported in May 2015 that the new system is nearly 4 years delayed. In the meantime, USCIS does not have a strategy for collecting additional information, including some information on businesses supported by EB-5 Program investments, that officials noted could help mitigate fraud, such as misrepresentation of new businesses. Given that information system improvements with the potential to expand USCIS's fraud mitigation efforts will not take effect until 2017 at the earliest and that gaps exist in USCIS's other information collection efforts, developing a strategy for collecting such information would better position USCIS to identify and mitigate potential fraud. USCIS increased its capacity to verify job creation by increasing the size and expertise of its workforce and providing clarifying guidance and training, among other actions. 
However, USCIS's methodology for reporting program outcomes and overall economic benefits is not valid and reliable because it may understate or overstate program benefits in certain instances as it is based on the minimum program requirements of 10 jobs and a $500,000 investment per investor instead of the number of jobs and investment amounts collected by USCIS on individual EB-5 Program forms. For example, USCIS reported 4,500 jobs for 450 investors on one project using its methodology instead of 10,500 jobs reported on EB-5 Program forms for that project. Further, investment amounts are not adjusted for investors who do not complete the program or invest $1 million instead of $500,000. USCIS officials said they are not statutorily required to develop a more comprehensive assessment. However, tracking and analyzing data on jobs and investments reported on program forms would better position USCIS to more reliably assess and report on the EB-5 Program economic benefits. GAO recommends that, among other things, USCIS conduct regular future risk assessments, develop a strategy to expand information collection, and analyze the data collected on program forms to reliably report on economic benefits. DHS concurred with GAO's four recommendations.
|
From the 1950s until the early 1970s, concerns about physician shortages prompted measures by the federal and state governments to increase physician supply. Federal and state governments supported the growth in the physician population by providing funds for constructing medical schools and increasing medical school class sizes, offering loans and scholarships to medical students, and paying hospitals through Medicare to subsidize residency training costs. Concurrent with these initiatives, the total physician supply and per-capita supply increased in the United States. By the 1980s and through the 1990s, however, concerns were raised about the adequacy of the physician supply. A 1981 study by the Graduate Medical Education National Advisory Committee (GMENAC) and a series of reports from 1992 to 1999 by the Council on Graduate Medical Education (COGME) forecast a national physician surplus. COGME based these estimates on its determination that the appropriate target for physician supply ranged from 145 to 185 physicians per 100,000 people. These estimates were predicated in part on the belief that managed care, with its emphasis on preventive care and reliance on primary care gatekeepers exercising tight control over access to specialists, would become a more typical health care delivery model. COGME and others have noted that managed care has not become as dominant as predicted. By 2000, some research concluded that physician supply increased even more than these studies predicted. Some researchers, however, questioned whether there was a national surplus of physicians. A report from the Institute of Medicine (IOM) describes why studies of the physician workforce vary. According to the IOM report, disagreement about the adequacy of physician supply arises because there is no single accepted approach to estimating physician supply or demand. 
Varying assumptions related to factors that may affect future supply or demand can lead to different conclusions about the adequacy of future physician supply. Projecting future physician supply depends on the approach used to count physicians, measure their productivity, and estimate the rate of entrance into and exit from the profession. Estimating demand for physicians’ services requires even more assumptions. Demand for physicians’ services can be estimated using current and projected service utilization patterns or by determining an ideal level of care to treat the projected incidence and prevalence of illness among the population. In addition, physician practice patterns, the use of new technology, the supply and role of nonphysician providers, and rates and levels of insurance coverage also affect estimates of the demand for and supply of physicians’ services. In spite of the difficulty of determining whether the overall number of physicians is indeed the right number, there is little disagreement that physicians have been located disproportionately in metropolitan areas relative to the U.S. population. Geographic disparities in physician supply have persisted even as the national physician supply has increased steadily. Economic factors and professional preferences have been offered to explain why physicians, and specialists in particular, locate in metropolitan areas. For example, physicians depend on the availability of hospitals, laboratories, and other technology, and metropolitan areas tend to have more of these facilities and equipment than nonmetropolitan areas. Small nonmetropolitan areas generally lack a large enough population or hospital resources to support a specialty practice, because specialists handle less prevalent but more complicated illnesses and require more specialized support facilities and technology. 
To influence overall physician supply and address perceived physician shortages in certain areas, several federal programs fund efforts to address these issues. The bulk of federal dollars to support physician education is provided through Medicare’s graduate medical education (GME) payments to teaching hospitals, which totaled an estimated $7.8 billion in 2000, the latest year for which data were available. These GME payments are distributed based on the number of physicians being trained and Medicare’s share of patient days in the hospital. Medicare also pays physicians a 10 percent bonus above the usual payment amount for services they provide to beneficiaries in health professional shortage areas (HPSAs). These Medicare Incentive Payments totaled $104 million in 2002. Programs intended to encourage health professionals to practice in underserved areas and to support the training and education of health professionals are administered by HRSA, within the Department of Health and Human Services (HHS). HRSA programs include the National Health Service Corps (NHSC) and grant and loan support programs for health professions education and training. Most of these programs address three objectives: improving the distribution of health professionals in underserved areas, increasing representation of minorities and individuals from disadvantaged backgrounds in health professions, and increasing the supply of health professionals. They also address other objectives such as improving the quality of education and training. In fiscal year 2001, spending for the NHSC was $70.8 million and spending for health professions education and training programs was $266 million. Funds for the NHSC and for health professions education and training programs support a range of health professions including medicine. See appendix II for more information about program spending or appropriations and program objectives. 
The number of physicians in the United States increased about 26 percent, from about 541,000 to about 681,000 from 1991 to 2001. Physician growth was twice that of national population growth during this period. As a result, the total number of physicians per 100,000 people in the United States climbed 12 percent, from 214 in 1991 to 239 in 2001. The number of generalists per 100,000 people increased at about the same rate as the number of specialists per 100,000 people. (See table 1.) The national physician workforce maintained approximately a one-third generalist to two-thirds specialist composition between 1991 and 2001. Growth in physician supply reduced the number of metropolitan and nonmetropolitan areas with fewer than 100 physicians per 100,000 people and increased the number of areas with greater than 300 physicians per 100,000 people. In 1991, 8 metropolitan areas and 27 statewide nonmetropolitan areas had fewer than 100 physicians per 100,000 people. By 2001, no metropolitan areas and 7 statewide nonmetropolitan areas had fewer than 100 physicians per 100,000 people. Twice as many metropolitan areas and statewide nonmetropolitan areas had at least 300 physicians per 100,000 people in 2001 as in 1991. (See figs. 1 and 2.) In 1991, the 25 percent of areas with the lowest physician supplies per 100,000 people had an average of 106 physicians per 100,000 people. By 2001, the 25 percent of areas with the lowest physician supplies per 100,000 people had an average of 132 physicians per 100,000 people. Similarly, the 25 percent of areas with the highest physician supplies per 100,000 people had an average of 319 physicians in 1991 and 362 physicians in 2001. See appendix III for information on physician supply by state metropolitan and nonmetropolitan areas in 1991 and 2001. 
All 48 statewide nonmetropolitan areas experienced an increase in the number of physicians per 100,000 people from 1991 to 2001, and 301 of 318 metropolitan areas experienced an increase in physicians per 100,000 people. Overall, the nonmetropolitan areas had higher proportional growth in physicians per 100,000 people than the metropolitan areas, but the disparity in the supply of physicians per 100,000 people between the metropolitan and nonmetropolitan areas persisted. The rate of growth in physicians per 100,000 people, the overall supply of physicians per 100,000 people, and the mix of generalists and specialists varied among categories of metropolitan and nonmetropolitan counties. Among the five county categories we analyzed, nonmetropolitan counties with a large town had the biggest increase in physicians per 100,000 people from 1991 to 2001 and more physicians per 100,000 people than either nonmetropolitan counties without a large town or rural counties, but still fewer than metropolitan counties. Like metropolitan counties, nonmetropolitan counties with large towns had more specialists than generalists, while other nonmetropolitan counties had more generalists than specialists. The 48 statewide nonmetropolitan areas, including those with the lowest supplies of physicians per 100,000 people in 1991, registered gains in physicians per 100,000 people between 1991 and 2001. However, this growth was not even across all statewide nonmetropolitan areas, and 7 areas remained below 100 physicians per 100,000 people. Five of these 7 statewide nonmetropolitan areas—Iowa, Indiana, Louisiana, Oklahoma, and Texas—had increases in physicians per 100,000 people that were less than the 23 percent average increase for the nonmetropolitan United States. 
The remaining 2—statewide nonmetropolitan Alabama and Tennessee—had increases in physicians per 100,000 people that exceeded the national nonmetropolitan area average, but the number of physicians in these areas was so low in 1991 that this growth was not enough to elevate their physician supply above 100 per 100,000 people in 2001. In the aggregate, the 318 metropolitan areas of the United States experienced an increase in physicians per 100,000 people between 1991 and 2001. However, 17 (5 percent) metropolitan areas experienced declines in the number of physicians per 100,000 people during this period. (See table 2.) In 2001, 11 of these areas had physician supplies per 100,000 people that were below the national average of 239 physicians per 100,000 people. Only 2 individual metropolitan areas, however—the Topeka, Kansas and Enid, Oklahoma MSAs—experienced an actual decrease in their physician populations between 1991 and 2001. While the remaining 15 areas had more physicians in 2001 than in 1991, the population increase for all of them was large enough that they still experienced a decline over that decade in the number of physicians per 100,000 people. Five of these areas had physician population growth in excess of the national average of 26 percent. However, in these areas the higher-than-average growth in physician supply was exceeded by population growth that was also above the national average of 13 percent, resulting in a decline in physicians per 100,000 people. The number of physicians per 100,000 people in nonmetropolitan areas, in which 19 percent of the U.S. population resided in 2001, increased 23 percent from 1991 to 2001. During this same time, the number of physicians per 100,000 people in metropolitan areas, in which 81 percent of the U.S. population resided in 2001, increased 10 percent. (See table 3.) 
The higher growth rate in physicians per 100,000 people in nonmetropolitan areas over the decade did not translate into a reduction in the gap in the supply of physicians per 100,000 people in metropolitan versus nonmetropolitan areas. The disparity in the supply of physicians per 100,000 people between nonmetropolitan and metropolitan areas persisted because physicians continued to disproportionately locate in metropolitan areas. On net, about 17,000 physicians (12 percent of the physician population increase) went to nonmetropolitan areas between 1991 and 2001, while about 123,000 (88 percent of the physician population increase) went to metropolitan areas. The difference in physician supply between metropolitan and nonmetropolitan areas remained relatively unchanged from 1991, when the difference in supply was 143 per 100,000 people, to 2001 when the difference was 145 per 100,000 people. In nonmetropolitan areas, the number of specialists per 100,000 people increased faster than the number of generalists per 100,000 people. As a result, the generalist and specialist composition shifted from an even mix of generalists and specialists in 1991 to 48 percent generalists and 52 percent specialists in 2001. In metropolitan areas, generalists and specialists per 100,000 people increased at approximately the same rate, shifting the composition less than 1 percent from 36 percent generalists and 64 percent specialists in 1991 to 35 percent generalists and 65 percent specialists in 2001. To obtain additional information about physician supply within nonmetropolitan and metropolitan areas, we aggregated county physician and population data into five categories defined by a county’s nonmetropolitan or metropolitan status and the presence and size of a town within the county. All five county categories had an increase in physicians per 100,000 people from 1991 to 2001. (See fig. 3.) 
However, the rate of growth in physicians per 100,000 people, the overall supply of physicians per 100,000 people, and the mix of generalists and specialists varied by county category. While nonmetropolitan counties with a large town (10,000 to 49,999 residents) had the biggest percentage increase in physicians per 100,000 people of all county categories, their supplies per 100,000 people were still less than large and small metropolitan counties’ supplies in 1991 and 2001. Among nonmetropolitan counties, however, those with a large town had more physicians per 100,000 people than those without a large town or rural counties in 1991 and 2001. Like metropolitan counties, nonmetropolitan counties that are more urbanized—those with a large town—had more specialists than generalists per 100,000 people. Less urbanized nonmetropolitan counties—those without a large town—and rural counties had more generalists than specialists per 100,000 people in 1991 and 2001.

We provided a draft of this report to HRSA for comment. HRSA said that the study supports the conclusion that the disparity in the distribution of physicians in rural and urban areas persists and has narrowed. HRSA also agreed with our assessment of the difficulties and variation associated with determining an appropriate supply for any given geographic area. However, HRSA noted that the report would be more complete if it drew conclusions. HRSA also commented that rural citizens are still grossly underserved, noting that physician supply can be a rough measure of access to physician services in a given area and that even in areas with a large number of physicians many people still lack access due to a number of factors. HRSA’s comments are reprinted in appendix IV. Although we found that a geographic disparity persists, we did not find that the disparity in the distribution between metropolitan and nonmetropolitan areas has narrowed. 
Physician supply grew faster in nonmetropolitan than metropolitan areas, on a national basis, but this did not reduce the disparity because there are so many physicians in metropolitan areas. As we stated in the draft report that HRSA reviewed, while nonmetropolitan areas experienced higher growth rates in physicians per 100,000 people, the difference in physician supply per 100,000 people remained relatively unchanged from 1991 to 2001. HRSA noted that physician supply is only one of several factors affecting the accessibility of health care in an area. However, assessing the adequacy of access to physicians was beyond the scope of our work. HRSA also provided technical comments that we incorporated as appropriate. We are sending copies of this report to the Administrator of the Health Resources and Services Administration and other interested parties. We will also provide copies to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please call me at (202) 512-7119 or Christine Brudevold at (202) 512-2669. Major contributors to this report were Kathryn Linehan and Ann Tynan.

To conduct this work, we counted active, nonfederal, patient care physicians with a known address, including interns and residents, in the United States. We used individual physician-level data on medical doctors (MD) from the 1991 and 2001 American Medical Association (AMA) Physicians’ Professional Data, also known as the AMA Masterfile, and 1991 and 2001 data on doctors of osteopathic medicine (DO) from the American Osteopathic Association (AOA) Masterfile. These data are widely used in studies of physician supply because they are a comprehensive list of U.S. physicians and their characteristics. To determine physician supply per 100,000 people, we obtained county-level resident population data for 1991 and 2001 from the U.S. Census Bureau Web site. 
We used data from the Department of Agriculture Web site to determine urban influence codes for each county. For additional information about physician supply in the United States, we reviewed relevant literature and interviewed academic researchers on the topic of the U.S. physician workforce. To obtain federal program information, we interviewed officials from the Health Resources and Services Administration (HRSA) and officials at the Centers for Medicare & Medicaid Services (CMS). HRSA officials provided information on the scope and expenditures of health professions training and education programs and the National Health Service Corps. CMS officials provided information on the Medicare Incentive Payments to physicians providing services in health professional shortage areas. We combined counts of MDs and DOs to determine the total number of physicians in each of our study years. Each physician was counted without adjustment for hours worked. To determine physicians per 100,000 people, we divided the number of physicians in a given area by the total population of the area in that same year and scaled the result to a per-100,000 rate. To count generalists and specialists, we used each physician’s specialty information in the AMA and AOA data files to categorize physicians as generalists or specialists. Physicians whose specialty information was listed as family practice, general practice, general internal medicine, and general pediatrics were categorized as generalists. All other physicians with specialty information available were categorized as specialists. To assign physicians to a geographic area in the United States, we used address information in the AMA and AOA data files. The address information in these files does not specify whether the address refers to the physician’s home, office, or some other location and it is possible that some physicians live and work in different counties. Because of this limitation, we did not analyze the data at the individual county level. 
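As a rough illustration, the per-100,000 computation described above can be sketched as follows. The physician count and population figures in the example are hypothetical, not GAO data.

```python
def physicians_per_100k(physician_count: int, population: int) -> float:
    """Physicians per 100,000 people: the physician count divided by the
    area population, scaled to a per-100,000 rate."""
    return physician_count / population * 100_000

# Hypothetical area: 1,200 physicians serving a population of 560,000.
rate = physicians_per_100k(1_200, 560_000)
print(round(rate))  # about 214 physicians per 100,000 people
```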
We combined multiple, adjacent counties into larger geographic units. We assigned counties to a metropolitan statistical area (MSA), primary metropolitan statistical area (PMSA), or New England county metropolitan area (NECMA). We grouped data from all areas within a state that were not in a MSA, PMSA, or NECMA into one statewide nonmetropolitan area for each state. We used 2001 MSA, PMSA, and NECMA classifications for the 2001 and 1991 data. For analysis by county categories, we used urban influence codes, which group metropolitan and nonmetropolitan counties according to the official metropolitan status announced by the Office of Management and Budget in 1993, based on 1990 Census data. Urban influence codes group counties, county equivalents, and independent cities into nine categories. Metropolitan counties are grouped into two categories (1 and 2) by the size of the metropolitan area. Nonmetropolitan counties are grouped into seven categories (3 through 9) by their adjacency to metropolitan areas and the size of their own city. For this analysis of physician supply, we maintained categories 1, 2, and 9 and collapsed the remaining six categories into two, for a total of five categories. This analysis combines codes 3, 5, and 7 into one category (i.e., nonmetropolitan with a large town) and 4, 6, and 8 into one category (i.e., nonmetropolitan without a large town).

HRSA administers programs that encourage health professionals to practice in underserved areas and support health professions education and training. The National Health Service Corps and the State Loan Repayment Program, authorized by Title III of the Public Health Service Act, offer scholarships and loan repayments to health professionals in exchange for a commitment to practice in health professional shortage areas. Grant and loan support programs that support health professions education and training, authorized by Title VII of the Public Health Service Act, have diverse objectives. 
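The collapse of the nine urban influence codes into the five county categories described in the methodology above can be sketched as a simple lookup. The category labels here are paraphrased from the text and are illustrative, not official code names.

```python
# Codes 1-2 are metropolitan; codes 3-9 are nonmetropolitan. Codes 3, 5,
# and 7 collapse into "nonmetropolitan with a large town"; codes 4, 6,
# and 8 into "nonmetropolitan without a large town"; codes 1, 2, and 9
# remain their own categories.
CATEGORY_BY_CODE = {
    1: "large metropolitan",
    2: "small metropolitan",
    3: "nonmetropolitan with a large town",
    4: "nonmetropolitan without a large town",
    5: "nonmetropolitan with a large town",
    6: "nonmetropolitan without a large town",
    7: "nonmetropolitan with a large town",
    8: "nonmetropolitan without a large town",
    9: "rural",
}

def county_category(urban_influence_code: int) -> str:
    """Map a 1993 urban influence code (1-9) to one of the five analysis categories."""
    return CATEGORY_BY_CODE[urban_influence_code]

print(county_category(7))  # nonmetropolitan with a large town
```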
Generally, these programs support education and training for a range of health professions including medicine, chiropractics, dentistry, optometry, pharmacy, physician assistants, allied health, and public health. While most of the Title VII programs address three objectives of improving the distribution of health professionals in underserved areas, increasing representation of minorities and individuals from disadvantaged backgrounds in health professions, and increasing the supply of health professionals, they also address other objectives such as improving the quality of education and training. Table 4 provides information on Title III and Title VII program spending or appropriations and objectives.

Health Workforce: Ensuring Adequate Supply and Distribution Remains Challenging. GAO-01-1042T. Washington, D.C.: August 1, 2001.

Nursing Workforce: Emerging Nurse Shortages Due to Multiple Factors. GAO-01-944. Washington, D.C.: July 10, 2001.

Nursing Workforce: Multiple Factors Create Nurse Recruitment and Retention Problems. GAO-01-912T. Washington, D.C.: June 27, 2001.

Nursing Workforce: Recruitment and Retention of Nurses and Nurse Aides Is a Growing Concern. GAO-01-750T. Washington, D.C.: May 17, 2001.

Health Care Access: Programs for Underserved Populations Could Be Improved. GAO/T-HEHS-00-81. Washington, D.C.: March 23, 2000.

Community Health Centers: Adapting to Changing Health Care Environment Key to Continued Success. GAO/HEHS-00-39. Washington, D.C.: March 10, 2000.

Physician Shortage Areas: Medicare Incentive Payments Not an Effective Approach to Improve Access. GAO/HEHS-99-36. Washington, D.C.: February 26, 1999.

Health Care Access: Opportunities to Target Programs and Improve Accountability. GAO/T-HEHS-97-204. Washington, D.C.: September 11, 1997.

Foreign Physicians: Exchange Visitor Program Becoming Major Route to Practicing in U.S. Underserved Areas. GAO/HEHS-97-26. Washington, D.C.: December 30, 1996. 
National Health Service Corps: Opportunities to Stretch Scarce Dollars and Improve Provider Placement. GAO/HEHS-96-28. Washington, D.C.: November 24, 1995.

Health Care Shortage Areas: Designations Not a Useful Tool for Directing Resources to the Underserved. GAO/HEHS-95-200. Washington, D.C.: September 8, 1995.

Health Professions Education: Role of Title VII/VIII Programs in Improving Access to Care Is Unclear. GAO/HEHS-94-164. Washington, D.C.: July 8, 1994.
Through a variety of programs, the federal government supports the training of physicians and encourages physicians to work in underserved areas or pursue primary care specialties. GAO was asked to provide information on the physician supply and the generalist and specialist mix of that supply in the United States and the changes in and geographic distribution of physician supply in metropolitan and nonmetropolitan areas. To address these objectives, GAO analyzed data on physician supply and geographic distribution from 1991 and 2001. The U.S. physician population increased 26 percent, which was twice the rate of total population growth, between 1991 and 2001. During this period the average number of physicians per 100,000 people increased from 214 to 239 and the mix of generalists and specialists in the national physician workforce remained about one-third generalists and two-thirds specialists. Growth in physician supply per 100,000 people between 1991 and 2001 was seen in historically high-supply metropolitan areas as well as low-supply statewide nonmetropolitan areas. Between 1991 and 2001, all statewide nonmetropolitan areas and 301 of the 318 metropolitan areas gained physicians per 100,000 people. Of the 17 metropolitan areas that experienced declines in the number of physicians per 100,000 people, only 2 had fewer total physicians in 2001 than in 1991. Overall, nonmetropolitan areas experienced higher proportional growth in physicians per 100,000 people than metropolitan areas, but the disparity in the supply of physicians per 100,000 people between nonmetropolitan and metropolitan areas persisted. Nonmetropolitan counties with a large town (10,000 to 49,999 residents) had the biggest increase in physicians per 100,000 people of all county categories but their supplies per 100,000 people were still less than large and small metropolitan counties' supplies in 1991 and 2001. 
In written comments on a draft of this report, the Health Resources and Services Administration agreed with GAO findings of persisting disparities between metropolitan and nonmetropolitan areas.
The complexity of the environment in which CMS and its contractors operate the Medicare program cannot be overstated. CMS is an agency within the Department of Health and Human Services (HHS) but has responsibilities over expenditures that are larger than those of most other federal departments. Under the fee-for-service system—which accounts for over 80 percent of program beneficiaries—physicians, hospitals, and other providers submit claims for services they provide to Medicare beneficiaries to receive reimbursement. Providers billing Medicare, program beneficiaries, and taxpayers, whose interests vary widely, form a vast universe of stakeholders. About 50 Medicare claims administration contractors carry out the day-to-day operations of the program and are responsible not only for paying claims but also for providing information and education to providers and beneficiaries that participate in Medicare. They periodically issue bulletins that outline changes in national and local Medicare policy, inform providers of billing system changes, and address frequently asked questions. To enhance communications with providers, the agency recently required contractors to maintain toll-free telephone lines to respond to provider inquiries. It also directed them to develop Internet sites to address, among other things, frequently asked questions. In addition, CMS is responsible for monitoring the claims administration contractors to ensure that they appropriately perform their claims processing duties and protect Medicare from fraud and abuse. In 1996, the Congress enacted the Health Insurance Portability and Accountability Act (HIPAA), in part to provide better stewardship of the program. This act gave the Health Care Financing Administration (HCFA), CMS’s predecessor, the authority to contract with specialized entities, known as program safeguard contractors (PSC), to combat fraud, waste, and abuse. 
HCFA initially selected 12 firms to conduct a variety of program safeguard tasks, such as medical reviews of claims and audits of providers’ cost reports. Previously, only claims administration contractors performed these activities. In response to the escalation of improper Medicare payments, Congress and executive branch agencies have focused attention on efforts to safeguard the Medicare Trust Fund. HIPAA earmarked increased funds for the prevention and detection of health care fraud and abuse and increased sanctions for abusive providers. The HHS Office of Inspector General (OIG) and the Department of Justice (DOJ) subsequently became more aggressive in pursuing abusive providers. In response, the medical community has expressed concern about the complexity of the program, the fairness of certain program safeguard activities, such as detailed reviews of claims, and the process for appealing denied claims. Recent actions address some of these concerns. Since 1996, the HHS OIG has repeatedly estimated that Medicare contractors inappropriately paid claims worth billions of dollars annually. The depletion of Medicare’s hospital trust fund and the projected growth in Medicare’s share of the federal budget have focused attention on program safeguards to prevent and detect health care fraud and abuse. These trends have also reinforced the importance of having CMS and its contractors develop and implement effective strategies to prevent and detect improper payments. HIPAA provided the opportunity for HCFA to enhance its program integrity efforts by creating the Medicare Integrity Program (MIP). MIP gave the agency a stable source of funding for its safeguard activities. Beginning in 1997, funding for antifraud-and-abuse activities has increased significantly—by 2003, funding for these activities will have grown about 80 percent. 
In fiscal year 2000, HCFA used its $630 million in MIP funding to support a wide range of efforts, including audits of providers and managed care organizations and targeted medical review of claims. By concentrating attention on specific provider types or benefits where program dollars are most at risk, HCFA has taken a cost-effective approach to identifying overpayments. Based on the agency’s estimates, MIP saved the Medicare program more than $16 for each dollar spent in fiscal year 2000. CMS is only one of several entities responsible for ensuring the integrity of the Medicare program. HIPAA also provided additional resources to both the HHS OIG and DOJ. The HHS OIG has emphasized the importance of safeguarding Medicare by auditing providers and issuing compliance guidance for various types of providers. It also pursues potential fraud brought to its attention by contractors and other sources, such as beneficiaries and whistleblowers. DOJ has placed a high priority on identifying patterns of improper billing by Medicare providers. DOJ investigates cases that have been referred by the HHS OIG and others to determine if health care providers have engaged in fraudulent activity, and it pursues civil actions or criminal prosecutions, as appropriate. The False Claims Act (31 U.S.C. secs. 3729 to 3733) gives DOJ a powerful enforcement tool, as it provides for substantial damages and penalties against providers who knowingly submit false or fraudulent bills to Medicare, Medicaid, or other federal health programs. DOJ has instituted a series of investigations known as national initiatives, which involve examinations of similarly situated providers who may have engaged in common patterns of improper Medicare billing. As safeguard and enforcement actions have increased, so have provider concerns about their interaction with contractors. Individual physicians and representatives of medical associations have made a number of serious charges regarding the following. 
Inadequate communications from CMS’ contractors. Providers assert that the information they receive is poorly organized, difficult to understand, and not always communicated promptly. As a result, providers are concerned that they may inadvertently violate Medicare billing rules.

Inappropriate targeting of claims for review and excessive paperwork demands of the medical review process. For example, some physicians have complained that the documentation required by some contractors goes beyond what is outlined in agency guidance or what is needed to demonstrate medical necessity.

Unfair method used to calculate Medicare overpayments. Providers expressed concern that repayment amounts calculated through the use of samples that are not statistically representative do not accurately reflect actual overpayments.

Overzealous enforcement activities by other federal agencies. For example, providers have charged that DOJ has been overly aggressive in its use of the False Claims Act and has been too accommodating to the OIG’s insistence on including corporate integrity agreements in provider settlements.

Lengthy process to appeal denied claims. Related to this issue is that a provider who successfully appeals an initially denied claim does not earn interest for the period during which the administrative appeal was pending.

We have studies underway to examine the regulatory environment in which Medicare providers operate. At the request of the House Committee on the Budget and the House Ways and Means Subcommittee on Health, we are reviewing CMS’ communications with providers and have confirmed some provider concerns. For example, our review of several information sources, such as bulletins, telephone call centers, and Internet sites, found a disappointing performance record. Specifically, we reviewed recently issued contractor bulletins—newsletters from carriers to physicians outlining changes in national and local Medicare policy—from 10 carriers. 
Some of these bulletins contained lengthy discussions with overly technical and legalistic language that providers may find difficult to understand. These bulletins also omitted some important information about mandatory billing procedures. Similarly, we found that the calls we placed to telephone call centers this spring were rarely answered appropriately. For example, for 85 percent of our calls, the answers that call center representatives provided were either incomplete or inaccurate. Finally, we recently reviewed 10 Internet sites, which CMS requires carriers to maintain. We found that these sites rarely met all CMS requirements and often lacked user-friendly features such as site maps and search functions. We are continuing our work and formulating recommendations that should help CMS and its contractors improve their communications with providers. We are also in the preliminary stages of examining how claims are reviewed and how overpayments are detected to assess the actions of contractors as they perform their program safeguard activities. Although we have not yet formulated our conclusions, agency actions may address some provider concerns. For example, HCFA clarified the conditions under which contractors should conduct medical reviews of providers. In August 2000, the agency issued guidance to contractors regarding the selection of providers for medical reviews, noting, among other things, that a provider’s claims should only be reviewed when data suggest a pattern of billing problems. Although providers may be wary of the prospect of medical reviews, the extent to which they are subjected to such reviews is largely unknown. Last year, HCFA conducted a one-time limited survey of contractors to determine the number of physicians subject to complex medical reviews in fiscal year 2000. 
It found that only 1,891, or 0.3 percent, of all physicians who billed the Medicare program that year were selected for complex medical reviews—examinations by clinically trained staff of medical records. In regard to physician complaints about sampling methodologies, HCFA outlined procedures to give providers several options to determine overpayment amounts. Contractors would initially review a small sample (probe sample) of a provider’s claims and determine the amount of the overpayment. A provider could then (1) enter into a consent settlement, whereby the provider accepts the results of this probe review and agrees to an extrapolated “potential” overpayment amount based on the small sample, (2) accept the settlement but submit additional documentation on specific claims in the probe sample to potentially adjust downward the amount of the projected overpayment, or (3) require the contractor to review a larger statistically valid random sample of claims to extrapolate the overpayment amount. According to agency officials, although providers can select any of these options, consent settlements are usually chosen when offered because they are less burdensome for providers, as fewer claims have to be documented and reviewed. In response to concerns regarding its use of the False Claims Act, DOJ issued guidance in June 1998 to all of its attorneys that emphasized the fair and responsible use of the act in civil health care matters, including national initiatives. In 1999, we reviewed DOJ’s compliance with its False Claims Act guidance and found that implementation of this guidance varied among U.S. Attorneys’ Offices. However, the next year we reported that DOJ had made progress in incorporating the guidance into its ongoing investigations and had also developed a meaningful assessment of compliance in its periodic evaluations of U.S. Attorneys’ Offices. 
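The extrapolation arithmetic behind the overpayment options described above can be illustrated with a minimal sketch. The dollar figures are hypothetical, and an actual contractor calculation would rest on a statistically valid sample design with adjustments (for example, confidence bounds) not shown here.

```python
def extrapolated_overpayment(sample_billed: float,
                             sample_overpaid: float,
                             total_billed: float) -> float:
    """Project the overpayment rate observed in a reviewed sample of
    claims onto the provider's total billed amount."""
    overpayment_rate = sample_overpaid / sample_billed
    return overpayment_rate * total_billed

# Hypothetical: $2,000 found overpaid in a $10,000 probe sample,
# projected across $500,000 in total billings.
print(extrapolated_overpayment(10_000, 2_000, 500_000))  # 100000.0
```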
Regarding corporate integrity agreements, we noted in our March 2001 report that these agreements were not always a standard feature of DOJ settlements. For example, 4 of 11 recent settlements that we reviewed were resolved without the imposition of such agreements. Finally, some providers’ concerns about the timeliness of the appeals process could be addressed by the Medicare, Medicaid, and SCHIP Benefits Improvement and Protection Act of 2000 (BIPA), which imposes deadlines at each step of the appeals process. For example, initial determination of a claim must be concluded within 45 days from the date of the claim, and redetermination must be completed within 30 days of receipt of the request. These revisions are scheduled to take effect on October 1, 2002. CMS’ oversight of its contractors is essential to ensuring that the Medicare program is administered efficiently and effectively. CMS is faced with the challenge of protecting program dollars and treating providers fairly. However, to accomplish these goals, contractors must implement CMS’ policies fully and consistently. Historically, the agency’s oversight of contractors has been weak, although it has made substantial improvements in the past 2 years. Continued vigilance in this area is critical as CMS tries to cope with known weaknesses and begins to rely on new specialty contractors for some of its payment safeguard activities. Medicare’s claims administration contractors are responsible for all aspects of claims administration, conduct particular safeguard activities, and are the primary source of Medicare communications to providers. However, oversight of Medicare contractors has historically been weak, leaving the agency without assurance that contractors are implementing program safeguards or paying providers appropriately. 
For years, HCFA’s contractor performance and evaluation program (CPE)—its principal tool used to evaluate contractor performance—lacked the consistency that agency reviewers need to make comparable assessments of contractor performance. HCFA reviewers had few measurable performance standards and little direction on monitoring contractors’ payment safeguard activities. The reviewers in HCFA’s 10 regional offices, who were responsible for conducting these evaluations, had broad discretion to decide what and how much to review as well as what disciplinary actions to take against contractors with performance problems. This highly discretionary evaluation process allowed key program safeguards to go unchecked and led to the inconsistent treatment of contractors with similar performance problems. Dispersed responsibility for contractor activities across many central office components, limited information about the level of resources used or needed for contractor oversight, and late and outdated guidance provided to regional offices have also weakened contractor oversight. Over the years, we have made several recommendations to improve HCFA’s oversight of its claims administration contractors. For example, we recommended that the agency strengthen accountability for evaluating contractor performance. In response to our recommendations, HCFA has established an executive-level position at its central office with ultimate responsibility for contractor oversight, instituted national review teams to conduct contractor evaluations, and provided more direction to its regional offices through standardized review protocols and detailed instructions for CPE reviews. Although the agency has taken a number of steps to improve its oversight efforts, our ongoing work suggests that opportunities for additional improvement exist. Last month, we joined CMS representatives as they conducted a CPE review at a contractor’s telephone center.
Although providers’ ability to appropriately bill Medicare is dependent on their obtaining accurate and complete answers to their questions, the review focused primarily on adherence to call center procedures and the timeliness of responses to provider questions. Moreover, the CMS reviewer selected a small number of cases to evaluate—only 4 of the roughly 140,000 provider calls this center receives each year. While CMS’ management of claims administration contractors suffers from weak oversight, its contracting practices for selecting fiscal intermediaries and carriers may contribute to these difficulties. Unlike most of the federal government, the agency was exempted from conducting full and open competitions by the Social Security Act. Thus, for decades, HCFA has relied on many of the same contractors to perform program management activities, and has been at a considerable disadvantage in attracting new entities to perform these functions. Congress included provisions in HIPAA that provided HCFA with more flexibility in contracting for program safeguard activities. It allowed the agency to contract with any entity that was capable of performing certain antifraud activities. In May 1999, HCFA implemented its new contracting authority by selecting 12 program safeguard contractors—PSCs—using a competitive bidding process. These entities represent a mix of health insurance companies, information technology businesses, and several other types of firms. In May of this year, we reported on the opportunities and challenges that the agency faces as it integrates its PSCs into its overall program safeguard strategy. The PSCs represent a new means of promoting program integrity and enable CMS to test a multitude of options. CMS is currently experimenting with these options to identify how PSCs can be most effectively utilized. 
For example, some PSCs are performing narrowly focused tasks related to a specific service considered to be particularly vulnerable to fraud and abuse. Others are conducting more broadly based work that may have national implications for the way program safeguard activities are conducted in the future or that may result in the identification of best practices.
In fiscal year 2000, Medicare made more than $200 billion in payments to hundreds of thousands of health care providers who served nearly 40 million beneficiaries. Because of the program's vast size and complexity, GAO has included Medicare on its list of government areas at high risk for waste, fraud, abuse, and mismanagement. GAO first included Medicare on that list in 1990, and it remains there today. GAO has continually reported on the efforts of the Health Care Financing Administration—recently renamed the Centers for Medicare and Medicaid Services (CMS)—to safeguard Medicare payments and streamline operations. CMS relies on its claims administration contractors to run Medicare. As these contractors have become more aggressive in identifying and pursuing inappropriate payments, providers have expressed concern that Medicare has become too complex and difficult to navigate. CMS's oversight of its contractors has historically been weak. In the last two years, however, CMS has made substantial progress. GAO has identified several areas in which CMS still needs improvement, especially in ensuring that contractors provide accurate, complete, and timely information to providers on Medicare billing rules and coverage policies.
To determine which federal government programs and functions should be added to the High Risk List, we consider whether the program or function is of national significance or is key to government performance and accountability. Further, we consider qualitative factors, such as whether the risk involves public health or safety, service delivery, national security, national defense, economic growth, or privacy or citizens’ rights, or could result in significantly impaired service, program failure, injury or loss of life, or significantly reduced economy, efficiency, or effectiveness. In addition, we review the exposure to loss in quantitative terms, such as the value of major assets being impaired, revenue sources not being realized, or major agency assets being lost, stolen, damaged, or wasted. We also consider corrective measures planned or under way to resolve a material control weakness and the status and effectiveness of these actions. This year, we added two new areas, delineated below, to the High Risk List based on those criteria. In response to serious and long-standing problems with veterans’ access to care, which were highlighted in a series of congressional hearings in the spring and summer of 2014, Congress enacted the Veterans Access, Choice, and Accountability Act of 2014 (Pub. L. No. 113-146, 128 Stat. 1754), which provides $15 billion in new funding for Department of Veterans Affairs (VA) health care. Generally, this law requires VA to offer veterans the option to receive hospital care and medical services from a non-VA provider when a VA facility cannot provide an appointment within 30 days, or when veterans reside more than 40 miles from the nearest VA facility. Under the law, VA received $10 billion to cover the expected increase in utilization of non-VA providers to deliver health care services to veterans. The $10 billion is available until expended and is meant to supplement VA’s current budgetary resources for medical care.
Further, the law appropriated $5 billion to increase veterans’ access to care by expanding VA’s capacity to deliver care to veterans by hiring additional clinicians and improving the physical infrastructure of VA’s facilities. It is therefore critical that VA ensure its resources are being used in a cost-effective manner to improve veterans’ timely access to health care. We have categorized our concerns about VA’s ability to ensure the timeliness, cost-effectiveness, quality, and safety of the health care the department provides into five broad areas: (1) ambiguous policies and inconsistent processes, (2) inadequate oversight and accountability, (3) information technology challenges, (4) inadequate training for VA staff, and (5) unclear resource needs and allocation priorities. We have made numerous recommendations that aim to address weaknesses in VA’s management of its health care system—more than 100 of which have yet to be fully resolved. For example, to ensure that its facilities are carrying out processes at the local level more consistently—such as scheduling veterans’ medical appointments and collecting data on veteran suicides—VA needs to clarify its existing policies. VA also needs to strengthen oversight and accountability across its facilities by conducting more systematic, independent assessments of processes that are carried out at the local level, including how VA facilities are resolving specialty care consults, processing claims for non-VA care, and establishing performance pay goals for their providers. We also have recommended that VA work with the Department of Defense (DOD) to address the administrative burdens created by the lack of interoperability between their two IT systems.
A number of our recommendations aim to improve training for staff at VA facilities, to address issues such as how staff are cleaning, disinfecting, and sterilizing reusable medical equipment, and to more clearly align training on VA’s new nurse staffing methodology with the needs of staff responsible for developing nurse staffing plans. Finally, we have recommended that VA improve its methods for identifying VA facilities’ resource needs and for analyzing the cost-effectiveness of VA health care. The recently enacted Veterans Access, Choice, and Accountability Act included a number of provisions intended to help VA address systemic weaknesses. For example, the law requires VA to contract with an independent entity to (1) assess VA’s capacity to meet the current and projected demographics and needs of veterans who use the VA health care system, (2) examine VA’s clinical staffing levels and productivity, and (3) review VA’s IT strategies and business processes, among other things. The new law also establishes a 15-member commission, to be appointed primarily by bipartisan congressional leadership, which will examine how best to organize the VA health care system, locate health care resources, and deliver health care to veterans. It is critical for VA leaders to act on the findings of this independent contractor and congressional commission, as well as on those of VA’s Office of the Inspector General, GAO, and others, and to fully commit themselves to developing long-term solutions that mitigate risks to the timeliness, cost-effectiveness, quality, and safety of the VA health care system. It is also critical that Congress maintain its focus on oversight of VA health care. In the spring and summer of 2014, congressional committees held more than 20 hearings to address identified weaknesses in the VA health care system.
Sustained congressional attention to these issues will help ensure that VA continues to make progress in improving the delivery of health care services to veterans. We plan to continue monitoring VA’s efforts to improve the timeliness, cost-effectiveness, quality, and safety of veterans’ health care. To this end, we have ongoing work focusing on topics such as veterans’ access to primary care and mental health services; primary care productivity; nurse recruitment and retention; monitoring and oversight of VA spending on training programs for health care professionals; mechanisms VA uses to monitor quality of care; and VA and DOD investments in Centers of Excellence—which are intended to produce better health outcomes for veterans and service members. Although the executive branch has undertaken numerous initiatives to better manage the more than $80 billion that is annually invested in information technology (IT), federal IT investments too frequently fail or incur cost overruns and schedule slippages while contributing little to mission-related outcomes. We have previously testified that the federal government has spent billions of dollars on failed IT investments. These and other failed IT projects often suffered from a lack of disciplined and effective management, such as project planning, requirements definition, and program oversight and governance. In many instances, agencies have not consistently applied best practices that are critical to successfully acquiring IT investments. 
We have identified nine critical factors underlying successful major acquisitions that support the objective of improving the management of large-scale IT acquisitions across the federal government: (1) program officials actively engaging with stakeholders; (2) program staff having the necessary knowledge and skills; (3) senior department and agency executives supporting the programs; (4) end users and stakeholders involved in the development of requirements; (5) end users participating in testing of system functionality prior to end user acceptance testing; (6) government and contractor staff being stable and consistent; (7) program staff prioritizing requirements; (8) program officials maintaining regular communication with the prime contractor; and (9) programs receiving sufficient funding. While there have been numerous executive branch initiatives aimed at addressing these issues, implementation has been inconsistent. Over the past 5 years, we have reported numerous times on shortcomings with IT acquisitions and operations and have made about 737 related recommendations, 361 of which were to the Office of Management and Budget (OMB) and agencies to improve the implementation of the recent initiatives and other government-wide, cross-cutting efforts. As of January 2015, about 23 percent of the 737 recommendations had been fully implemented. Given the federal government’s continued experience with failed and troubled IT projects, coupled with the fact that OMB initiatives to help address such problems have not been fully implemented, the government will likely continue to produce disappointing results and will miss opportunities to improve IT management, reduce costs, and improve services to the public, unless needed actions are taken. Further, it will be more difficult for stakeholders, including Congress and the public, to monitor agencies’ progress and hold them accountable for reducing duplication and achieving cost savings. 
Recognizing the severity of issues related to government-wide management of IT, in December 2014 the Federal Information Technology Acquisition Reform provisions were enacted as a part of the Carl Levin and Howard P. ‘Buck’ McKeon National Defense Authorization Act for Fiscal Year 2015. I want to acknowledge the leadership of this Committee and the House Committee on Oversight and Government Reform in leading efforts to enact this important legislation. To help address the management of IT investments, OMB and federal agencies should expeditiously implement the requirements of the December 2014 statutory provisions promoting IT acquisition reform. Doing so should (1) improve the transparency and management of IT acquisitions and operations across the government, and (2) strengthen the authority of chief information officers to provide needed direction and oversight. To help ensure that these improvements are achieved, congressional oversight of agencies’ implementation efforts is essential. Beyond implementing the recently enacted law, OMB and agencies need to continue to implement our previous recommendations in order to improve their ability to effectively and efficiently invest in IT. Several of these are critical, such as conducting TechStat reviews for at-risk investments, updating the public version of the IT Dashboard throughout the year, and developing comprehensive inventories of federal agencies’ software licenses. To ensure accountability, OMB and agencies should also demonstrate measurable government-wide progress in the following key areas: OMB and agencies should, within 4 years, implement at least 80 percent of our recommendations related to the management of IT acquisitions and operations. Agencies should ensure that a minimum of 80 percent of the government’s major acquisitions deliver functionality every 12 months.
Agencies should achieve no less than 80 percent of the over $6 billion in planned PortfolioStat savings and 80 percent of the more than $5 billion in savings planned for data center consolidation. In the 2 years since the last high-risk update, two areas have expanded in scope. Enforcement of Tax Laws has been expanded to include IRS’s efforts to address tax refund fraud due to identity theft. Ensuring the Security of Federal Information Systems and Cyber Critical Infrastructure has been expanded to include the federal government’s protection of personally identifiable information and is now called Ensuring the Security of Federal Information Systems and Cyber Critical Infrastructure and Protecting Personally Identifiable Information (PII). Since 1990, we have designated one or more aspects of Enforcement of Tax Laws as high risk. The focus of the Enforcement of Tax Laws high-risk area is on the estimated $385 billion net tax gap—the difference between taxes owed and taxes paid—and IRS’s and Congress’s efforts to address it. Given current and emerging risks, we are expanding the Enforcement of Tax Laws area to include IRS’s efforts to address tax refund fraud due to identity theft (IDT), which occurs when an identity thief files a fraudulent tax return using a legitimate taxpayer’s identifying information and claims a refund. While acknowledging that the numbers are uncertain, IRS estimated paying about $5.8 billion in fraudulent IDT refunds while preventing $24.2 billion during the 2013 tax filing season. While there are no simple solutions to combating IDT refund fraud, we have identified various options that could help, some of which would require legislative action. Because some of these options represent a significant change to the tax system that could likely burden taxpayers and impose significant costs on IRS for systems changes, it is important for IRS to assess the relative costs and benefits of the options.
This assessment will help ensure an informed discussion among IRS and relevant stakeholders—including Congress—on the best option (or set of options) for preventing IDT refund fraud. Since 1997, we have designated the security of our federal cyber assets as a high-risk area. In 2003, we expanded this high-risk area to include the protection of critical cyber infrastructure. The White House and federal agencies have taken steps toward improving the protection of our cyber assets. However, advances in technology that have dramatically enhanced the ability of both government and private sector entities to collect and process extensive amounts of Personally Identifiable Information (PII) pose challenges to ensuring the privacy of such information. The number of reported security incidents involving PII at federal agencies has increased dramatically in recent years. In addition, high-profile PII breaches at commercial entities have heightened concerns that personal privacy is not being adequately protected. Finally, both federal agencies and private companies collect detailed information about the activities of individuals, raising concerns about the potential for significant erosion of personal privacy. We have suggested, among other things, that Congress consider amending privacy laws to cover all PII collected, used, and maintained by the federal government and recommended that the federal agencies we reviewed take steps to protect personal privacy and improve their responses to breaches of PII. For these reasons, we added the protection of privacy to this high-risk area this year. Our experience with the high-risk series over the past 25 years has shown that five broad elements are essential to make progress. The criteria for removal are as follows: Leadership commitment. Demonstrated strong commitment and top leadership support. Capacity. Agency has the capacity (i.e., people and resources) to resolve the risk(s). Action plan.
A corrective action plan exists that defines the root cause and solutions and that provides for substantially completing corrective measures, including steps necessary to implement solutions we recommended. Monitoring. A program has been instituted to monitor and independently validate the effectiveness and sustainability of corrective measures. Demonstrated progress. Ability to demonstrate progress in implementing corrective measures and in resolving the high-risk area. These five criteria form a road map for efforts to improve and ultimately address high-risk issues. Addressing some of the criteria leads to progress, while satisfying all of the criteria is central to removal from the list. Figure 1 shows the five criteria and examples of actions taken by agencies to address the criteria. Throughout my statement and in our high-risk update report, we have detailed many actions taken to address the high-risk areas aligned with the five criteria as well as additional steps that need to be addressed. In each of our high-risk updates, for more than a decade, we have assessed progress to address the five criteria for removing the high-risk areas from the list. In this high-risk update, we are adding additional clarity and specificity to our assessments by rating each high-risk area’s progress on the criteria, using the following definitions: Met. Actions have been taken that meet the criterion. There are no significant actions that need to be taken to further address this criterion. Partially met. Some, but not all, actions necessary to meet the criterion have been taken. Not met. Few, if any, actions towards meeting the criterion have been taken. Figure 2 is a visual representation of varying degrees of progress in each of the five criteria for a high-risk area. Each point of the star represents one of the five criteria for removal from the High Risk List and each ring represents one of the three designations: not met, partially met, or met. 
The progress ratings used to address the high-risk criteria are an important part of our efforts to provide greater transparency and specificity to agency leaders as they seek to address high-risk areas. Beginning in the spring of 2014, leading up to this high-risk update, we met with agency leaders across government to discuss preliminary progress ratings. These meetings focused on actions taken and on additional actions that need to be taken to address the high-risk issues. Several agency leaders told us that the additional clarity provided by the progress rating helped them better target their improvement efforts. Since our last high-risk update in 2013, there has been solid and steady progress on the vast majority of the 30 high-risk areas from our 2013 list. Progress has been possible through the concerted actions and efforts of Congress and the leadership and staff in agencies and OMB. As shown in table 1, 18 high-risk areas have met or partially met all criteria for removal from the list; 11 of these areas also fully met at least one criterion. Of the 11 areas that have been on the High Risk List since the 1990s, 7 have at least met or partially met all of the criteria for removal and 1 area—DOD Contract Management—is 1 of the 2 areas that has made enough progress to remove subcategories of the high-risk area. Overall, 28 high-risk areas were rated against the five criteria, totaling a possible 140 high-risk area criteria ratings. Of these, 122 (or 87 percent) were rated as met or partially met. On the other hand, 13 of the areas have not met any of the five criteria; 3 of those—DOD Business Systems Modernization, DOD Support Infrastructure Management, and DOD Financial Management—have been on the High Risk List since the 1990s. Throughout the history of the high-risk program, Congress played an important role through its oversight and (where appropriate) through legislative action targeting both specific problems and the high-risk areas overall.
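The rating totals above follow from straightforward arithmetic, which can be checked with a short sketch (the figures are taken directly from the text):

```python
# Check of the criteria-rating totals cited in the text: 28 high-risk areas
# were each rated against the 5 removal criteria, and 122 of the resulting
# ratings were "met" or "partially met."

areas_rated = 28
criteria_per_area = 5
met_or_partially_met = 122

total_ratings = areas_rated * criteria_per_area   # possible criteria ratings
share = met_or_partially_met / total_ratings      # fraction met or partially met
print(total_ratings, round(share * 100))          # 140 87
```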
Since our last high-risk report, several high-risk areas have received congressional oversight and legislation needed to make progress in addressing risks. Table 2 provides examples of congressional actions and of high-level administration initiatives—discussed in more detail throughout our report—that have led to progress in addressing high-risk areas. Additional congressional actions and administrative initiatives are also included in the individual high-risk areas discussed in this report. Since our 2013 update, sufficient progress has been made to narrow the scope of the following two areas. Our work has identified the following high-risk issues related to the Food and Drug Administration’s (FDA) efforts to oversee medical products: (1) oversight of medical device recalls, (2) implementation of the Safe Medical Devices Act of 1990, (3) the effects of globalization on medical product safety, and (4) shortages of medically necessary drugs. We added the oversight of medical products to our High Risk List in 2009. Since our 2013 high-risk update, FDA has made substantial progress addressing the first two areas; therefore, we have narrowed this area to remove these issues from our High Risk List. However, the latter two issues, globalization and drug shortages, remain pressing concerns. FDA has greatly improved its oversight of medical device recalls by fully implementing all of the recommendations made in our 2011 report on this topic. Recalls provide an important tool to mitigate serious health consequences associated with defective or unsafe medical devices. We found that FDA had not routinely analyzed recall data to determine whether there are systemic problems underlying trends in device recalls. We made specific recommendations to the agency that it enhance its oversight of recalls.
FDA has fully implemented our recommendations: it has developed a detailed action plan to improve the recall process, analyzed 10 years of medical device recall trend data, and established explicit criteria and set thresholds for determining whether recalling firms have performed effective corrections or removals of defective products. These actions have addressed this high-risk issue. The Safe Medical Devices Act of 1990 requires FDA to determine the appropriate process for reviewing certain high-risk devices—either reclassifying certain high-risk medical device types to a lower-risk class or establishing a schedule for such devices to be reviewed through its most stringent premarket approval process. We found that FDA’s progress was slow and that it had never established a timetable for its reclassification or re-review process. As a result, many high-risk devices—including device types that FDA has identified as implantable, life sustaining, or posing a significant risk to the health, safety, or welfare of a patient—still entered the market through FDA’s less stringent premarket review process. We recommended that FDA expedite its implementation of the act. Since then, FDA has made good progress and has begun posting the status of its reviews on its website. FDA has developed an action plan with a goal of fully implementing the provisions of the act by the second quarter of calendar year 2015. While FDA has more work to do, it has made sufficient progress to address this high-risk issue. Based on our reviews of DOD’s contract management activities over many years, we placed this area on our High Risk List in 1992. For the past decade, our work and that of others have identified challenges DOD faces within four segments of contract management: (1) the acquisition workforce, (2) contracting techniques and approaches, (3) service acquisitions, and (4) operational contract support.
DOD has made sufficient progress in one of the four segments—its management and oversight of contracting techniques and approaches—to warrant its removal as a separate segment within the overall DOD contract management high-risk area. Significant challenges still remain in the other three segments. We made numerous recommendations to address the specific issues we identified. DOD leadership has generally taken actions to address our recommendations. For example, DOD promulgated regulations to better manage its use of time-and-materials contracts and undefinitized contract actions (which authorize contractors to begin work before reaching a final agreement on contract terms). In addition, OMB directed agencies to take action to reduce the use of noncompetitive and time-and-materials contracts. Similarly, Congress has enacted legislation to limit the length of noncompetitive contracts and require DOD to issue guidance to link award fees to acquisition outcomes. Over the past several years, DOD’s top leadership has taken significant steps to plan and monitor progress in the management and oversight of contracting techniques and approaches. For example, through its Better Buying Power initiatives DOD leadership identified a number of actions to promote effective competition and to better utilize specific contracting techniques and approaches. In that regard, in 2010 DOD issued a policy containing new requirements for competed contracts that received only one offer—a situation OMB has noted deprives agencies of the ability to consider alternative solutions in a reasoned and structured manner and which DOD has termed “ineffective competition.” These changes were codified in DOD’s acquisition regulations in 2012. 
In May 2014, we concluded that DOD’s regulations help decrease some of the risks of one-offer awards, but also that DOD needed to take additional steps to continue to enhance competition, such as establishing guidance for when contracting officers should assess and document the reasons only one offer was received. DOD concurred with the two recommendations we made in our report and has since implemented one of them. DOD also has been using its Business Senior Integration Group (BSIG)—an executive-level leadership forum—for providing oversight in the planning, execution, and implementation of these initiatives. In March 2014, the Director of the Office of Defense Procurement and Acquisition Policy presented an assessment of DOD competition trends that provided information on competition rates across DOD and for selected commands within each military department and proposed specific actions to improve competition. The BSIG forum provides a mechanism by which DOD can address ongoing and emerging weaknesses in contracting techniques and approaches and by which DOD can monitor the effectiveness of its efforts. Further, in June 2014, DOD issued its second annual assessment of the performance of the defense acquisition system. The assessment included data on the system’s competition rate and goals, assessments of the effect of contract type on cost and schedule control, and the impact of competition on the cost of major weapon systems. An institution as large, complex, and diverse as DOD, and one that obligates hundreds of billions of dollars under contracts each year, will continue to face challenges with its contracting techniques and approaches. We will maintain our focus on identifying these challenges and proposing solutions.
However, at this point DOD’s continued commitment and demonstrated progress in this area—including the establishment of a framework by which DOD can address ongoing and emerging issues associated with the appropriate use of contracting techniques and approaches—provide a sufficient basis to remove this segment from the DOD contract management high-risk area. In addition to the two areas that we narrowed—Protecting Public Health through Enhanced Oversight of Medical Products and DOD Contract Management—nine other areas met at least one of the criteria for removal from the High Risk List and were rated at least partially met for all four of the remaining criteria. These areas serve as examples of solid progress made to address high-risk issues through implementation of our recommendations and through targeted corrective actions. Further, each example underscores the importance of high-level attention given to high-risk areas within the context of our criteria by the administration and by congressional action. To sustain progress in these areas and to make progress in other high-risk areas—including eventual removal from the High Risk List—focused leadership attention and ongoing oversight will be needed. The National Aeronautics and Space Administration’s (NASA) acquisition management was included on the original High Risk List in 1990. NASA’s continued efforts to strengthen and integrate its acquisition management functions have resulted in the agency meeting three criteria for removal from our High Risk List—leadership commitment, a corrective action plan, and monitoring.
For example, NASA has completed the implementation of its corrective action plan, which was managed by the Deputy Administrator, with the Chief Engineer, the Chief Financial Officer, and the agency's Associate Administrator having led implementation of the plan. The plan identified metrics to assess the progress of individual initiatives' implementation, which NASA continues to track and report semi-annually. These metrics include cost and schedule performance indicators for NASA's major development projects. We have found that NASA's performance metrics generally reflect improved performance. For example, average cost and schedule growth for NASA's major projects has declined since 2011, and most of NASA's major projects are tracking metrics, which we recommended in 2011 to better assess design stability and decrease risk. In addition, NASA has taken action in response to our recommendations to improve the use of earned value management—a tool designed to help project managers monitor progress—such as by conducting a gap analysis to determine whether each center has the requisite skills to effectively utilize earned value management. These actions have helped NASA to create better baseline estimates and track performance so that NASA has been able to launch more projects on time and within cost estimates. However, we found that NASA needs to continue its efforts to increase agency capacity to address ongoing issues through additional guidance and training of personnel. Such efforts should help maximize improvements and demonstrate that the improved cost and schedule performance will be sustained, even for the agency's most expensive and complex projects. Recently, a few of NASA's major projects have been rebaselining their cost estimates, schedules, or both in light of management and technical issues, which is tempering the progress of the portfolio as a whole. 
In addition, several of NASA’s largest and most complex projects, such as NASA’s human spaceflight projects, are at critical points in implementation. We have reported on several challenges that may further impact NASA’s ability to demonstrate progress in improving acquisition management. The federal government has made significant progress in promoting the sharing of information on terrorist threats since we added this issue to the High Risk List in 2003. As a result, the federal government has met our criteria for leadership commitment and capacity and has partially met the remaining criteria for this high-risk area. Significant progress was made in this area by developing a more structured approach to achieving the Information Sharing Environment (Environment) and by defining the highest priority initiatives to accomplish. In December 2012, the President released the National Strategy for Information Sharing and Safeguarding (Strategy), which provides guidance on the implementation of policies, standards, and technologies that promote secure and responsible national security information sharing. In 2013, in response to the Strategy, the Program Manager for the Environment released the Strategic Implementation Plan for the National Strategy for Information Sharing and Safeguarding (Implementation Plan). The Implementation Plan provides a roadmap for the implementation of the priority objectives in the Strategy. The Implementation Plan also assigns stewards to coordinate each priority objective—in most cases, a senior department official—and provides time frames and milestones for achieving the outcomes in each objective. Adding to this progress is the work the Environment has done to address our previous recommendations. In our 2011 report on the Environment, we recommended that key departments better define incremental costs for information sharing activities and establish an enterprise architecture management plan. 
Since then, senior officials in each key department reported that any incremental costs related to implementing the Environment are now embedded within each department’s mission activities and operations and do not require separate funding. Further, the 2013 Implementation Plan includes actions for developing aspects of an architecture for the Environment. In 2014, the program manager issued the Information Interoperability Framework, which begins to describe key elements intended to help link systems across departments to enable information sharing. Going forward, in addition to maintaining leadership commitment and capacity, the program manager and key departments will need to continue working to address remaining action items informed by our five high-risk criteria, thereby helping to reduce risks and enhance the sharing and management of terrorism-related information. The Department of Homeland Security (DHS) has continued efforts to strengthen and integrate its management functions since those issues were placed on the High Risk List in 2003. These efforts resulted in the department meeting two criteria for removal from the High Risk List (leadership commitment and a corrective action plan) and partially meeting the remaining three criteria (capacity, a framework to monitor progress, and demonstrated, sustained progress). DHS’s top leadership, including the Secretary and Deputy Secretary of Homeland Security, have continued to demonstrate exemplary commitment and support for addressing the department’s management challenges. For instance, the Department’s Under Secretary for Management and other senior management officials have routinely met with us to discuss the department’s plans and progress, which helps ensure a common understanding of the remaining work needed to address our high-risk designation. 
In April 2014, the Secretary of Homeland Security issued Strengthening Departmental Unity of Effort, a memorandum committing the agency to, among other things, improving DHS’s planning, programming, budgeting, and execution processes through strengthened departmental structures and increased capability. In addition, DHS has continued to provide updates to the report Integrated Strategy for High Risk Management, demonstrating a continued focus on addressing its high-risk designation. The integrated strategy includes key management initiatives and related corrective action plans for achieving 30 actions and outcomes, which we identified and DHS agreed are critical to addressing the challenges within the department’s management areas and to integrating those functions across the department. Further, DHS has demonstrated progress to fully address nine of these actions and outcomes, five of which it has sustained as fully implemented for at least 2 years. For example, DHS fully addressed two outcomes because it received a clean audit opinion on its financial statements for 2 consecutive fiscal years, 2013 and 2014. In addition, the department strengthened its enterprise architecture program (or technology blueprint) to guide IT acquisitions by, among other things, largely addressing our prior recommendations aimed at adding needed architectural depth and breadth. DHS needs to continue implementing its Integrated Strategy for High Risk Management and show measurable, sustainable progress in implementing its key management initiatives and corrective actions and achieving outcomes. 
In doing so, it will be important for DHS to identify and work to mitigate any resource gaps and prioritize initiatives as needed to ensure it can implement and sustain its corrective actions; closely track and independently validate the effectiveness and sustainability of its corrective actions and make midcourse adjustments as needed; and make continued progress in achieving the 21 actions and outcomes it has not fully addressed, demonstrating that systems, personnel, and policies are in place to ensure that progress can be sustained over time. DOD supply chain management is one of the six issues that have been on the High Risk List since 1990. DOD has made progress in addressing weaknesses in all three dimensions of its supply chain management: inventory management, materiel distribution, and asset visibility. With respect to inventory management, DOD has demonstrated considerable progress in implementing its statutorily mandated corrective action plan. This plan is intended to reduce excess inventory and improve inventory management practices. Additionally, DOD has established a performance management framework, including metrics and milestones, to track the implementation and effectiveness of its corrective action plan and has demonstrated considerable progress in reducing its excess inventory and improving its inventory management. For example, DOD reported that its percentage of on-order excess inventory dropped from 9.5 percent in fiscal year 2009 to 7.9 percent in fiscal year 2013. DOD calculates this percentage by dividing the amount of on-order excess inventory by the total amount of on-order inventory. In response to our 2012 recommendations on the implementation of the plan, DOD continues to re-examine its goals for reducing excess inventory, has revised its goal for reducing on-hand excess inventory (it achieved its original goal early), and is in the process of institutionalizing its inventory management metrics in policy. 
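The on-order excess metric described above is a simple ratio. The sketch below illustrates the calculation; the function name and dollar figures are illustrative assumptions, not DOD's actual inventory values.

```python
def on_order_excess_percentage(on_order_excess, total_on_order):
    """Share of on-order inventory that is excess, expressed as a percentage.

    Mirrors the metric described above: on-order excess inventory
    divided by total on-order inventory.
    """
    return 100.0 * on_order_excess / total_on_order

# Hypothetical figures (in billions of dollars), for illustration only.
pct = on_order_excess_percentage(0.79, 10.0)
print(f"{pct:.1f} percent")  # prints "7.9 percent"
```

Under these assumed figures, the result matches the 7.9 percent DOD reported for fiscal year 2013.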
DOD has also made progress in addressing its materiel distribution challenges. Specifically, DOD has implemented, or is implementing, distribution-related initiatives that could serve as a basis for a corrective action plan. For example, DOD developed its Defense Logistics Agency Distribution Effectiveness Initiative, formerly called Strategic Network Optimization, to improve logistics efficiencies in DOD’s distribution network and to reduce transportation costs. This initiative accomplishes these objectives by storing materiel at strategically located Defense Logistics Agency supply sites. Further, DOD has demonstrated significant progress in addressing its asset visibility weaknesses by taking steps to implement our February 2013 recommendation that DOD develop a strategy and execution plans that contain all the elements of a comprehensive strategic plan, including, among other elements, performance measures for gauging results. The National Defense Authorization Act for Fiscal Year 2014 required that DOD’s strategy and implementation plans for asset visibility, which were in development, incorporate, among other things, the missing elements that we identified. DOD’s January 2014 Strategy for Improving DOD Asset Visibility represents a corrective action plan and contains goals and objectives—as well as supporting execution plans—outlining specific objectives intended to improve asset visibility. DOD’s Strategy calls for organizations to identify at least one outcome or key performance indicator for assessing performance in implementing the initiatives intended to improve asset visibility. DOD has also established a structure, including its Asset Visibility Working Group, for monitoring implementation of its asset visibility improvement initiatives. Moving forward, the removal of DOD supply chain management from GAO’s High Risk List will require DOD to take several steps. 
For inventory management, DOD needs to demonstrate sustained progress by continuing to reduce its on-order and on-hand excess inventory, developing corrective actions to improve demand forecast accuracy, and implementing methodologies to set inventory levels for reparable items (i.e., items that can be repaired) with low or highly variable demand. For materiel distribution, DOD needs to develop a corrective action plan that includes reliable metrics for, among other things, identifying gaps and measuring distribution performance across the entire distribution pipeline. For asset visibility, DOD needs to (1) specify the linkage between the goals and objectives in its Strategy and the initiatives intended to implement it and (2) refine, as appropriate, its metrics to ensure they assess progress towards achievement of those goals and objectives. DOD weapon systems acquisition has also been on the High Risk List since 1990. Congress and DOD have long sought to improve the acquisition of major weapon systems, yet many DOD programs are still falling short of cost, schedule, and performance expectations. The results are unanticipated cost overruns, reduced buying power, and in some cases delays or reductions in the capability ultimately delivered to the warfighter. Our past work and prior high-risk updates have identified multiple weaknesses in the way DOD acquires the weapon systems it delivers to the warfighter, and we have made numerous recommendations on how to address these weaknesses. Recent actions taken by top leadership at DOD indicate a firm commitment to improving the acquisition of weapon systems as demonstrated by the release and implementation of the Under Secretary of Defense for Acquisition, Technology, and Logistics' "Better Buying Power" initiatives. 
These initiatives include measures such as setting and enforcing affordability constraints, instituting a long-term investment plan for portfolios of weapon systems, implementing “should cost” management to control contract costs, eliminating redundancies within portfolios, and emphasizing the need to adequately grow and train the acquisition workforce. DOD also has made progress in its efforts to assess the root causes of poor weapon system acquisition outcomes and in monitoring the effectiveness of its actions to improve its management of weapon systems acquisition. Through changes to acquisition policies and procedures, DOD has made demonstrable progress and, if these reforms are fully implemented, acquisition outcomes should improve. At this point, there is a need to build on existing reforms by tackling the incentives that drive the process and behaviors. In addition, further progress must be made in applying best practices to the acquisition process, attracting and empowering acquisition personnel, reinforcing desirable principles at the beginning of the program, and improving the budget process to allow better alignment of programs and their risks and needs. While DOD has made real progress on the issues we have identified in this area, with the prospect of slowly growing or flat defense budgets for years to come, the department must continue this progress and get better returns on its weapon system investments than it has in the past. DOD has made some progress in updating its policies to enable better weapon systems outcomes. However, even with this call for change we remain concerned about the full implementation of proposed reforms as DOD has, in the past, failed to convert policy into practice. 
In addition, although we reported in March 2014 on the progress many DOD programs are making in reducing their cost in the near term, individual weapon programs are still failing to conform to best practices for acquisition or to implement key acquisition reforms and initiatives that could prevent long-term cost and schedule growth. We added this high-risk area in 1997 and expanded it this year to include protection of PII. Although significant challenges remain, the federal government has made progress toward improving the security of its cyber assets. For example, Congress, as part of its ongoing oversight, passed five bills, which became law, for improving the security of cyber assets. The first, the Federal Information Security Modernization Act of 2014, revises the Federal Information Security Management Act of 2002 and clarifies roles and responsibilities for overseeing and implementing federal agencies' information security programs. The second law, the Cybersecurity Workforce Assessment Act, requires DHS to assess its cybersecurity workforce and develop a strategy for addressing workforce gaps. The third, the Homeland Security Cybersecurity Workforce Assessment Act, requires DHS to identify all of its cybersecurity positions and calls for the department to identify specialty areas of critical need in its cybersecurity workforce. The fourth, the National Cybersecurity Protection Act of 2014, codifies the role of DHS's National Cybersecurity and Communications Integration Center as the nexus of cyber and communications integration for the federal government, intelligence community, and law enforcement. The fifth, the Cybersecurity Enhancement Act of 2014, authorizes the Department of Commerce, through the National Institute of Standards and Technology, to facilitate and support the development of voluntary standards to reduce cyber risks to critical infrastructure. The White House and senior leaders at DHS have also committed to securing critical cyber assets. 
Specifically, the President has signed legislation and issued strategy documents for improving aspects of cybersecurity, as well as an executive order and a policy directive for improving the security and resilience of critical cyber infrastructure. In addition, DHS and its senior leaders have committed time and resources to advancing cybersecurity efforts at federal agencies and to promoting critical infrastructure sectors' use of a cybersecurity framework. However, securing cyber assets remains a challenge for federal agencies. Continuing challenges need to be addressed, including shortages of qualified cybersecurity personnel, ineffective monitoring of agencies' information security programs, and continued weaknesses in those programs. Until the White House and executive branch agencies implement the hundreds of recommendations that we and agency inspectors general have made to address cyber challenges, resolve identified deficiencies, and fully implement effective security programs and privacy practices, a broad array of federal assets and operations may remain at risk of fraud, misuse, and disruption, and the nation's most critical federal and private sector infrastructure systems will remain at increased risk of attack from adversaries. In addition to the recently passed laws addressing cybersecurity and the protection of critical infrastructures, Congress should also consider amending applicable laws, such as the Privacy Act and E-Government Act, to more fully protect PII collected, used, and maintained by the federal government. The Department of the Interior's (Interior) continued efforts to improve its management of federal oil and gas resources since we placed these issues on the High Risk List in 2011 have resulted in the department meeting one of the criteria for removal from our High Risk List—leadership commitment. 
Interior has implemented a number of strategies and corrective measures to help ensure the department collects its share of revenue from oil and gas produced on federal lands and waters. Additionally, Interior is developing a comprehensive approach to address its ongoing human capital challenges. In November 2014, Interior senior leaders briefed us on the department's commitment to address the high-risk issue area by describing the following corrective actions. To help ensure Interior collects revenues from oil and gas produced on federal lands and waters, Interior has taken steps to improve the measurement of oil and gas produced on federal leases by ensuring a link between what happens in the field (measurement and operations) and what is reported to Interior's Office of Natural Resources Revenue, or ONRR (production volumes and dispositions). To ensure that federal oil and gas leases are inspected, Interior is hiring inspectors and engineers with an understanding of metering equipment and measurement accuracy. The department has several efforts under way to ensure that oil and gas are accurately measured and reported. For example, ONRR contracted for a study to automate data collection from production metering systems. In 2012, the Bureau of Safety and Environmental Enforcement hired and provided measurement training to a new measurement inspection team. To better ensure a fair return to the federal government from leasing and production activities on federal offshore leases, Interior raised royalty rates, minimum bids, and rental rates. For onshore federal leases, according to Interior's November 2014 briefing document, ONRR's Economic Analysis Office will provide the Bureau of Land Management (BLM) monthly analyses of global and domestic market conditions as BLM initiates a rulemaking effort to provide greater flexibility in setting onshore royalty rates. 
To address the department’s ongoing human capital challenges, Interior is working with the Office of Personnel Management to establish permanent special pay rates for critical energy occupations in key regions, such as the Gulf of Mexico. Bureau managers are being trained on the use of recruitment, relocation, and retention incentives to improve hiring and retention. Bureaus are implementing or have implemented data systems to support the accurate capture of hiring data to address delays in the hiring process. Finally, Interior is developing strategic workforce plans to assess the critical skills and competencies needed to achieve current and future program goals. To address its revenue collection challenges, Interior will need to identify the staffing resources necessary to consistently meet its annual goals for oil and gas production verification inspections. Interior needs to continue meeting its time frames for updating regulations related to oil and gas measurement and onshore royalty rates. It will also need to provide reasonable assurance that oil and gas produced from federal leases is accurately measured and that the federal government is getting an appropriate share of oil and gas revenues. To address its human capital challenges, Interior needs to consider how it will address staffing shortfalls over time in view of continuing hiring and retention challenges. It will also need to implement its plans to hire additional staff with expertise in inspections and engineering. Interior needs to ensure that it collects and maintains complete and accurate data on hiring times—such as the time required to prepare a job description, announce the vacancy, create a list of qualified candidates, conduct interviews, and perform background and security checks—to effectively implement changes to expedite its hiring process. 
The Centers for Medicare & Medicaid Services (CMS), in the Department of Health and Human Services (HHS), administers Medicare, which has been on the High Risk List since 1990. CMS has continued to focus on reducing improper payments in the Medicare program, which has resulted in the agency meeting our leadership commitment criterion for removal from the High Risk List and partially meeting our other four criteria. HHS has demonstrated top leadership support for addressing this risk area by continuing to designate "strengthened program integrity through improper payment reduction and fighting fraud" as an HHS strategic priority, and, through its dedicated Center for Program Integrity, CMS has taken multiple actions to improve in this area. For example, as we recommended in November 2012, CMS centralized the development and implementation of automated edits—prepayment controls used to deny Medicare claims that should not be paid—based on a type of national policy called national coverage determinations. Such action will ensure greater consistency in paying only those Medicare claims that are consistent with national policies. In addition, CMS has taken action to implement provisions of the Patient Protection and Affordable Care Act that Congress enacted to combat fraud, waste, and abuse in Medicare. For instance, in March 2014, CMS awarded a contract to a Federal Bureau of Investigation-approved contractor that will enable the agency to conduct fingerprint-based criminal history checks of high-risk providers and suppliers. This and other provider screening procedures will help block the enrollment of entities intent on committing fraud. CMS has made positive strides, but more needs to be done to fully meet our criteria. 
For example, CMS has demonstrated leadership commitment by taking actions such as strengthening provider and supplier enrollment provisions and improving its prepayment and postpayment claims review process in the fee-for-service (FFS) program. However, all parts of the Medicare program are on the Office of Management and Budget's list of high-error programs, suggesting additional actions are needed. By implementing our open recommendations, CMS may be able to reduce improper payments and make progress toward fulfilling the four outstanding criteria to remove Medicare improper payments from our High Risk List. The following summarizes open recommendations and procedures authorized by the Patient Protection and Affordable Care Act that CMS should implement to make progress toward fulfilling those criteria. CMS should require a surety bond for certain types of at-risk providers and suppliers; publish a proposed rule for increased disclosures of prior actions taken against providers and suppliers enrolling or revalidating enrollment in Medicare, such as whether the provider or supplier has been subject to a payment suspension from a federal health care program; establish core elements of compliance programs for providers and improve automated edits that identify services billed in medically unlikely amounts; develop performance measures for the Zone Program Integrity Contractors that explicitly link their work to the agency's Medicare FFS program integrity performance measures and improper payment reduction goals; reduce differences between contractor postpayment review requirements, when possible; monitor the database used to track Recovery Auditors' activities to ensure that all postpayment review contractors are submitting required data and that the data the database contains are accurate and complete; require Medicare administrative contractors to share information about the underlying 
policies and savings related to their most effective edits; and efficiently and cost-effectively identify, design, develop, and implement an information technology solution that addresses the removal of Social Security numbers from Medicare beneficiaries' health insurance cards. The National Oceanic and Atmospheric Administration (NOAA) has made progress toward improving its ability to mitigate gaps in weather satellite data since the issue was placed on the High Risk List in 2013. NOAA has demonstrated leadership on both its polar-orbiting and geostationary satellite programs by making decisions on how it plans to mitigate anticipated and potential gaps and by making progress on multiple mitigation-related activities. In addition, the agency implemented our recommendations to improve its polar-orbiting and geostationary satellite gap contingency plans. Specifically, in September 2013, we recommended that NOAA establish a comprehensive contingency plan for potential polar satellite data gaps that was consistent with contingency planning best practices. In February 2014, NOAA issued an updated plan that addressed many, but not all, of the best practices. For example, the updated plan includes additional contingency alternatives; accounts for additional gap scenarios; identifies mitigation strategies to be executed; and identifies specific activities for implementing those strategies along with associated roles and responsibilities, triggers, and deadlines. In addition, in September 2013, we reported that while NOAA had established contingency plans for the loss of geostationary satellites, these plans did not address user concerns over potential reductions in capability and did not identify alternative solutions and timelines for preventing a delay in the Geostationary Operational Environmental Satellite-R (GOES-R) launch date. We recommended the agency revise its contingency plans to address these weaknesses. 
In February 2014, NOAA released a new satellite contingency plan that improved in many, but not all, of the best practices. For example, the updated plan clarified requirements for notifying users regarding outages and impacts and provided detailed information on responsibilities for each action in the plan. NOAA has demonstrated leadership commitment in addressing data gaps of its polar-orbiting and geostationary weather satellites by making decisions about how to mitigate potential gaps and by making progress in implementing multiple mitigation activities. However, capacity concerns—including computing resources needed for some polar satellite mitigation activities and the limited time available for integration and testing prior to the scheduled launch of the next geostationary satellite—continue to present challenges. In addition, while both programs have updated their satellite contingency plans, work remains to implement and oversee efforts to ensure that mitigation plans will be viable if and when they are needed. Overall, the government continues to take high-risk problems seriously and is making long-needed progress toward correcting them. Congress has acted to address several individual high-risk areas through hearings and legislation. Our high-risk update and high-risk website, http://www.gao.gov/highrisk/, can help inform the oversight agenda for the 114th Congress and guide efforts of the administration and agencies to improve government performance and reduce waste and risks. In support of Congress and to further progress to address high-risk issues, we continue to review efforts and make recommendations to address high-risk areas. Continued perseverance in addressing high-risk areas will ultimately yield significant benefits. Thank you, Chairman Johnson, Ranking Member Carper, and Members of the Committee. This concludes my testimony. I would be pleased to answer any questions. For further information on this testimony, please contact J. 
Christopher Mihm at (202) 512-6806 or [email protected]. Contact points for the individual high-risk areas are listed in the report and on our high-risk web site. Contact points for our Congressional Relations and Public Affairs offices may be found on the last page of this statement. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The federal government is one of the world's largest and most complex entities; about $3.5 trillion in outlays in fiscal year 2014 funded a broad array of programs and operations. GAO maintains a program to focus attention on government operations that it identifies as high risk due to their greater vulnerabilities to fraud, waste, abuse, and mismanagement or the need for transformation to address economy, efficiency, or effectiveness challenges. Since 1990, more than one-third of the areas previously designated as high risk have been removed from the list because sufficient progress was made in addressing the problems identified. The five criteria for removal are: (1) leadership commitment, (2) agency capacity, (3) an action plan, (4) monitoring efforts, and (5) demonstrated progress. This biennial update describes the status of high-risk areas listed in 2013 and identifies new high-risk areas needing attention by Congress and the executive branch. Solutions to high-risk problems offer the potential to save billions of dollars, improve service to the public, and strengthen government performance and accountability. Solid, steady progress has been made in the vast majority of the high-risk areas. Eighteen of the 30 areas on the 2013 list at least partially met all of the criteria for removal from the High Risk List. Of those, 11 met at least one of the criteria for removal and partially met all others. Sufficient progress was made to narrow the scope of two high-risk issues—Protecting Public Health through Enhanced Oversight of Medical Products and DOD Contract Management. Overall, progress has been possible through the concerted actions of Congress, leadership and staff in agencies, and the Office of Management and Budget. This year GAO is adding 2 areas, bringing the total to 32. Managing Risks and Improving Veterans Affairs (VA) Health Care. GAO has reported since 2000 about VA facilities' failure to provide timely health care. 
In some cases, these delays or VA's failure to provide care at all have reportedly harmed veterans. Although VA has taken actions to address some GAO recommendations, more than 100 of GAO's recommendations have not been fully addressed, including recommendations related to the following areas: (1) ambiguous policies and inconsistent processes, (2) inadequate oversight and accountability, (3) information technology challenges, (4) inadequate training for VA staff, and (5) unclear resource needs and allocation priorities. The recently enacted Veterans Access, Choice, and Accountability Act included provisions to help VA address systemic weaknesses. VA must effectively implement the act. Improving the Management of Information Technology (IT) Acquisitions and Operations. Congress has passed legislation and the administration has undertaken numerous initiatives to better manage IT investments. Nonetheless, federal IT investments too frequently fail to be completed or incur cost overruns and schedule slippages while contributing little to mission-related outcomes. GAO has found that the federal government spent billions of dollars on failed and poorly performing IT investments which often suffered from ineffective management, such as project planning, requirements definition, and program oversight and governance. Over the past 5 years, GAO made more than 730 recommendations; however, only about 23 percent had been fully implemented as of January 2015. GAO is also expanding two areas due to evolving high-risk issues. Enforcement of Tax Laws. This area is expanded to include IRS's efforts to address tax refund fraud due to identity theft. IRS estimates it paid out $5.8 billion (the exact number is uncertain) in fraudulent refunds in tax year 2013 due to identity theft. This occurs when a thief files a fraudulent return using a legitimate taxpayer's identifying information and claims a refund. 
Ensuring the Security of Federal Information Systems and Cyber Critical Infrastructure and Protecting the Privacy of Personally Identifiable Information (PII). This risk area is expanded because of the challenges to ensuring the privacy of personally identifiable information posed by advances in technology. These advances have allowed both government and private sector entities to collect and process extensive amounts of PII more effectively. The number of reported security incidents involving PII at federal agencies has increased dramatically in recent years. This report contains GAO's views on progress made and what remains to be done to bring about lasting solutions for each high-risk area. Perseverance by the executive branch in implementing GAO's recommended solutions and continued oversight and action by Congress are essential to achieving greater progress.
DOD acquisition policy defines an acquisition program as a directed, funded effort that provides a new, improved, or continuing materiel, weapon, or information system, or a service capability in response to an approved need. As shown in table 1, defense acquisition programs are classified into acquisition categories that depend on the value and type of acquisition. The Army, Navy, Air Force, and SOCOM also have supplemental acquisition policies that address certain aspects of acquisition program categorization and management. ACAT II and III programs encompass a wide range of efforts and program sizes. Programs may range from an ACAT II program with a total acquisition cost of more than $3 billion to an ACAT III program with an acquisition cost in the millions of dollars or lower. DOD’s acquisition policy does not establish a minimum cost for ACAT III programs. The level of oversight for acquisition programs varies based on the assigned ACAT level. DOD and component acquisition policies specify the organizational level of the milestone decision authority—the designated individual with overall responsibility for a program—for each ACAT level. The organizational level at which program requirements and requirements changes are approved may vary by ACAT level as well. The organizational level of the milestone decision authority for Air Force ACAT I-III programs is shown in figure 1 as an example. All acquisition programs are required by statute or DOD guidance to provide program information at milestones and other decision points, although these requirements differ by ACAT level. MDAP and MAIS programs, also known as ACAT I and IA programs, require more documentation and analysis to support program decisions and have to regularly report to Congress on their cost, schedule, and technical performance. 
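The dollar-value categorization described above can be illustrated with a short sketch. The ACAT I thresholds used here ($480 million RDT&E and $2.79 billion procurement, fiscal year 2014 constant dollars) are the figures this report cites; the ACAT II floors are assumptions included only for illustration, and the functions are a hypothetical simplification of DOD's actual criteria, which also consider the type of acquisition and designation decisions.

```python
# Hypothetical sketch of ACAT categorization by dollar value alone.
# The ACAT I thresholds are the FY2014 constant-dollar figures cited in
# this report; the ACAT II floors are illustrative placeholders. Actual
# categorization also depends on acquisition type and MDA designation.

ACAT_I_RDTE = 480_000_000       # RDT&E threshold for ACAT I (MDAP)
ACAT_I_PROC = 2_790_000_000     # procurement threshold for ACAT I (MDAP)
ACAT_II_RDTE = 185_000_000      # illustrative ACAT II floor (assumption)
ACAT_II_PROC = 835_000_000      # illustrative ACAT II floor (assumption)

def classify_acat(rdte: float, procurement: float) -> str:
    """Assign an ACAT level from estimated costs (simplified)."""
    if rdte >= ACAT_I_RDTE or procurement >= ACAT_I_PROC:
        return "ACAT I"
    if rdte >= ACAT_II_RDTE or procurement >= ACAT_II_PROC:
        return "ACAT II"
    # DOD policy sets no cost floor for ACAT III.
    return "ACAT III"

def approaching_acat_i(rdte: float, procurement: float,
                       margin: float = 0.10) -> bool:
    """Flag a program within a margin (default 10 percent) of an
    ACAT I threshold, the kind of screen applied later in this report."""
    return (rdte >= ACAT_I_RDTE * (1 - margin) or
            procurement >= ACAT_I_PROC * (1 - margin))
```

A screen like `approaching_acat_i` is one way a component could routinely watch for ACAT II programs at risk of crossing into MDAP territory.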
These programs are required to enter and maintain program cost, schedule, and performance data and create APBs within DOD’s Defense Acquisition Management Information Retrieval (DAMIR) system, a web-based data system intended to provide data transparency of acquisition management information across DOD. Components may use DAMIR for other programs, but it is not required. Appendix II provides additional detail on acquisition documentation requirements and congressional reporting requirements by ACAT level. DOD components could not provide sufficiently reliable data for us to accurately determine the number, total cost, or performance of DOD’s current ACAT II and III programs. We found that the accuracy, completeness, and consistency of DOD’s data were undermined by (1) widespread data entry issues and missing data, and (2) inconsistent identification of current ACAT II and III programs across and within components. DOD components have taken some steps to improve ACAT II and III program data, but their efforts do not fully address the causes of the problems we identified. In addition to data reliability problems, DOD lacks consistent cost and schedule metrics across components to assess ACAT II and III program performance. Further, the lack of baseline cost and schedule data and comparable schedule milestones prevents DOD from consistently measuring the performance of ACAT II and III programs. Taken together, these issues limit the utility of DOD’s data on ACAT II and III programs for oversight, decision-making, and reporting purposes. We identified data reliability issues related to accuracy, completeness, or consistency with data for about 60 percent of ACAT II and III programs reported to us by DOD components. These issues prevented us from accurately determining the number, total cost, or performance of DOD’s current ACAT II and III programs. According to DOD acquisition policy, complete and current program information is essential to the acquisition process. 
Internal control standards for federal executive branch agencies also emphasize that agencies should have relevant, reliable, and timely information for decision-making and external reporting purposes. We found obvious accuracy and completeness issues in program cost data for ACAT II and III programs reported to us by DOD components. We also observed consistency issues in program data across components and within some components that affected the comparability of the data. Inaccurate data were evident across all of the components; issues we observed included reported dollar values outside the range of ACAT II and III programs and basic math errors. Inaccuracies like these suggest overall data quality problems. Further, when we reviewed a sample of programs and compared reported cost estimates to source documents, we found that cost estimate data was often misreported. Components incorrectly reported data or data was missing for 64 out of 95 programs for which we had complete source documents in our non-generalizable sample. We also observed missing data elements to varying degrees at all of the components except CBDP. For example, 333 out of 836 programs reported by the components were missing one or more cost estimate elements or basic information such as the ACAT level. Lastly, in numerous instances components did not follow the instructions of the data collection instrument, which also affected our ability to use the data. See table 2 for examples of the reliability issues we identified. In some instances, the accuracy and completeness issues we identified were consistent with known limitations of information in component systems. For example, Army acquisition officials told us that in the past there has been conflicting guidance about whether completed programs should be deleted from the Army’s acquisition information system, and some completed programs were never removed from the system. 
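Checks for the kinds of problems described above—missing data elements, dollar values outside a plausible range, and basic math errors—can be automated with simple validation rules. The sketch below is purely illustrative; the field names are assumptions, not the schema of any component information system.

```python
# Hypothetical sketch of basic data-quality tests of the kind discussed:
# missing elements, implausible dollar values, and internal math errors.
# Field names are illustrative, not drawn from any component system.

REQUIRED_FIELDS = ["name", "acat_level", "rdte_cost",
                   "procurement_cost", "total_cost"]

def validate_record(record: dict) -> list[str]:
    """Return a list of data-quality issues found in one program record."""
    issues = []
    # 1. Completeness: flag missing or empty required elements.
    for field in REQUIRED_FIELDS:
        if record.get(field) in (None, ""):
            issues.append(f"missing {field}")
    # 2. Range check: negative or absurdly large values suggest entry errors.
    for field in ("rdte_cost", "procurement_cost", "total_cost"):
        value = record.get(field)
        if isinstance(value, (int, float)) and not (0 <= value < 1e12):
            issues.append(f"{field} out of plausible range")
    # 3. Basic math: component costs should not exceed the reported total.
    rdte, proc, total = (record.get(k) for k in
                         ("rdte_cost", "procurement_cost", "total_cost"))
    if all(isinstance(v, (int, float)) for v in (rdte, proc, total)):
        if rdte + proc > total * 1.01:  # small tolerance for rounding
            issues.append("component costs exceed reported total")
    return issues
```

Running tests like these systematically on submitted data, rather than relying on ad hoc checks, is the sort of practice the report finds lacking.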
Officials at all components, except CBDP, further told us that accuracy of data in their systems relies primarily on the quality of information submitted by the Program Executive Officer (PEO) or program offices. Army, Navy, and Air Force acquisition officials told us that they work with PEOs to address data quality problems, such as by conducting ad hoc checks to flag obvious errors or missing data elements and following up with PEOs as necessary. SOCOM acquisition officials told us the Acquisition Executive emphasizes the importance of maintaining up-to-date data to program managers. Based on the data provided to us by DOD components, these existing data quality practices are insufficient to ensure the accuracy and completeness of data on DOD’s ACAT II and III programs as required by DOD policy and federal internal control standards. Data provided in response to our request for information on current ACAT II and III programs (1) included acquisitions that were not “current” ACAT II and III programs in accordance with our definition, (2) likely reflected inconsistent reporting of acquisitions at lower dollar levels across components, and (3) likely excluded certain acquisitions considered current ACAT II and III programs in accordance with certain component acquisition policies. For the purposes of our report, we define a current program as one that has been formally initiated in the acquisition process, but has not yet delivered 90 percent of its items or made 90 percent of planned expenditures. This definition is consistent with statutory reporting thresholds used by Congress in its reporting requirements for current MDAP programs. Inconsistent interpretations of what constitutes a current program and the inability of some components to reliably identify these programs contributed to the inclusion of programs that were not current in the data that components reported to us. 
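The 90 percent definition reduces to a simple predicate. A minimal sketch, assuming delivery and expenditure figures are available for each program:

```python
# Minimal sketch of the report's "current program" definition: a program
# that has been formally initiated but has delivered less than 90 percent
# of its items and expended less than 90 percent of planned funds.

def is_current_program(items_delivered: int, items_planned: int,
                       funds_expended: float, funds_planned: float) -> bool:
    delivered_share = items_delivered / items_planned if items_planned else 1.0
    expended_share = funds_expended / funds_planned if funds_planned else 1.0
    # A program past either 90 percent threshold is no longer "current".
    return delivered_share < 0.9 and expended_share < 0.9
```

In practice, applying even this simple test requires the delivery and expenditure data that many component systems could not reliably supply.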
DOD acquisition policy and component guidance generally do not define which ACAT II or III programs are considered to be current for management and reporting purposes. For example, DOD acquisition policy defines when a program is formally initiated and its operations and support phase begins, but it does not identify when a program should be considered current. Further, some components told us PEOs may have different interpretations of what constitutes a current program. Components also told us that they could not consistently use the information in their data systems to readily identify current programs. Of the 836 programs initially reported by the five components, we identified 199 programs across the Army, Navy, Air Force, and SOCOM that should not have been included because they did not meet our criteria for current programs. For example, according to an Army PEO, 90 of 140 programs originally reported to us as current ACAT II or III programs by the Army were not current based on our definition because these programs had delivered more than 90 percent of planned items or expended more than 90 percent of planned funds. Without the consistent identification of current ACAT II and III programs, DOD and component officials do not know the accurate number of these programs and may miss opportunities to identify programs that may need more or less oversight depending on whether or not most of the anticipated acquisition funding has been spent. Additionally, component guidance for defining acquisition programs varies and likely resulted in inconsistent reporting across components of acquisitions at lower dollar values. DOD acquisition policy establishes a cost ceiling and cost floor for ACAT II programs and a cost ceiling, but no cost floor, for ACAT III programs. However, some components have supplemental guidance that establishes additional acquisition categories or exclusions. 
For example, SOCOM and Navy have guidance that provides for certain acquisitions with less than $10 million in total RDT&E contracts, less than $25 million per year in annual procurement funding, and less than $50 million total in procurement contracts to be categorized as non-ACAT programs. Specifically, SOCOM designates these low cost, schedule, and technical risk efforts to field special operations-peculiar capabilities as abbreviated acquisition projects. Similarly, the Navy designates lower dollar value programs that do not require operational testing and evaluation as abbreviated acquisition programs. Army and Air Force policies do not provide for lower dollar threshold categories beneath the ACAT III level. According to officials from the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics, acquisition programs at these dollar levels should be reported as ACAT III programs. However, Navy and SOCOM did not include their non-ACAT programs in the ACAT II and III program data reported to us. There was also variation within components as to how acquisition programs were categorized. Army and Air Force acquisition officials told us that some programs that should have been considered ACAT II or III programs in accordance with component acquisition policy were not. These officials also told us that PEOs may have counted and handled programs differently in the absence of a clear definition of what should be considered a program of record. For example, Army officials told us that categorizing information technology programs was sometimes challenging, and they have worked with PEOs to review the categorization of certain information technology programs. Army and Air Force officials told us that as a result of confusion among PEOs about whether or not certain programs should be considered ACAT II or III programs, they have needed to add and remove numerous ACAT II and III programs from component information systems over the past year. 
The types of issues we identified may have also contributed to components reporting varying numbers of ACAT II and III programs in response to different requests for information during the same time frame. Specifically, concurrent with reporting 755 current ACAT II and III programs to us, the Army, Navy, and Air Force reported 1,360 ACAT II and III programs in a presentation to the DOD Business Senior Integration Group, which is chaired by the Under Secretary of Defense for Acquisition, Technology, and Logistics and oversees DOD’s Better Buying Power initiatives. Acquisition officials from these components told us they were unable to fully explain the reasons for the difference between the numbers of programs reported. The Army, Navy, Air Force, and SOCOM have established information systems to track cost and schedule data for ACAT II and III programs and taken some steps to address issues related to the completeness and accuracy of information tracked in these systems. CBDP officials told us they recognize the value of establishing a system to track data on ACAT II and III programs and are determining the capabilities that would be needed in such a system. Specifics of component efforts follow: According to Navy officials, data in the Navy’s Research, Development & Acquisition Information System has potentially been incomplete because ACAT II and III program data has not been consistently entered into the system. The Navy issued an updated policy in August 2014 that requires input of programmatic information into the system for all ACAT programs. All Air Force ACAT II and III programs have been required to enter cost and schedule data into the System Metric and Reporting Tool since 2012, but Air Force officials told us that not all programs had complied with the requirement. They told us in June 2014 that they had an ongoing effort to review ACAT II and III programs in the system, including assessing whether cost and schedule data has been populated. 
Further, the Air Force has established an investment master list that will capture all programs receiving RDT&E and procurement funding. SOCOM requires that all ACAT II and III programs enter program and cost data into its centralized acquisition portal data system; however, SOCOM officials told us some program managers have been more diligent than others in populating the system. These officials told us that data from the portal is now used to conduct monthly program reviews, rather than having the program prepare briefing slides, as a way to encourage program managers to populate and regularly update the system. Components have also taken steps to improve the consistency with which they identify current ACAT II and III programs. For example, officials at four of the five components told us they are exploring ways to identify the acquisition phase—technology maturation and risk reduction, engineering and manufacturing development, production and deployment, operations and support—for programs in component information systems, which could improve their ability to reliably identify current programs. The Air Force and Army have also made efforts to address concerns they had previously identified related to the consistent identification of ACAT II and III programs within their components. The Air Force issued guidance in January 2014 detailing which programs are and are not considered to be acquisition programs, and acquisition policy officials told us they have been meeting with individual PEOs to clarify any misunderstandings about how acquisition programs should be categorized. Officials from the Office of the Assistant Secretary of the Army for Acquisition, Logistics, and Technology also told us they are in the process of revising the Army’s acquisition guidance, with input from PEOs, to more precisely define the types of programs that should and should not be considered to be acquisition programs. 
However, the components’ efforts do not fully address the accuracy, completeness, and consistency issues we identified with ACAT II and III program data. For example, the components have not established systematic processes to perform data quality tests on PEO-submitted data and assess the results to help identify problems, such as basic math errors or missing data, for further review. These types of tests and assessments can be an important step in determining whether data can be used for its intended purposes. Additionally, the components have not developed plans that detail how they will implement or sustain data improvement efforts. For example, the components have not developed implementation steps for assessing data reliability on an ongoing basis or metrics to assess the success of data cleanup efforts. Developing such plans is a key project management practice. Without establishing this planning foundation, the components will not be in a sound position to effectively monitor and evaluate the implementation of efforts to improve component data. Finally, efforts to consistently identify current ACAT II and III programs have been focused within individual components. Without a consistent understanding about which programs should be considered to be current ACAT II or III programs across components, similar programs will continue to be reported on differently, thereby limiting the consistency and comparability of ACAT II and III program data across DOD. DOD components lack consistent cost and schedule performance metrics to assess performance trends across ACAT II and III programs. As part of the department’s Better Buying Power initiatives, the Under Secretary of Defense for Acquisition, Technology, and Logistics instructed DOD components to determine how best to measure performance trends for non-ACAT I programs. 
Federal internal control standards also emphasize the importance of comparing actual performance to planned or expected results throughout the organization to help ensure effective results are achieved and actions are taken to address risks. The Army, Navy, and Air Force briefed DOD’s Business Senior Integration Group in November 2013 on their current efforts and plans regarding assessing ACAT II and III cost and schedule performance, but no specific follow-on actions or action plans have been developed. Unlike MDAPs and MAIS programs, ACAT II and III programs are not required to report cost and schedule data in a consistent fashion, despite the potential benefits of such reporting. MDAP and MAIS programs are required to report key cost and schedule metrics to Congress in a standardized format through Selected Acquisition Reports and MAIS Annual Reports, respectively. Cost and schedule data for these reports are pulled from DOD’s web-based DAMIR system. MDAP cost and schedule data are used by DOD for its annual assessment of the performance of the defense acquisition system, which the department uses to improve acquisition program performance and inform policy and programmatic decisions. ACAT II and III programs are not required to produce similar cost and schedule reporting as larger programs and do not have to provide program data in DAMIR. According to officials from the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics, although minor adjustments may be needed for reporting purposes, there is nothing that prevents components from using DAMIR to capture data on ACAT II and III programs, and acquisition officials from the Army have considered using it. Additionally, CBDP officials told us they would consider DAMIR when exploring potential systems to track ACAT II and III program data. DOD component officials told us they are not yet sure how best to measure cost and schedule performance across ACAT II and III programs. 
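Standardized metrics of the kind being considered need not be elaborate; percent cost growth and milestone slip measured against the original APB, for example, could be computed uniformly across programs. The sketch below is a hypothetical illustration, not DAMIR's actual data model or any component's adopted metric.

```python
# Hypothetical sketch of standardized performance metrics measured
# against an original acquisition program baseline (APB). Field names
# are illustrative and do not reflect DAMIR or any component system.
from datetime import date

def cost_growth_pct(apb_cost: float, current_cost: float) -> float:
    """Percent growth of the current estimate over the original APB cost."""
    return (current_cost - apb_cost) / apb_cost * 100.0

def schedule_slip_months(apb_milestone: date, current_milestone: date) -> int:
    """Approximate slip, in months, of a key milestone against the APB."""
    return ((current_milestone.year - apb_milestone.year) * 12
            + (current_milestone.month - apb_milestone.month))
```

For instance, `cost_growth_pct(100.0, 115.0)` yields 15.0 percent growth. Both metrics presuppose exactly what the report found missing for many programs: an original APB and comparable milestone definitions.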
For example, Army officials told us that analysis of component-wide ACAT II and III performance trends may not make sense given the differences across programs. Navy, SOCOM, Air Force, and CBDP officials told us they are interested in tracking cost and schedule performance trends across ACAT II or III programs, but are still working to define performance metrics and address limitations in existing data or reporting capabilities. For example, the Air Force has attempted to assess cost and schedule performance for a subset of ACAT II and III programs, but acquisition officials noted that the process was very resource-intensive, and they had concerns about the reliability of the cost and schedule information used in their analysis given the lack of variability in program performance over time. While the components have developed oversight mechanisms to review individual ACAT II and III program performance, such as through periodic program status reviews at the PEO level and program or portfolio reviews by senior component acquisition officials, without assessing performance trends across ACAT II and III programs, DOD and its components may be missing opportunities to identify and analyze differences between actual and expected performance and develop strategies to address related risks throughout the department. When we analyzed information available on cost and schedule performance, we determined that we could not assess cost performance for 139 programs out of a non-generalizable sample of 170 programs and schedule performance for 105 of the 170 programs. In addition to missing or misreported cost data, we identified two challenges to measuring cost and schedule performance trends for ACAT II and III programs: (1) programs without available APBs and (2) a lack of consistent and comparable key schedule milestones across programs. 
See figure 2 for a summary of our assessment of the data available to measure cost and schedule performance and appendix III for additional details. For 75 of the 170 programs that we examined in detail, we could not assess cost or schedule performance because DOD components had not developed, or did not provide, an original APB, a current one, or both. The components were unable to provide APBs for various reasons, such as because they could not locate the original or an APB was not developed at program start or to this point in the life of the program. APBs are critical management tools that establish how systems will perform, when they will be delivered, and what they will cost. According to DOD acquisition policy, APBs are required of all acquisition programs, and Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics officials told us they generally expect all acquisition programs to have one. An APB is first developed prior to entry into system development (Milestone B), or at program initiation, whichever occurs later. APBs may be revised at the time of significant program decisions, such as milestones, or as a result of major program changes or breaches to cost, schedule, or performance parameters. Table 3 shows the number of missing APBs by component in our sample. Thirteen of the 15 ACAT II or III programs we reviewed in-depth had exceeded the cost or schedule targets in their original APBs. These programs cited changing requirements, testing issues, quantity changes, and flaws in original cost estimates, among other factors, as the reasons for cost and schedule growth. The programs we reviewed cited other factors, such as a reliance on mature technology—including commercial or government off-the-shelf or other non-developmental items—and early involvement of stakeholders or users as contributing to reduced risk of cost or schedule growth. We have previously reported that similar factors affect the performance of DOD’s MDAPs. 
Appendix IV provides additional details about the programs we reviewed. Thirteen of the 15 ACAT II or III programs we reviewed in-depth had exceeded the cost or schedule targets in their original APBs. We did not attempt to quantify the extent to which these programs had exceeded cost or schedule targets due to overall concerns about the reliability of ACAT II and III cost and schedule data and because not all of these programs had developed APBs at program start. These programs most frequently attributed cost growth or schedule delays to changing requirements. Testing issues, quantity changes, and flaws in original cost estimates were also cited by at least 5 of the 13 programs as contributing to cost growth or schedule delays. All but 1 of the 13 programs cited multiple causes for cost growth or schedule delays, including factors beyond those listed in table 4. Requirements changes were associated with cost growth or schedule delays by at least one program at each of the five components in our review. According to program officials, programs added or increased requirements due to situations such as: adding capability to a new platform that had not been planned for when the original requirements were approved; creating additional variants to meet requirements that emerged after the original requirements were approved; or making improvements or refinements to a system in development or production as a result of changes in the operational environment, including new threats. For example, officials from the Army’s Synthetic Environment Core program, which is providing the Army a common virtual environment that links virtual simulators and simulations into an integrated and interoperable training environment, told us that increasing terrain database requirements to meet additional training needs have contributed to program cost increases significant enough to require the program to be recategorized from an ACAT III to an ACAT II program. 
Program officials stated that in some cases, the additional requirements have been unrealistic from either a cost or technological perspective, but that historically there had not been an effective process to prioritize requirements or enforce capability tradeoffs. Table 5 provides additional examples from our case studies of factors cited by program offices as contributing to cost growth or schedule delays. We have previously reported that similar factors have negatively affected the cost and schedule performance of MDAPs. DOD’s weapons system programs often enter the acquisition process without a full understanding of requirements, and we have reported numerous times that requirements changes or changes to designs to meet requirements are factors in poor cost and schedule outcomes. Additionally, in part due to high levels of uncertainty about requirements, program cost estimates and their related funding needs are often flawed. For example, in 2008 we assessed cost estimates for 20 MDAPs and found that the estimates were too low in most cases and that in some programs, cost estimates were off by billions of dollars. Programs often lacked the knowledge and detail to develop sound cost estimates, which effectively set programs up for cost growth and schedule delays. Program officials for the ACAT II and III programs we reviewed most frequently cited the reliance on mature technology—including commercial or government off-the-shelf or other non-developmental items—and early involvement of stakeholders or users as factors that helped to reduce the risk of cost or schedule growth. Both of these factors were cited by 5 or more of the 15 ACAT II or III program offices we reviewed. In some cases, these factors were cited by programs that experienced cost growth or schedule delays, for example, because one of these factors may have helped a program partially recover from a cost or schedule breach or keep initial program costs lower or schedules shorter than otherwise would be expected. 
Reliance on existing mature technologies was a relevant factor for the two programs we reviewed that did not report cost growth or schedule delays, and the most frequently cited factor contributing to reduced risk of cost or schedule growth among all of the programs we reviewed. The two programs we reviewed that appeared to be on track to meet original cost and schedule targets—the Army’s 5.56 millimeter Enhanced Performance Round program and SOCOM’s Nonstandard Aviation program—relied on modified commercial off-the-shelf equipment or modified existing military service equipment or assets. The Army’s 5.56 millimeter Enhanced Performance Round was an incremental engineering change to replace the Army’s general purpose 5.56 millimeter bullet with a new bullet design, which features a copper slug and exposed hardened steel penetrator. SOCOM’s Nonstandard Aviation program acquires, modifies, fields, and sustains commercial aircraft to transport special operations forces. The use of mature technologies was also cited as contributing to reduced risk of cost or schedule growth by 6 of the 13 other programs we reviewed. For example, according to program documentation for the Air Force’s F-15E Radar Modernization Program, the program planned to leverage existing commercial and government off-the-shelf technology from other fighter aircraft radar systems and the maturity of these technologies significantly lowered program development risk and costs. Early stakeholder or user involvement was cited by 5 of the 15 programs we reviewed as contributing to reduced risk of cost or schedule growth, including 1 of the 2 programs that did not experience cost growth or schedule delays. For example, officials with the Army’s 5.56 millimeter Enhanced Performance Round program noted that constant communication with all stakeholders, engineers, testers, and contractors was essential and a key success factor for the program. 
Similarly, program officials for CBDP’s Dismounted Reconnaissance Sets, Kits, and Outfits program—which provides protective equipment for chemical, biological, radiological, or nuclear hazards—told us that the participation of all of the military services at the beginning of the program helped to keep the program’s cost and schedule on track. According to program officials, they integrated user input from the outset, including in developing the concept of operations, which reduced the number of later requirements changes. At the time of our review, the program was on track to meet its original schedule targets. The program’s unit cost also decreased between the start of development and production. We have previously reported that similar factors appear to positively affect the cost and schedule performance of MDAPs. For example, in 2010, we reported on MDAPs that appeared to be stable and on track to meet their original cost and schedule targets. We found that the stable programs we reviewed leveraged mature technologies that had been demonstrated to work in relevant or realistic environments, and either did not consider immature technologies or deferred immature technologies to later program increments. We also reported in 2012 that early stakeholder involvement in pre-system development reviews helped facilitate trade-offs among cost, schedule, and technical performance requirements. For example, by involving both the requirements and acquisition communities in these reviews, the Army was able to identify trade-offs that reduced the projected unit costs for the Joint Light Tactical Vehicle without impairing operational needs. 
Data provided by DOD components indicated that at least five current ACAT II programs were approaching or had exceeded ACAT I cost thresholds as of November 2013, though DOD component officials told us that most were not expected to become MDAPs. We could not identify with certainty the number of programs likely to become MDAPs because of data reliability issues related to identifying the population of ACAT II and III programs and their estimated cost. Using the 836 programs initially reported by DOD components as our starting point, we identified two current ACAT II programs that exceeded the ACAT I threshold for RDT&E—$480 million in fiscal year 2014 constant dollars—and three current ACAT II programs that were within 10 percent of the ACAT I RDT&E or procurement threshold—$2.79 billion in fiscal year 2014 constant dollars—as of November 2013. Of these five programs, DOD component officials told us that four would not become MDAPs because, for example, they did not expect further program cost growth or were considering restructuring the program, and that component-level discussions were underway with regard to the status of the remaining program (see table 6). DOD weapon system acquisition represents one of the largest areas of the government’s discretionary spending, but much of this spending is still not well understood. DOD’s primary focus has been on overseeing and assessing the performance of its large ACAT I major defense acquisition programs, but the annual funding spent on ACAT II and III acquisition programs may be just as significant. Yet data provided by DOD components were so unreliable that we were unable to accurately identify even a minimum number or total cost of DOD’s ACAT II and III programs.
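The 10 percent proximity screen described above can be expressed as a simple classification. The threshold figures below are those cited in this report (fiscal year 2014 constant dollars), but the function, its name, and the record layout are illustrative assumptions for the sketch, not DOD's actual screening logic.

```python
# Illustrative sketch of the ACAT I threshold-proximity screen.
# Threshold values are those cited in the report (FY2014 constant dollars);
# everything else (names, record layout) is assumed for the example.

ACAT_I_THRESHOLDS = {
    "rdte": 480.0,          # RDT&E threshold, $ millions
    "procurement": 2790.0,  # procurement threshold, $ millions
}
PROXIMITY = 0.10  # flag programs within 10 percent of a threshold

def screen_program(costs):
    """Classify a program's cost estimates relative to ACAT I thresholds.

    costs: dict with 'rdte' and 'procurement' estimates in FY2014 $ millions.
    Returns 'exceeds', 'approaching', or 'below'.
    """
    if any(costs[k] >= t for k, t in ACAT_I_THRESHOLDS.items()):
        return "exceeds"
    if any(costs[k] >= (1 - PROXIMITY) * t for k, t in ACAT_I_THRESHOLDS.items()):
        return "approaching"
    return "below"
```

Under this sketch, a program reporting a $500 million RDT&E estimate would be flagged as exceeding the threshold, while one at $440 million (within 10 percent of $480 million) would be flagged as approaching it.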
While tailoring documentation and reporting requirements for “smaller” programs can be a reasonable approach to help prioritize limited oversight resources, if DOD and its components are to effectively manage their investment dollars, they must be able to account for how they are spending their money and how well they are spending it on the full range of acquisition programs. Having timely and reliable data on smaller acquisition programs is also critical for providing effective oversight and bringing the right oversight resources to bear, when needed, to make sure troubled smaller programs do not grow into major ones. The Under Secretary of Defense for Acquisition, Technology, and Logistics has recognized the value of having good data on DOD’s acquisition programs—including its ACAT II and III programs—to assess the performance of the defense acquisition system and identify the factors that affect program performance. But work remains to make sure information on the complete range of DOD acquisition programs is consistently available. DOD components have taken and continue to take steps to improve the reliability of ACAT II and III program data, but they do not fully address the limitations we identified—missing data, widespread data entry issues and inconsistent reporting—or the causes of these issues, including: the lack of a common definition of a current acquisition program; insufficient data reliability testing; and inconsistent compliance with requirements for acquisition program baselines and reporting on ACAT II and III programs that may become major programs due to cost growth. Components also lacked plans to ensure their intended actions are implemented and improvements to data collection and analysis are sustained over the long term. Until these limitations are addressed, DOD components will be unable to generate reliable information to effectively manage and oversee their ACAT II and III programs. 
We are making four recommendations to improve DOD’s ability to collect and maintain reliable data on its acquisitions. Specifically, we recommend that the Secretary of Defense direct the Under Secretary of Defense for Acquisition, Technology, and Logistics, in consultation with DOD components, to take the following actions: establish guidelines on what constitutes a “current” ACAT II or III program for reporting purposes; the types of programs, if any, that do not require ACAT designations; and whether the rules for identifying current MDAPs would be appropriate for ACAT II and III programs; and determine what metrics should be used and what data should be collected on ACAT II and III programs to measure cost and schedule performance; and whether the use of DAMIR and the MDAP selected acquisition report format may be appropriate for collecting data on ACAT II and III programs. We also recommend that the Secretary of Defense direct the Secretaries of the Air Force, Army, and Navy and the Commander of SOCOM to take the following actions: assess the reliability of data collected on ACAT II and III programs and work with PEOs to develop a strategy to improve procedures for the entry and maintenance of data; and develop implementation plans to coordinate and execute component initiatives to improve data on ACAT II and III programs. We are also making two recommendations to help ensure compliance with relevant provisions of DOD acquisition policy with the purpose of improving DOD’s ability to provide oversight for ACAT II and III programs, including those programs that may become MDAPs. We recommend that the Secretary of Defense direct the Secretary of the Air Force and Commander of SOCOM to establish a mechanism to ensure compliance with APB requirements in DOD policy. 
We recommend that the Secretary of Defense direct the Secretaries of the Air Force, Army, and Navy to improve component procedures for notifying the Defense Acquisition Executive of programs with a cost estimate within 10 percent of ACAT I cost thresholds. We provided a draft of this report to DOD for review and comment. In its written comments, which are reprinted in full in appendix V, DOD partially concurred with all six of our recommendations. However, as discussed below, it is unclear whether the actions that DOD plans to take will fully address the issues we raised in this report. DOD partially concurred with our first recommendation to establish guidelines on what constitutes a “current” ACAT II or III program for reporting purposes; the types of programs, if any, that do not require ACAT designations; and whether the rules for identifying current MDAPs would be appropriate for ACAT II and III programs. DOD also partially concurred with our second recommendation related to determining what metrics should be used and what data should be collected on ACAT II and III programs to measure cost and schedule performance. In its response, DOD stated that the Under Secretary of Defense for Acquisition, Technology, and Logistics would review the existing policy direction for ACAT II and III programs to determine whether it needs to be altered or supplemented to facilitate data collection or reporting. However, as our review found, the question is not whether policy needs to be revised, but how it needs to be revised. We found that the existing policy direction was not adequate to ensure consistent data collection and reporting on ACAT II and III programs or their cost and schedule performance and our recommendations were designed to address those issues. 
We continue to believe that additional guidelines for components regarding which programs should be considered current ACAT II and III programs for reporting purposes and consistent metrics to measure performance trends, among other actions, are needed to correct the issues we found. DOD partially concurred with our third and fourth recommendations to assess the reliability of data collected on ACAT II and III programs and work with PEOs to develop a strategy to improve procedures for the entry and maintenance of data; and develop implementation plans to coordinate and execute component initiatives to improve data on ACAT II and III programs, respectively. In its response, DOD stated the Under Secretary of Defense for Acquisition, Technology, and Logistics will direct the DOD components to evaluate the data collected on ACAT II and III programs and report back to him on their assessment of the reliability of that data and the status of the plans to improve the availability and quality of the data. DOD’s response represents a good first step towards assessing the reliability of its ACAT II and III program data, but the response does not fully address our recommendations. DOD’s response does not address whether components would be required to develop strategies with PEOs to improve the entry and maintenance of data, as we recommended. We continue to believe that developing these strategies with those responsible for entering and maintaining program data on a day-to-day basis, including PEOs, is important to make sure the causes of DOD’s data quality problems are fully understood and addressed in a manner that can be implemented. Further, DOD’s response does not directly address our recommendation to develop implementation plans for component efforts. 
We believe that fully implementing this recommendation is essential for ensuring that DOD and its components can effectively monitor and evaluate the implementation of component initiatives to improve ACAT II and III data. DOD partially concurred with our fifth recommendation to direct the Secretary of the Air Force and Commander of SOCOM to establish a mechanism to ensure compliance with APB requirements in DOD policy. DOD also partially concurred with our sixth recommendation to direct the Secretaries of the Air Force, Army, and Navy to improve component procedures for notifying the Defense Acquisition Executive of programs with a cost estimate within 10 percent of ACAT I cost thresholds. In its response, DOD stated that the Under Secretary of Defense for Acquisition, Technology, and Logistics will issue guidance to DOD components reiterating the APB requirements for ACAT II and III programs and directing that the Defense Acquisition Executive be notified when an increase or estimated increase in program cost is within 10 percent of the ACAT I cost thresholds. Reiterating existing departmental policy on these issues may help raise awareness at the component level, but without additional enforcement mechanisms it may not address the causes of the deficiencies we discuss in this report. For example, the Air Force has issued component-level guidance directing the development of APBs. However, we found that programs were not in compliance with the guidance, which demonstrates the need to improve enforcement mechanisms, such as ensuring milestone decision authorities do not approve programs to proceed through acquisition milestones without APBs. Similarly, with regard to our recommendation on notification requirements for programs approaching the ACAT I threshold, we found that component officials cited reasons other than a lack of awareness of the policy for not notifying the Defense Acquisition Executive of these programs’ cost growth. 
As a result, we continue to believe that DOD should fully implement our recommendation by directing components to improve their notification procedures. We are sending copies of this report to the appropriate congressional committees; the Secretary of Defense; the Under Secretary of Defense for Acquisition, Technology, and Logistics; the Secretaries of the Army, Navy, and Air Force; the Commander of U.S. Special Operations Command; the Assistant Secretary of Defense for Nuclear, Chemical, and Biological Defense Programs; and other interested parties. This report will also be available at no charge on GAO’s website at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-4841 or by e-mail at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix VI. Our objectives were to assess (1) the extent to which information is available on the number of the Department of Defense’s (DOD) current acquisition category (ACAT) II and III programs, their total estimated acquisition cost, and cost and schedule performance; (2) the factors affecting the cost and schedule performance of selected ACAT II and III programs; and (3) the number of current ACAT II and III programs that are likely to become major defense acquisition programs (MDAP). To address our first objective, we used a data collection instrument (DCI) to identify and collect data on the number and cost of current ACAT II and III programs from five DOD components that accounted for approximately 88 percent of DOD’s requested research, development, test, and evaluation (RDT&E) and procurement funding in the President’s Fiscal Year 2014 budget request: Army, Air Force, Navy, U.S. Special Operations Command (SOCOM), and DOD’s Chemical and Biological Defense Program (CBDP). 
We used a DCI to obtain ACAT II and III program data based on preliminary discussions with DOD and component officials that a DCI would be the best way to collect the information of interest. We requested that each component identify all of its current ACAT II and III programs and provide cost data and descriptive information for each program. For the purposes of this report, we defined a current program as one that has been formally initiated in the acquisition process but has not yet delivered 90 percent of its planned units or expended 90 percent of its planned expenditures. For cost data, we requested components provide baseline and current program estimates in millions of base year dollars, to include estimates for RDT&E, procurement, acquisition operation and maintenance, and military construction, as well as the program’s total acquisition cost estimate and the base year associated with the estimate. We also collected pertinent information for each program including program name, ACAT level, type of acquisition (automated information system or non- automated information system), milestone decision authority, lead DOD component, and program executive office. To obtain additional information on schedule performance, we collected and analyzed acquisition program baseline (APB) documents, which contain program schedule and cost parameters, for a non-generalizable random sample of 170 non-automated information system ACAT II and III programs. To select the programs, we used the initial data provided to us by DOD components that included 836 reported ACAT II or III programs as a starting point. We adjusted our selection as appropriate to account for known errors in the data at the time of selection in May 2014, such as programs that were known to not be current ACAT II or III programs. Our intention was to select a sample that would be generalizable to the population of current ACAT II and III programs. 
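The working definition of a "current" program used above can be stated as a simple predicate. This is only a sketch of the report's criteria; the function and parameter names are illustrative.

```python
def is_current_program(initiated, units_delivered, units_planned,
                       expended, planned_expenditures):
    """Apply the report's working definition of a 'current' program:
    formally initiated in the acquisition process, and has not yet
    delivered 90 percent of planned units or expended 90 percent of
    planned expenditures."""
    if not initiated:
        return False
    return (units_delivered < 0.9 * units_planned
            and expended < 0.9 * planned_expenditures)
```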
However, after selecting our sample we determined through our data reliability assessment that the population of current ACAT II and III programs could not be reliably determined and that our sample would therefore be non-generalizable. As such, results of this analysis cannot be used to make inferences about all current ACAT II and III programs. When APB documents were available for programs in our sample, we reviewed them to determine whether they contained comparable program start and initial operational capability milestones to allow us to measure program schedule performance. We also used the APBs collected from this sample of programs as part of our reliability assessment of ACAT II and III cost data provided by DOD components. Our observations on DOD’s ACAT II and III program data are based on the original data submitted by the components. We did not assess the reliability of any underlying data systems that may have been used to generate this information. We analyzed the original data provided by the components because it reflects the information DOD would have had on ACAT II and III programs at the time we collected data. From January through July 2014, we worked with components to attempt to correct problems we identified in the data. However, we continued to identify additional errors. As a result, we determined that the data provided by DOD components in response to our DCI were not sufficiently reliable to identify the number of current ACAT II and III programs, their estimated acquisition cost, or the cost performance of DOD’s ACAT II and III programs. Appendix III contains a more detailed discussion of our data reliability assessment. We also determined that we could not assess schedule performance for ACAT II and III programs because more than half of programs we reviewed in our sample of 170 programs were missing source documents or lacked comparable schedule milestones.
To address our second objective, we selected a non-generalizable sample of 15 programs from the data provided by DOD in response to our DCI. We selected 3 programs from each component included in our review. For each component, these programs were selected to include the largest current non-automated information system ACAT II and III program based on total acquisition cost as of the President’s Fiscal Year 2014 budget submission and one additional program based on factors such as significant cost growth or whether the program was part of a family of systems, which we defined as a related group of programs consisting of multiple increments or fielding similar capabilities for multiple platforms. To select the programs, we used the initial data provided to us by DOD components that included 836 reported ACAT II or III programs as a starting point. Programs that lacked data for current acquisition cost, commodity type, or ACAT level were excluded from selection. We also adjusted our selection as appropriate to account for known errors in the data at the time of selection, such as incorrectly-reported cost estimates, or programs that were known to not be current ACAT II or III programs. However, after our selection we identified additional concerns with the data reported by DOD that would likely have changed the results of our selection of the largest ACAT II or III programs at certain components. We did not make any subsequent adjustments to our original selection because we determined that the data provided by DOD was not sufficiently reliable to enable us to determine the largest ACAT II or III program at each component. For each program, we analyzed key program documents, such as APBs, program status reports, acquisition strategies, acquisition decision memoranda, and requirements documentation, to assess cost and schedule performance and identify factors affecting that performance.
We also conducted semi-structured interviews with program officials to discuss the information identified through reviews of program documentation and obtain additional insights into factors that affected program cost or schedule performance. Additionally, we analyzed prior GAO reports to determine the extent to which the factors we identified as affecting cost and schedule performance for selected ACAT II and III programs were similar to factors that we have identified in prior work as affecting performance of MDAPs. To address our third objective, we reviewed DOD acquisition policy related to the reclassification of ACAT II or III programs to ACAT I programs and analyzed program cost data provided by DOD components. Based on the requirement in DOD acquisition policy for components to notify the Defense Acquisition Executive of ACAT II or III programs within 10 percent of the next ACAT level, we analyzed data provided by DOD through our DCI to identify programs that appeared to be within 10 percent of or have exceeded either the ACAT I RDT&E or procurement threshold. We were unable to identify an actual number of programs likely to become MDAPs because of reliability issues related to identifying the population of ACAT II and III programs. However, we determined the initial data provided to us by DOD that included 836 reported ACAT II or III programs were sufficiently reliable to serve as a starting point to identify the minimum number of programs likely to become MDAPs because we were able to confirm data with relevant program offices for those programs that appear to be within 10 percent of or have exceeded either ACAT I threshold. We excluded from further review certain programs that were known, at the time we initially identified programs, to have incorrectly reported cost estimates.
For programs that appeared to meet our criteria for current ACAT II or III programs likely to become MDAPs, we collected additional information using a structured set of questions to determine whether the relevant DOD component had notified the Defense Acquisition Executive that the program was approaching or had exceeded the ACAT I threshold and whether the program had been or was expected to be reclassified as an ACAT I program. We also requested and reviewed supporting documentation when available, including documentation of notification to the Defense Acquisition Executive that the program was within 10 percent of the ACAT I threshold. After we received the information from the components, we identified additional programs that had incorrectly reported cost estimates or were no longer current ACAT II or III programs and we removed these programs from our analysis as appropriate. We conducted this performance audit from October 2013 to March 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
Summarizes program cost, schedule, and performance parameters
Program cost estimate completed outside of the supervision of the entity responsible for the acquisition program
Documents capability requirements to which the program responds
Describes program’s overall technical approach and details timing and criteria for technical reviews
Technology Readiness Assessment: assessment of the maturity of critical technologies
Primary planning and management tool for integrated test program
10 U.S.C. § 2366a and b.
Provides notification of unit cost breaches above a certain threshold
Provides notification of cost or schedule changes above a certain threshold
10 U.S.C. §§ 2433 and 2433a. Notification must be provided to Congress when the program acquisition unit cost or average procurement unit cost increases by at least 15 percent over the current baseline estimate or 30 percent over the original baseline estimate. 10 U.S.C. § 2445c. Notification must be provided to Congress when there is a schedule change that will cause a delay of more than 6 months; an increase in the expected development cost or full life-cycle cost for the program by at least 15 percent; or a significant, adverse change in the expected performance of the major automated information system to be acquired. We conducted an analysis to determine whether data provided by Department of Defense (DOD) components were sufficiently reliable for the purpose of determining the number, total acquisition cost, and cost performance of DOD’s current acquisition category (ACAT) II and III programs. For our analysis, we conducted electronic and manual testing on data for all programs reported by components in response to our request for completion of a data collection instrument (DCI) and compared cost data for a non-generalizable sample of programs to source documents when available. We also reviewed relevant DOD and component acquisition policy, and interviewed knowledgeable officials. We identified reliability issues with the data for about 60 percent of the programs components initially reported to us. As a result, we determined that the data provided by DOD components were not sufficiently reliable to identify the number of current ACAT II and III programs, their estimated total acquisition cost, or the cost performance of DOD’s ACAT II and III programs.
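The statutory notification thresholds summarized above can be sketched as simple checks. Only the percentage and schedule thresholds come from the text; the function names and inputs are illustrative assumptions.

```python
def unit_cost_breach(current_unit_cost, current_baseline, original_baseline):
    """Unit cost breach under 10 U.S.C. 2433: increase of at least
    15 percent over the current baseline estimate or 30 percent over
    the original baseline estimate."""
    return (current_unit_cost >= 1.15 * current_baseline
            or current_unit_cost >= 1.30 * original_baseline)

def mais_breach(delay_months, cost_growth_pct, adverse_performance_change):
    """Notification triggers under 10 U.S.C. 2445c for a major automated
    information system: a delay of more than 6 months, a cost increase
    of at least 15 percent, or a significant adverse performance change."""
    return (delay_months > 6
            or cost_growth_pct >= 15.0
            or adverse_performance_change)
```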
To assess the accuracy and completeness of the ACAT II and III program data reported by DOD components, we electronically tested the data for: values outside the designated range of values for ACAT II and III programs, defined per DOD acquisition policy; obvious calculation or data entry errors (for example, individual cost elements do not sum to total reported); missing data in baseline or current cost estimate data elements, including estimates for research, development, test, and evaluation; procurement; military construction; and acquisition operation and maintenance, as well as the total acquisition cost estimate, and base year; and missing data in program descriptive data elements, such as ACAT level, milestone decision authority, or commodity type. Additionally, we compared cost data for our sample of 170 programs to source documents when available. Specifically, for each program in our sample, we first requested and reviewed original and current acquisition program baselines (APB) to determine whether or not they reflected the actual baseline from program start and the current APB based on the approval date of the APB and relevant schedule milestones that trigger the development of an APB or an APB revision in accordance with DOD acquisition policy. When APBs were not provided or did not appear to reflect the actual baseline and/or current APB, we followed up with DOD components to obtain the correct documents when possible. When we were able to obtain both original and current APBs, we took the following steps to assess the accuracy of the information reported in the DCI: Compared baseline cost data from the program’s original APB to the baseline cost data reported in the DCI. Compared cost data in the current APB to cost data reported in the DCI to identify obvious errors in the cost data reported in the DCI, such as current cost data in the DCI that was significantly less than the amount reported in the APB without explanation. 
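Electronic tests of the kind described above can be sketched as record-level checks. The field names, record layout, and threshold parameter below are assumptions for illustration, not DOD's actual data schema.

```python
# Illustrative record-level versions of the electronic data tests:
# completeness, calculation, and range checks on one reported program.

COST_ELEMENTS = ("rdte", "procurement", "milcon", "om")

def quality_flags(record, rdte_acat_i_threshold=480.0):
    """Return data quality flags for one reported program record
    (costs in $ millions)."""
    missing = [f for f in COST_ELEMENTS + ("total",) if record.get(f) is None]
    if missing:
        # Completeness check: every cost element must be populated.
        return ["missing data: " + ", ".join(missing)]
    flags = []
    # Calculation check: individual elements should sum to the reported total.
    if abs(sum(record[f] for f in COST_ELEMENTS) - record["total"]) > 0.01:
        flags.append("cost elements do not sum to reported total")
    # Range check: an RDT&E estimate at or above the ACAT I threshold
    # falls outside the designated ACAT II/III range.
    if record["rdte"] >= rdte_acat_i_threshold:
        flags.append("RDT&E estimate outside ACAT II/III range")
    return flags
```

In practice such checks would be run across every record in the data collection instrument, with flagged records routed back to components for correction.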
To assess the consistency of ACAT II and III program data, we manually reviewed the data provided in response to the DCI and subsequent requests to identify programs that did not appear to meet our criteria for current ACAT II and III programs. For the purposes of this report, we defined a current program as one that has been formally initiated in the acquisition process but has not yet delivered 90 percent of its planned units or expended 90 percent of its planned expenditures. For each program that did not appear to be a current ACAT II or III program, we analyzed whether the program was pre-program start, in sustainment, completed, or not a separate ACAT II or III program (for example, was a subprogram of another ACAT II or III program reported to us). The results of our data reliability analysis capture accuracy, completeness, and consistency issues with the data provided by DOD components. Accuracy, completeness, and consistency are key characteristics of reliable data and refer to (1) the extent that recorded data reflect the actual underlying information; (2) data elements for each program are populated appropriately; and (3) the need to obtain and use data that are clear and well defined enough to yield similar results in similar analyses, respectively. We identified numerous types of accuracy and completeness issues with the data provided by DOD components, including cost estimate values outside of the ACAT II and III program range, basic math errors, and missing data. For example, 333 out of 836 ACAT II and III programs reported by the components were missing a baseline or current cost estimate element or descriptive program data. Table 9 provides detail on accuracy and completeness issues by component. We identified additional issues with the accuracy of the ACAT II and III program cost information when we compared reported cost estimates to available source documents for a non-generalizable sample of ACAT II and III programs. 
Specifically, of the 81 programs in our sample that reported complete cost estimates and provided source documents, 50 reported incorrect cost data. For example, for 37 of these 50 programs, we determined that baseline cost data was inaccurate because either the baseline cost estimate or the base year reported for this estimate did not match the source documents. Details of the accuracy and completeness issues we identified when assessing the cost data reported for our sample are provided by component in table 10. We identified 226 of the 836 programs reported by DOD components that did not meet our criteria for current ACAT II or III programs because, for example, they were not current or they were not stand-alone acquisition programs. Additionally, because information on program phase was not available for all programs reported by DOD components, the number of programs we identified as not a current ACAT II or III program reflects a minimum number of such programs. Table 11 provides details on consistency issues we identified by component. In addition to the contact named above, Ron Schwenn, Assistant Director; Leslie Ashton, Jenny Chanley, Teakoe Coleman, Dani Greene, John Krump, Jesse Lamarre-Vincent, Anne McDonough-Hughes, Carol Petersen, and Oziel Trevino made key contributions to this report.
DOD requested $168 billion in fiscal year 2014 to develop, test, and acquire weapon systems and other products and equipment. About 40 percent of that total is for major defense acquisition programs or ACAT I programs. DOD also invests in other, non-major ACAT II and III programs that are generally less costly at the individual program level. These programs typically have fewer reporting requirements and are overseen at lower organizational levels than ACAT I programs, although they may have annual funding needs that are just as significant. GAO was asked to examine ACAT II and III programs. This report addresses, among other issues, (1) the extent to which information is available on the number, cost, and performance of ACAT II and III programs and (2) factors that affected the performance of selected ACAT II and III programs. GAO collected program and cost data on current ACAT II and III programs from five DOD components. GAO also selected a non-generalizable sample of 15 programs based on program cost and other criteria and reviewed documentation and interviewed officials about program performance. The Department of Defense (DOD) could not provide sufficiently reliable data for GAO to determine the number, total cost, or performance of DOD's current acquisition category (ACAT) II and III programs. These non-major programs range from a multibillion dollar aircraft radar modernization program to soldier clothing and protective equipment programs in the tens of millions of dollars. GAO found that the accuracy, completeness, and consistency of DOD's data on these programs were undermined by widespread data entry issues, missing data, and inconsistent identification of current ACAT II and III programs. See the figure below for selected data reliability issues GAO identified. DOD components are taking steps to improve ACAT II and III data, but these steps do not fully address the problems GAO identified. 
For example, the components have not established systematic processes to perform data quality tests and assess the results to help identify problems for further review. These types of tests and assessments can be an important step in determining whether data can be used for its intended purposes. Additionally, DOD lacks metrics to assess ACAT II and III cost and schedule performance trends across programs and in some cases was missing baseline cost and schedule data to measure performance. Having timely and reliable cost, schedule, and performance data on smaller acquisition programs is critical to ensuring that DOD and its components can account for how they are spending their money and how well they are spending it. Reliable data are also essential for effective oversight and bringing the right oversight resources to bear when programs approach the cost threshold to become a major defense acquisition program due to cost growth. Thirteen of the 15 ACAT II or III programs GAO reviewed in-depth had exceeded their original cost or schedule targets. Program officials from ACAT II and III programs GAO reviewed cited changing performance requirements, testing issues, quantity changes, and flaws in original cost estimates, among other factors, as the reasons for cost and schedule growth. GAO has previously found that similar factors affect the performance of major acquisition programs. GAO recommends that DOD establish guidelines on what constitutes a current ACAT II and III program, take steps to improve data reliability, and determine how to measure cost and schedule performance. DOD partially concurred with the recommendations and described actions it plans to take. However, as discussed in the report, DOD's planned actions may not fully address the issues that GAO identified.
Nonrecourse loans have long been the government’s major price-support instrument and provide operating capital to producers of commodities, including cotton, rice, wheat, feedgrains, and oilseeds. Producers store their commodities under loan until later in the marketing year, when prices are usually higher than they are at harvest. Producers have the option either to repay their loans with interest at any time or, at the end of the loan period, to forfeit their commodities to the government and have their interest payments forgiven. The government has “no recourse” but to accept the commodities as payment. In the past, when market conditions would have led to U.S. prices falling below the loan rates, the loan rates supported U.S. prices. This happened because producers preferred to forfeit their commodities to the government rather than sell them at lower market prices that would have given them less than the face value of the loans. Under these market conditions, U.S. prices were supported and the government accumulated large and costly stocks. For cotton, rice, wheat, feedgrains, and oilseeds, the Congress added marketing loan provisions to nonrecourse loans to eliminate the price floors created by the loan rates while protecting producers’ income from the effects of low market prices. The intent was to minimize loan forfeitures and the accumulation of government stocks and to lower U.S. prices to levels closer to world prices. The marketing loan provisions allow producers to pay back nonrecourse loans at alternative repayment rates when these rates are lower than the loan rates. To establish alternative repayment rates, the U.S. Department of Agriculture (USDA) first determines a proxy for the world price for each commodity. These proxies for world prices are based on price data obtained from international markets for cotton and rice and from major U.S. terminal markets for wheat, feedgrains, and oilseeds.
Next, USDA adjusts the proxies for world prices (these proxies are hereafter referred to as world prices) for quality differences and for transportation costs to arrive at the relevant alternative repayment rates. For cotton and rice loans, the alternative repayment rates are set weekly and are known as adjusted world prices. For wheat, feedgrain, and soybean loans, the alternative repayment rates are set daily and are known as posted county prices. For minor oilseeds (such as flaxseed, sunflower seed, and canola), these rates are set weekly and are known as regionally calculated prices. When alternative repayment rates are below the loan rates, producers can repay their nonrecourse loans at these lower rates. Because producers keep the difference between the loan rate and the alternative repayment rate, which is known as a “marketing loan gain,” they should be able to sell their commodities at market prices and receive a total return—market price plus marketing loan gain—that is at least equal to the loan rate. Alternatively, producers who do not take out loans may still receive government payments equal to marketing loan gains. These amounts are called loan deficiency payments. (See app. I for more information on how program benefits are calculated.) For some commodities, certain program factors have kept U.S. prices higher than world prices. These factors vary by commodity program, and we have reported on them for cotton, peanuts, and sugar. For example, several features of the cotton program, such as import restrictions and the availability of government-paid storage when the adjusted world price is below the loan rate, reduced producers’ incentives to sell cotton to the market and thereby kept U.S. prices above world prices. High U.S. cotton prices, coupled with import restrictions, adversely affected cotton exporters and domestic mills that had to purchase higher-priced U.S. cotton.
Consequently, the 1990 farm act included a provision for step 2 payments to be made to exporters and domestic mills to offset higher U.S. prices. These payments were continued in the 1996 farm act. USDA recently changed its procedures for making step 2 payments. Under the new procedures, exporters will receive the step 2 payment rate that is in effect during the week the cotton is shipped instead of the week in which cotton sales were contracted. For peanuts and sugar, the programs’ price-support features, such as domestic marketing restrictions for peanuts and the tariff-rate import quota for sugar, have continued to keep U.S. prices high. The Congress changed these programs in the 1996 farm act to help lower U.S. peanut and sugar prices, decrease the government’s costs, and reduce production and consumption inefficiencies created by the programs’ past features. When alternative repayment rates are near or below the loan rates, the marketing loan provisions may prevent the loan rates from serving as price floors. In the past, these market conditions have occurred for some commodities, and producers received marketing loan gains or loan deficiency payments. Although the historical price data were inconclusive or limited for cotton, wheat, feedgrains, and oilseeds, the data for rice suggest that the marketing loan provisions have prevented the loan rate from serving as a price floor when the adjusted world price was substantially lower than the loan rate. (See app. II for our detailed analyses of the impact of the marketing loan provisions on U.S. prices for each of these commodities.) For rice, during the last 10 years, when the marketing loan provisions were in effect, the adjusted world price was below the loan rate in 81 months. During 21 of these 81 months, when the adjusted world price was particularly low, the U.S. price was also below the loan rate. This suggests that the provisions worked as intended, and the loan rate did not act as a price floor for rice.
While USDA officials generally agreed that the historical price data support this view, they stated that it is hard to separate out the effects of other changes made to the rice program during this period (such as the acreage reduction program) that may also have had an impact on lowering U.S. prices. For cotton, the price data are inconclusive on the effectiveness of the marketing loan provisions in preventing the loan rate from serving as a price floor because the adjusted world price has not fallen significantly below the loan rate since the marketing loan provisions went into effect. Until market conditions cause the adjusted world price to drop significantly below the loan rate—low enough to overcome the effects of other program features and market factors that keep the U.S. price above the adjusted world price—it cannot be conclusively determined whether the loan rate will still act as a price floor for cotton under the marketing loan provisions. In commenting on a draft of this report, USDA officials told us that there is substantial evidence that the loan rate for cotton has not served as a price floor since the marketing loan provisions went into effect. They base their view on a comparison of loan forfeitures and the accumulation of government stocks both before and after the marketing loan provisions went into effect, for those times when the U.S. price was only a few cents per pound above the loan rate. We agree that the data on forfeitures and stock accumulation merit consideration in determining whether the loan rate is serving as a price floor, but we found that forfeitures have continued to occur in some years, although the quantity forfeited is lower, since the marketing loan provisions went into effect. Therefore, we believe that it is necessary to observe what happens to U.S. 
prices during a period when the adjusted world price falls significantly below the loan rate in order to confirm that the marketing loan provisions prevent the loan rate for cotton from serving as a price floor, despite the effects of other program features and market factors that keep the U.S. price above the adjusted world price. For wheat, feedgrains, and oilseeds, the historical data are limited because during the short time that these provisions have been in effect, U.S. prices and alternative repayment rates have generally been above the loan rates. Even if additional data were available, they might be inconclusive because of the way in which the alternative repayment rates are set. (This issue is discussed in more detail in app. II.) Nevertheless, many USDA officials, including agricultural economists, and other agricultural economists we spoke to expect that the marketing loan provisions for wheat, feedgrains, and oilseeds will prevent the loan rates from serving as price floors when alternative repayment rates fall below the loan rates. They base this position on both theoretical expectations of producers’ profit-maximizing behavior and experience with the generic commodity certificate program in the past, which was similar in concept to the marketing loan provisions. However, a few agricultural economists and commodity analysts offered several reasons why the loan rates may at times provide some price support for these commodities despite the marketing loan provisions. (See app. II for additional details on these views.) Even if the marketing loan provisions allow U.S. prices to fall below the loan rates, some program features and market factors will keep U.S. prices higher than adjusted world prices for some commodities, such as cotton and rice. 
The program features include (1) import restrictions that reduce foreign competition in the United States, (2) the availability of the nonrecourse loan, and (3) for cotton, government-paid storage that makes it easier for producers to hold cotton off the market while waiting for prices to rise. The market factors include quality, reliability, and transportation advantages that allow U.S. producers to receive higher prices than some foreign producers. To the extent that higher U.S. prices are due to market factors that reflect the desirability of U.S. cotton and rice, then higher U.S. prices do not necessarily impede the marketability of these commodities. Therefore, adjusted world prices will typically be less than U.S. prices because (1) the marketing loan provisions cannot overcome the effect of all program features that support prices and (2) in setting the adjusted world prices, USDA does not fully account for all the market factors that result in higher U.S. prices. (See app. II for a detailed discussion of these factors.) According to USDA’s 1996 forecast, U.S. and world prices are expected to remain above the loan rates for some commodities, as they are now, for the 7-year duration of the 1996 farm act. This forecast suggests that for these commodities the marketing loan provisions will have no effect on U.S. prices or how they compare with world prices. Under these conditions, producers would not use the marketing loan provisions to repay their loans at the higher alternative repayment rates. Instead, they would repay their loans at the loan rates. However, some agricultural economists have suggested that over the next several years U.S. and world prices might be below those in USDA’s 1996 forecast. If market conditions change and alternative repayment rates fall below the loan rates, producers may use the marketing loan provisions when redeeming their loans. For all commodities, when U.S. 
prices and alternative repayment rates are above the loan rates, lower loan rates will have little if any effect on U.S. prices because producers can earn more by selling their commodities on the market than by forfeiting them to the government. However, when alternative repayment rates are below the loan rates, the effect of lowering the loan rates on U.S. prices will vary by commodity. For cotton and rice, when adjusted world prices are below the loan rates, lower loan rates are likely to have some downward effect on U.S. prices. This is because producers who use nonrecourse loans with marketing loan provisions have the option to hold their commodities under loan while waiting for prices to rise. This option has a value, known as the option value of the loan, which varies among producers at any given point in time and varies for any individual producer over time. Unless producers are offered a premium price that compensates them for giving up their option to keep their commodity under loan, they have little incentive to take the commodity out of loan. The option value is one of several factors that cause U.S. cotton and rice prices to be higher than adjusted world prices. To the extent that a lower loan rate reduces the option value of the loan because it reduces producers’ guaranteed minimum returns upon forfeiture, a lower loan rate will have some downward effect on U.S. prices, thereby bringing them closer to adjusted world prices. However, a lower loan rate will not by itself eliminate the price premium paid for U.S. cotton and rice, and U.S. prices will continue to remain higher than adjusted world prices because of other program features and market factors. For example, for cotton, a lower loan rate, combined with the elimination of government-paid storage, would result in a larger downward effect on U.S. cotton prices. 
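The repayment incentives described in this section reduce to a simple rule: producers repay at the lower of the loan rate and the alternative repayment rate and keep the difference as a marketing loan gain. A minimal sketch, using illustrative per-pound prices rather than actual program rates:

```python
def loan_benefit(loan_rate, repayment_rate):
    """Marketing loan gain: producers repay at the lower of the loan rate
    and the alternative repayment rate and keep the difference.  When the
    repayment rate is at or above the loan rate, the gain is zero."""
    effective_repayment = min(loan_rate, repayment_rate)
    return loan_rate - effective_repayment

def total_return(market_price, loan_rate, repayment_rate):
    """Market price plus marketing loan gain; when the market price
    matches the repayment rate, the total is at least the loan rate."""
    return market_price + loan_benefit(loan_rate, repayment_rate)

# Illustrative per-pound prices (not actual program rates):
loan_rate = 0.65
adjusted_world_price = 0.55   # the alternative repayment rate
market_price = 0.55

gain = loan_benefit(loan_rate, adjusted_world_price)        # about $0.10/lb
ret = total_return(market_price, loan_rate, adjusted_world_price)
```

This is why the provisions need not support prices: selling at a low market price while keeping the gain leaves the producer no worse off than forfeiting at the loan rate.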
For wheat, feedgrains, and oilseeds, USDA officials, including agricultural economists, and many other agricultural economists told us that lower loan rates will have little if any impact on U.S. prices when alternative repayment rates are below the loan rates. According to these USDA officials and economists, when producers use the marketing loan provisions and sell their commodities earlier in the marketing year, they generally benefit by saving on storage costs. The potential savings from avoiding storage costs are relatively greater than the option value of the loan. Consequently, these officials told us that lowering the loan rates will have little if any effect on U.S. prices because marketing loan provisions will keep the loan rates from supporting prices. In contrast, a few other agricultural economists and commodity analysts told us that the option value of the loan may be a significant factor in producers’ marketing decisions and that this and other market factors may cause the loan rates to continue providing price support despite the marketing loan provisions. According to these experts, if the loan rates are supporting prices, lowering the loan rates may have some downward effect on U.S. prices. The degree to which prices will drop depends on how the option value of the loan compares with the potential value of avoiding storage costs while receiving marketing loan benefits. In commenting on a draft of this report, USDA officials disagreed that lower loan rates would reduce U.S. prices. Their detailed comments and our response are presented at the end of this letter.

The following example illustrates how the step 2 payment rate for cotton is calculated. First, the adjusted world price ($0.55) must be less than 130 percent of the loan rate ($0.65); this condition must be met for 4 consecutive weeks. Second, the difference between the U.S. price in Northern Europe ($0.63) and the average price in Northern Europe ($0.60), or $0.03, must be greater than $0.0125; this condition must also be met for 4 consecutive weeks. The payment rate then equals the U.S. price in Northern Europe less the sum of the average price in Northern Europe and $0.0125: $0.63 - ($0.60 + $0.0125) = $0.0175 per pound.

USDA made a total of $701 million (in 1995 dollars) in step 2 payments from fiscal years 1992 through 1996. Currently, the adjusted world price is sufficiently above the loan rate to preclude the use of step 2 payments. If the adjusted world price drops below 130 percent of the loan rate in the future, step 2 payments may be used again. As discussed previously, when the adjusted world price is below the loan rate, a lower loan rate is most likely to have some downward effect on U.S. prices. To the extent that U.S. prices decrease because of a lower loan rate, step 2 payments will be used less often and the payment rate will also be reduced. However, since other program features (such as government-paid storage and import restrictions) and market factors contribute to making U.S. prices higher than world cotton prices, lowering the loan rate alone will not eliminate the use of step 2 payments. Recent changes in the timing of USDA’s step 2 payments to exporters may diminish this tool’s effectiveness in enhancing exports. In the past, exporters received the step 2 payment rate that was in effect during the week they contracted for cotton sales. As a result, exporters could use step 2 payments to reduce the price of U.S. cotton offered to foreign buyers. An unintended consequence of the step 2 provision was that many contracts for future sales were made during weeks with high payment rates. This practice was known as “bunching,” and many of these sales represented internal transactions between U.S. firms and their foreign affiliates. Bunching increased the cost of the step 2 provision to the government and placed domestic mills and exporters without foreign affiliates at a price disadvantage.
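The step 2 eligibility conditions and payment-rate arithmetic illustrated earlier can be sketched as follows. This is a simplified single-week check of the two conditions as described in this report (in practice both must hold for 4 consecutive weeks), not USDA's full regulation:

```python
def step2_payment_rate(awp, loan_rate, us_ne_price, avg_ne_price,
                       threshold=0.0125):
    """Return the step 2 payment rate per pound, or 0.0 if either
    eligibility condition fails.  All prices are in dollars per pound.
    Single-week sketch; the actual program requires both conditions to
    hold for 4 consecutive weeks."""
    # Condition 1: the adjusted world price must be below 130 percent
    # of the loan rate.
    if awp >= 1.30 * loan_rate:
        return 0.0
    # Condition 2: the U.S. price in Northern Europe must exceed the
    # average Northern Europe price by more than $0.0125.
    if us_ne_price - avg_ne_price <= threshold:
        return 0.0
    # Payment rate: U.S. Northern Europe price less the sum of the
    # average Northern Europe price and $0.0125.
    return us_ne_price - (avg_ne_price + threshold)

# Figures from the example: adjusted world price $0.55, loan rate $0.65,
# U.S. Northern Europe price $0.63, average Northern Europe price $0.60.
rate = step2_payment_rate(0.55, 0.65, 0.63, 0.60)  # about $0.0175/lb
```

With the example's figures, both conditions hold and the rate works out to the $0.0175 per pound shown above; raising the adjusted world price above 130 percent of the loan rate, as is currently the case, drives the payment to zero.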
To prevent bunching, USDA changed step 2 procedures so that exporters receive the step 2 payment rate that applies during the week the cotton is shipped instead of the week in which the sales are contracted. Consequently, when exporters agree to a sale, they do not know what step 2 payment rate, if any, will be in effect during the week the cotton is shipped. Step 2 payments have not been made since USDA changed its procedures. This change should reduce the occurrence of bunching but could also make it more difficult for exporters to reduce the higher price of U.S. cotton when it is offered for sale to foreign buyers. As we have reported in the past, the peanut and sugar programs have not been market-oriented because they have kept U.S. prices higher than world prices and resulted in production and consumption inefficiencies. As a result, these programs have cost users of peanuts and sugar and the government hundreds of millions of dollars annually. The Congress made a number of changes to both programs through the 1996 farm act to reduce U.S. prices and some of the economic inefficiencies in order to make the programs more market-oriented. However, these changes did not eliminate the difference between U.S. prices and lower world prices because the domestic marketing quota for peanuts and the tariff-rate import quota for sugar continue to restrict supply. As we recommended in the past, greater market orientation could be achieved through (1) further reductions in the support price for peanuts and (2) a reduction in the loan rate for sugar and an increase in the tariff-rate import quota. These changes would help lower U.S. prices and increase economic efficiency, but one tradeoff would be a potential reduction in producers’ revenue. The peanut program controls the domestic supply and protects producers’ income by (1) setting a national poundage quota that determines the amount of peanuts that can be sold domestically and (2) restricting imports. 
The national poundage quota is set at a level based on the estimated quantity of edible peanuts used in the United States at the support price. Prior to the 1996 farm act, the quota could not fall below 1.35 million tons. Generally, only producers holding a portion of the assigned quota may sell these “quota peanuts” domestically. Quota holders who choose not to grow peanuts can sell or lease their quota within the county to which it was assigned or return it to USDA for redistribution to other producers. Producers without assigned quota and those who exceed their quota cannot sell these peanuts in the domestic edible market except under certain conditions, but they may export them as “additional peanuts.” The program protects producers’ incomes through a two-tiered system that sets minimum support prices for both quota and additional peanuts. The support price for quota peanuts guarantees producers a price in U.S. markets that is higher than world prices. Prior to the 1996 farm act, the quota support price was adjusted upward annually when the cost of production rose but was left unchanged when the cost of production fell. (This adjustment was known as the “escalator clause.”) The support price for additional peanuts is generally set lower than the world price and plays a limited role in domestic peanut marketing. Higher U.S. prices result in increased costs to consumers. The world price for peanuts in 1995 averaged $415 per ton, while the support price for quota peanuts was $678 per ton. Therefore, U.S. consumers paid more for items containing peanuts than they would have if U.S. processors had purchased peanuts at the lower world price. In addition, higher U.S. prices could create a consumption inefficiency because the quantity of peanuts purchased at the higher U.S. price is less than what would have been purchased at the lower world price—the price that would have occurred if there were no program.
The government also incurs costs when producers cannot sell their peanuts at a price greater than or equal to the support price and instead forfeit them to the government at the support price. The government pays to have these peanuts crushed and sells them at a price lower than the support price. To prevent forfeitures, USDA strives to set the annual quota at a level that does not exceed the expected quantity that would be demanded at the support price. If USDA sets the quota too high, the government will incur costs from forfeitures. For example, in fiscal years 1995 and 1996, the government incurred costs of $124.7 million and $127.4 million, respectively, because the legislatively set minimum quota of 1.35 million tons was greater than the quantity of peanuts demanded at the support price in those years. On the other hand, if USDA sets the quota too low, forfeitures will not occur, but U.S. prices will rise because the supply marketed under the quota is not adequate to meet the quantity demanded at the support price. In order to share program costs with the government, producers and buyers of peanuts pay a fee to the government, known as a marketing assessment, per ton of peanuts sold. The government also incurs indirect costs when it purchases higher-priced peanuts and peanut-containing products for its food assistance programs. In 1993, we reported that USDA paid the quota support price, instead of the lower world price, for peanuts and peanut-containing products that it purchased, leading it to incur greater costs than without the peanut program. The 1996 farm act made several changes to the peanut program to reduce its costs and make the U.S. peanut industry more market-oriented. One change in particular will help make U.S. peanut prices somewhat closer to world prices—a lower quota support price. 
Under the 1996 farm act, the peanut quota support price was reduced from $678 to $610 per ton and fixed through the year 2002—the remainder of the life of the farm act. As a result of this change, the quota support price is no longer linked to the cost of producing peanuts and will not increase with inflation because the escalator clause has been eliminated. In addition to reducing the quota support price, the 1996 farm act made other changes to the peanut program to increase economic efficiency. These changes included eliminating the minimum level to which the national poundage quota could fall, authorizing marketing assessment increases, eliminating provisions allowing the carryover of unfilled quota from year to year (undermarketings), redefining the peanut quota to exclude seed peanuts, limiting disaster transfers requested by quota holders whose commodity is damaged, and adding marketing requirements to maintain program eligibility. These changes should enable USDA to better control the quantity of peanuts marketed at the quota support price, thus reducing government costs associated with the program. Moreover, people who live outside of the state in which the quota is allocated or who are not peanut producers, as well as government entities, can no longer hold quota; and the annual sale, lease, and transfer of quota is now permitted across county lines within a state, up to specified amounts of quota. These changes will improve the equity and economic efficiency of the peanut program. (See app. III for additional details on these changes.) Although the lower quota support price of $610 will help reduce U.S. peanut prices, it is still substantially above the average U.S. cost of producing peanuts and world prices. In 1995, the average cost of producing peanuts in the United States was $369 per ton and the world price was $415 per ton, while the support price was $678 per ton. 
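The per-ton gaps implied by the figures above are straightforward arithmetic; a minimal sketch using only the dollar amounts cited in this report:

```python
# Per-ton figures cited in this report (1995 world price and cost of production).
old_support_price = 678   # quota support price before the 1996 farm act, $/ton
new_support_price = 610   # quota support price fixed by the 1996 farm act, $/ton
world_price = 415         # average world price for peanuts, 1995, $/ton
production_cost = 369     # average U.S. cost of producing peanuts, 1995, $/ton

premium_over_world_old = old_support_price - world_price     # $263 per ton
premium_over_world_new = new_support_price - world_price     # $195 per ton
margin_over_cost_new = new_support_price - production_cost   # $241 per ton
```

The reduction narrows the premium over the world price from $263 to $195 per ton, but the new support price still exceeds both the world price and the average cost of production by a wide margin.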
In 1993, we recommended that the quota support price be reduced so that over time U.S. prices would more closely parallel the cost of producing peanuts and world prices. Lowering and fixing the quota support price at $610 per ton was a good first step. This price could be reduced further, which would result in lower U.S. prices that would be closer to world prices and would also result in reductions in government costs. While USDA officials agreed that a lower quota support price will lower U.S. prices and government costs, they pointed out that it will also reduce producers’ revenues. The sugar program guarantees producers (growers and processors) a minimum price for domestic sugar through the nonrecourse loan program and controls the domestic supply of sugar through the use of a tariff-rate import quota. The nonrecourse loan program sets a guaranteed minimum price for domestic sugar through the loan rate. However, the 1996 farm act restricts the availability of nonrecourse loans to times when the tariff-rate import quota is at or above 1.5 million tons. USDA adjusts the tariff-rate import quota on the basis of the (1) estimated domestic production and demand and (2) level of supply needed to maintain domestic prices at levels high enough to discourage forfeitures. Prior to the 1996 farm act, under certain market conditions, USDA could also limit the domestic marketing of sugar by assigning marketing allotments to processors to maintain the support price. USDA assigned marketing allotments twice, in fiscal years 1993 and 1995. The 1996 farm act made the following changes to the sugar program to reduce U.S. sugar prices and some economic inefficiencies of the program: Loans are to be recourse under certain circumstances. When the tariff-rate import quota is established below 1.5 million tons on the basis of estimated domestic production and demand, loans are issued as recourse rather than nonrecourse to eliminate potential forfeitures. 
If loans are recourse, then there is effectively no price support and U.S. prices could fall below the loan rate. The loan rates were fixed. The loan rates were fixed for refined beet sugar at the 1995 level of 22.9 cents per pound and for raw cane sugar at 18 cents per pound. USDA has maintained the loan rate for raw cane sugar at 18 cents per pound since 1981, although in the past it had the authority to raise the rate. A fixed rate means that over time the real value of the loan rate, and therefore the real value of government support, will fall because of inflation. If prices fall near the loan rates, inflation-adjusted market prices may be lower. The no-net-cost requirement was discontinued. In the past, the sugar program was designed to operate at no net cost to the government. The 1996 farm act did not renew the no-net-cost provision of the program, and therefore this provision is no longer operative. Without the no-net-cost provision, USDA could in the future choose to set the tariff-rate import quota at a higher level to allow greater imports, which would result in lower U.S. sugar prices. However, it is not yet clear whether USDA will choose to increase the tariff-rate import quota and increase the chance of forfeitures under the nonrecourse loan program. Marketing allotments were eliminated. The 1996 farm act eliminated USDA’s authority to use marketing allotments, which may result in a more efficient allocation of resources in the sugar industry. More efficient producers will no longer have to limit their level of production and marketings in favor of less efficient and higher-cost producers. Any reductions in the costs of production because of increased efficiency may be passed on to users in the form of lower sugar prices. Penalties were imposed on forfeitures. 
The 1996 farm act required that sugar processors be assessed a 1-cent penalty on every pound of raw cane sugar and a 1.07-cent penalty on every pound of refined beet sugar forfeited to the government. This penalty will reduce the effective guaranteed price that processors receive from the government. Because of this penalty, USDA can now support the price of sugar at a level that is 1 cent lower than under the prior farm act without causing processors to forfeit. The 1996 farm act did not eliminate the tariff-rate import quota, which continues to be the key mechanism by which total domestic supply is restricted and U.S. sugar prices are supported. As long as USDA continues to use the tariff-rate import quota as it has in the past to restrict imports and support U.S. prices above the level necessary to prevent forfeitures, the 1996 farm act’s changes (such as limits on the availability of nonrecourse loans) will have little if any impact on U.S. prices. However, these changes could result in lower U.S. prices if there are significant increases in domestic supply (or similarly large decreases in domestic consumption) that prevent USDA from maintaining a tariff-rate import quota of 1.5 million tons while supporting prices at their current level. In commenting on a draft of this report, USDA officials pointed out that such an increase in beet sugar production occurred in fiscal year 1995. If a similar increase in domestic supply occurred under the 1996 farm act, USDA could either (1) keep the tariff-rate import quota at or above 1.5 million tons, which would result in lower sugar prices because of increased supply, or (2) set the tariff-rate import quota below 1.5 million tons, which would result in producers not being eligible for nonrecourse loans, and which could result in lower U.S. sugar prices. If USDA’s implementation of the sugar program continues to insulate the U.S. sugar market from the world market, U.S. prices are likely to remain higher than world prices. 
For fiscal years 1991 through 1995, the average annual world price of raw cane sugar ranged from 9.22 to 13.86 cents per pound, and the average annual U.S. price ranged from 21.39 to 22.76 cents per pound. In addition, according to some sugar analysts who are familiar with trends in world sugar prices, world prices are expected to decline in the short run and, because of the sugar program, U.S. sugar users will continue to pay premium prices. Finally, by supporting the price of U.S. sugar, the sugar program also indirectly supports the prices of other sweeteners, such as high-fructose corn syrup. There is considerable controversy about the size of the premium paid for U.S. sugar and, therefore, the total cost of the sugar program to domestic sweetener users. The size of the premium is controversial because it is not a simple difference between current U.S. and world sugar prices. Instead, the size of the premium depends in part on assumptions about how much the world price would rise if the United States did not have a sugar program. The premium could also be based on an estimate of what the world price would be if all countries eliminated programs that support their sugar industries. Nevertheless, as we and others have shown, higher U.S. sugar prices result in increased costs of hundreds of millions of dollars per year to U.S. sweetener users. USDA has not officially determined the size of the premium that users pay for U.S. sugar. However, in a 1995 report, USDA stated that for every 1-cent-per-pound premium paid for U.S. sugar, the cost to consumers is $178 million (in 1995 dollars). Higher U.S. sugar prices also result in a production inefficiency—the cost of shifting resources from other economic sectors to pay for more expensive domestic production instead of importing lower-cost sugar. A consumption inefficiency also arises when the quantity of sugar purchased at the higher U.S. 
price is less than the quantity that would have been purchased at the lower world price. The government incurs indirect costs of millions of dollars a year as a result of the sugar program when it purchases higher-priced sugar and sweetener-containing products for its food assistance programs. On the other hand, the government receives marketing assessments from sugar processors on each pound of sugar that they market. In order to reduce U.S. sugar prices, we recommended in our 1993 report that the loan rate be reduced gradually and the tariff-rate import quota be adjusted accordingly. Changes made in the 1996 farm act should help reduce U.S. prices if there are significant increases in domestic supply or similar decreases in domestic consumption. However, if domestic market conditions do not change, reductions in U.S. prices could be achieved only by increasing the tariff-rate import quota or eliminating it (no import restrictions). Once increases in the tariff-rate import quota result in U.S. prices dropping to the loan rate, reductions in the loan rate would be necessary to reduce prices further. However, one tradeoff of an increase in the tariff-rate import quota and a lower loan rate would be a reduction in U.S. producers’ revenues. Moreover, according to an official of the American Sugar Alliance, making these changes would adversely affect the long-term viability of the U.S. sugar industry because U.S. sugar production would be replaced by lower-priced imports, most of which receive some form of government support, such as export subsidies. Other sugar industry officials told us that further reductions in domestic sugar production will result in the deterioration of the specialized infrastructure—processing mills, machinery, seeds, and chemicals—necessary to support a domestic sugar industry. We provided copies of a draft of this report to USDA for review and comment. 
We met with officials of the Department, including USDA’s Deputy Chief Economist; the Farm Service Agency’s Assistant Deputy Administrator, Economic Policy Analysis Staff, and 10 other officials representing various commodity divisions within this agency; and an official representing the Commercial Agriculture Division of the Economic Research Service. These officials expressed concern with our findings in the following five areas: USDA officials told us that in their opinion the marketing loan provisions have prevented the loan rates from acting as price floors in the past and will be similarly effective in the future if market conditions warrant their use. They base this position on (1) the strong theory behind the concept of the marketing loan provisions; (2) USDA’s past experience with the generic certificate program, which they said was similar in concept to the marketing loan provisions; and (3) the data that are available for sunflower seeds and cotton. We disagree with USDA that a conclusion about the effectiveness of the marketing loan provisions for all commodities is warranted. While we agree that the marketing loan provisions appear to have prevented the rice loan rate from serving as a price floor, we believe that the evidence is insufficient to reach similar conclusions for the other commodities. For cotton, we disagree that the data on forfeitures and stock accumulations, along with theoretical expectations, are sufficient to reach a conclusion. For wheat, feedgrains, and soybeans, the provisions remain largely untested because U.S. prices and alternative repayment rates have generally been higher than the loan rates; and for minor oilseeds, the data necessary to analyze the provisions’ effectiveness are unavailable or, as USDA acknowledges, “anecdotal.” For cotton, wheat, feedgrains, and oilseeds, we believe that more price data are needed to confirm that the marketing loan provisions prevent the loan rates from serving as price floors. 
USDA officials were also concerned about our reliance on historical data in analyzing the effectiveness of the marketing loan provisions and projecting to the future, particularly when major program changes were made in the 1996 farm act to increase the market orientation of U.S. commodity programs. They stated that in the future there will be a different combination of domestic government commodity programs and a different mix of international trade policies. Therefore, if the effectiveness of the marketing loan provisions is analyzed using historical data, these results should not be projected to the future. In our report, we have added language to recognize that one limitation of using historical data is that some programs that affected U.S. prices in the past have been eliminated by the 1996 farm act. In addition, our report recognizes that marketing loan provisions may prevent the loan rates from serving as price floors in the future only under certain market conditions. USDA officials were concerned that our draft report implied that higher U.S. prices always meant that U.S. commodities were not competitive on world markets. They said that price premiums are justifiable if they reflect the desirability of U.S. commodities over foreign commodities in world markets; they acknowledged that price premiums deriving from program provisions that keep U.S. prices artificially high and pose an impediment to free trade are undesirable. We agree that some price premiums resulting from market factors may be justifiable and do not indicate a lack of competitiveness. Throughout the report, where appropriate, we have changed any reference to “making U.S. prices more competitive” to “lowering U.S. prices to levels that are closer to” alternative repayment rates or world prices. USDA officials disagreed that lower loan rates would reduce U.S. prices. They stated that lowering the loan rates would have little if any effect on reducing U.S.
prices when the marketing loan provisions are available. While they did not disagree that loans have an option value, they told us that if prices fall to levels significantly below the loan rates, the option value of the loans will have at best a marginal impact on U.S. prices. The option value will only influence the seasonal variation of prices, with no significant effect on annual average prices. Furthermore, they told us that if producers obtained commercial loans instead of government loans, producers would still be able to keep their commodities off the market for some period of time. Specifically, for cotton, officials told us that the option value of the loan will be less of a factor in the future because the 1996 farm act eliminated the 8-month loan extension, which in the past allowed the loan to span 2 crop years. For rice, officials stated that the level of the loan rate is irrelevant to producers’ decisions to plant; instead, the main factor is the high cost of rice production. Because of this, USDA officials stated that lowering the loan rate for rice will have little if any impact on U.S. prices. For wheat, feedgrains, and oilseeds, USDA officials hold the view that marketing loan provisions will prevent the loan rates from serving as price floors and therefore lower loan rates will have little if any impact on U.S. prices. Despite USDA’s disagreement, we continue to believe that for cotton and rice, when adjusted world prices are below the loan rates, lower loan rates will likely have some downward effect on U.S. prices. This is because the option value of the loan may be a significant factor affecting U.S. cotton and rice prices. For cotton, while we agree that eliminating the 8-month extension reduces the option value of the loan, we believe that the availability of government-paid storage and import restrictions continue to play a role in affecting the option value of the loan and keeping U.S. cotton prices higher than adjusted world prices. 
To the extent that lowering the loan rate for cotton reduces the loan’s option value, there will be some downward effect on U.S. prices. For rice, although the price data suggest that the marketing loan provisions have prevented the loan rate from serving as a price floor, U.S. rice prices have remained higher than adjusted world prices. To the extent that these higher prices are caused by the availability of nonrecourse loans, we believe that lowering the loan rate for rice will reduce the loan’s option value and will have some downward effect on U.S. prices. For wheat, feedgrains, and oilseeds, we do not take a position on the likely effect of lowering the loan rates on U.S. prices. The report recognizes that most experts expect the marketing loan provisions to work as intended and prevent loan rates from serving as price floors. In this case, lower loan rates will have little if any impact on U.S. prices. However, if marketing loan provisions do not prevent the loan rates from supporting prices, as some others have suggested, then lowering the loan rates may have some downward effect on U.S. prices. USDA officials expressed their strong disagreement with our estimates of the cost of the sugar program to domestic sugar users as reported in 1993 and cited in this report. This is in contrast to USDA’s official comments on our 1993 report, in which USDA stated that our report was reasonable and had no major data problems. At that time, USDA stated that the costs and benefits derived using assumptions of hypothetical policy alternatives were well within the range of most research. However, in commenting on a draft of our current report, USDA officials told us that since our 1993 report was issued, they have changed their position and now strongly disagree with our 1993 estimate of the average annual cost to users of $1.4 billion. They stated that the 1993 report did not adequately consider the complexities and dynamics of the U.S. and global sugar markets. 
They said that the report overestimated the cost of the sugar program to U.S. users, some data were used incorrectly, and important sugar market issues were not considered. Furthermore, they said that using our methodology, different welfare cost impacts could be obtained by selecting prices in different time periods. We continue to believe that our 1993 report provided a reasonable estimate of the cost of the sugar program to U.S. sugar users for the period analyzed. More importantly, we believe that while the precise level of price premium is subject to debate, the program and policy problems that we identified in 1993 are still relevant. USDA officials also suggested a number of technical revisions to our draft. Where appropriate, we have incorporated these revisions into the report. In conducting our review, we interviewed USDA officials from the Commodity Credit Corporation, Economic Research Service, Farm Service Agency, Foreign Agricultural Service, National Agricultural Statistics Service, Office of the Chief Economist, and county offices. We also spoke to officials of the World Bank, academic experts, industry and trade representatives, and agricultural commodity consultants. We also obtained data from USDA, and we reviewed various economic and international trade studies conducted by universities, management consulting groups, USDA, and international agencies. We did not independently verify the data used in this report. We conducted our review from July 1996 through January 1997 in accordance with generally accepted government auditing standards. A detailed discussion of our overall scope and methodology is provided in appendix IV. We are sending copies of this report to the Senate Committee on Agriculture, Nutrition, and Forestry; the House Committee on Agriculture; other interested congressional committees; the Secretary of Agriculture; and other interested parties. We will also make copies available to others on request.
If you or your staff have any questions about this report, please contact me on (202) 512-5138. Major contributors to this report are listed in appendix V. This appendix provides an (1) explanation of how to calculate the net amount that producers receive from the government when they use nonrecourse loans without marketing loan provisions, (2) analysis of how the marketing loan provisions are intended to operate and prevent the loan rates from acting as price floors, and (3) illustration of the differences in marketing loan benefits under various market conditions and the relationship between the alternative repayment rates and U.S. prices. Throughout this appendix, we use prices and the loan rate for corn in our examples to show how calculations are made. The specific calculations for cotton, rice, wheat, feedgrains, and oilseeds may vary to some extent. For example, for cotton and rice, the adjusted world price would be used as the alternative repayment rate and not the posted county price, and for cotton, storage costs would not be included because the government pays storage costs when the adjusted world price is below the loan rate. However, the overall process is the same. Under the nonrecourse loan without marketing loan provisions, producers who kept their commodity under loan for the full 9 months would, upon forfeiture, receive the loan rate (less a service fee) minus the storage costs they incurred. Producers were not required to pay interest when they forfeited their commodities to the government. However, if they repaid the loan, they had to pay interest charges. The hypothetical example in table I.1 shows that when the loan rate for corn was $1.89 per bushel, the net amount producers received from the nonrecourse loan upon forfeiture at maturity was $1.70 per bushel. 
Before the marketing loan provisions were available, the loan rate determined the effective level of price support, which increased during the marketing year to reflect storage and interest costs that producers incurred while holding the corn under loan. The forfeiture option always allowed them to net $1.70 at the end of 9 months. To be better off selling at any time during the 9-month loan period, producers needed to receive an amount that made them at least as well off as forfeiting at the end of the loan period. Producers had to receive an amount that allowed them to repay the loan amount of $1.89 plus accrued interest, minus the amount of refunded prepaid storage costs. (Producers who choose to keep their commodities under loan are responsible for paying storage costs in advance for the full term of the loan.) For example, after 3 months, producers would have had to receive at least $1.80 ($1.89 plus 3 cents for interest minus 12 cents for refunded storage costs) to be better off selling rather than leaving the commodity under loan for another 6 months and then forfeiting it to the government. At 9 months, producers would have had to receive at least $1.98 ($1.89 plus 9 cents in interest, without any refund for storage) to be better off selling rather than forfeiting the commodity to the government. In this example, when prices fell below $1.98 at the end of the loan period, producers forfeited their commodities and government stocks rose. The marketing loan provisions were added in part to eliminate the price floors created by the loan rates. When the alternative repayment rate is below the loan rate at the time of harvest, the marketing loan provisions provide a producer who holds a nonrecourse loan with two options: (1) redeem the loan at any time at the alternative repayment rate (for corn, this is the posted county price) and sell the commodity at the market price or (2) forfeit the commodity after 9 months at the loan rate. 
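The forfeiture and break-even arithmetic above can be sketched as follows. The 1-cent-per-month interest and 2-cents-per-month prepaid storage rates are assumptions inferred from the worked figures ($1.80 at 3 months, $1.98 at 9 months), as is the 1-cent service fee that reconciles the $1.70 forfeiture net:

```python
# Nonrecourse loan WITHOUT marketing loan provisions, using the report's
# corn example (loan rate $1.89/bu). The per-month interest, per-month
# prepaid storage, and service fee are assumptions consistent with the
# worked numbers in the text, not figures stated outright.
LOAN_RATE = 1.89           # $/bushel
INTEREST_PER_MONTH = 0.01  # assumed; owed only if the loan is repaid
STORAGE_PER_MONTH = 0.02   # assumed; prepaid for the full 9-month term
SERVICE_FEE = 0.01         # assumed
TERM_MONTHS = 9

def forfeiture_net():
    """Net to a producer who forfeits at maturity (interest is waived)."""
    return LOAN_RATE - STORAGE_PER_MONTH * TERM_MONTHS - SERVICE_FEE

def break_even_price(month):
    """Minimum sale price at `month` that beats holding until forfeiture:
    repay principal plus accrued interest, less refunded prepaid storage."""
    interest = INTEREST_PER_MONTH * month
    storage_refund = STORAGE_PER_MONTH * (TERM_MONTHS - month)
    return LOAN_RATE + interest - storage_refund

print(f"{forfeiture_net():.2f}")      # 1.70
print(f"{break_even_price(3):.2f}")   # 1.80
print(f"{break_even_price(9):.2f}")   # 1.98
```

When the market price at maturity is below the $1.98 break-even, forfeiting dominates selling; this is how the loan rate propped up prices and swelled government stocks.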
Under the first option, the difference between the loan rate and the alternative repayment rate represents a marketing loan gain to the producer. In addition, producers who repay their loans at the alternative repayment rate do not have to pay accrued interest on the loan. (Those producers who choose to forego loans can receive government payments equal to the marketing loan gains. These amounts are known as loan deficiency payments.) When the alternative repayment rate is below the loan rate, producers are better off by choosing the first option because they can obtain the full value of the loan rate without incurring the full 9 months of storage costs associated with forfeitures and are relieved of the interest costs on the loan. For example, if the alternative repayment rate at the time of harvest is $1.60 per bushel, producers are eligible for marketing loan benefits of 29 cents per bushel (the difference between the loan rate of $1.89 and the posted county price of $1.60). The producer sells the corn at $1.60 (this example assumes that the posted county price remains unchanged and equals the market price) and receives a total return of $1.89 (market price of $1.60 plus the marketing loan benefit of 29 cents), which is the full value of the loan. Because producers can receive the full value of their loans even when marketing their commodities at prices below the loan rates, the marketing loan provisions can prevent the loan rates from serving as price floors. The longer producers hold their commodities under loan, the more their benefit is reduced by storage costs. Producers have an incentive to use the marketing loan provisions early in the marketing year to avoid the greatest amount of storage costs. The analysis in the previous section assumes that the posted county price and the price offered to the producer (hereafter known as market price) are the same. 
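A sketch of the redemption option's arithmetic, using the corn figures above ($1.89 loan rate, $1.60 posted county price) and the three market prices from table I.2:

```python
# Marketing loan gain and total producer return under the marketing loan
# provisions, using the corn example from the text.
LOAN_RATE = 1.89  # $/bushel

def marketing_loan_gain(posted_county_price):
    """Gain from redeeming the loan at the posted county price (zero when
    the posted county price is at or above the loan rate)."""
    return max(LOAN_RATE - posted_county_price, 0.0)

def total_return(posted_county_price, market_price):
    """Marketing loan gain plus proceeds from selling at the market price."""
    return marketing_loan_gain(posted_county_price) + market_price

# Posted county price of $1.60 against market prices equal to, above,
# and below it -- the table I.2 scenarios.
for market in (1.60, 1.65, 1.55):
    print(f"{total_return(1.60, market):.2f}")  # 1.89, 1.94, 1.84
```

Because the producer nets the full loan rate (or more) even while selling below it, the loan rate need not act as a price floor, which is the purpose of the provisions.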
However, because the posted county price is based on the previous day’s terminal prices and lags behind the market, it could be lower or higher than the market price. The total benefit that a producer receives depends on the relationship between the posted county price and the market price. As shown in table I.2, producers benefit more when the posted county price is lower than or equal to the market price than they do when the posted county price is above the market price. According to USDA officials, marketing loan gains are most likely to be made to producers when the posted county price is lower than or equal to the market price. When the posted county price is above the market price, producers would generally be expected to wait until the U.S. price rose or the posted county price fell before they redeemed their loans. However, the amount of time producers are willing to wait for higher prices will depend on the tradeoff between their expected price gains, additional storage costs, and their expectations about future market prices.

Table I.2: Producer Returns When the Posted County Price of $1.60 Is Equal to, Below, or Above the Market Price (dollars per bushel; loan rate of $1.89)

                                                      Market price
                                               $1.60         $1.65         $1.55
Marketing loan gain on redeeming the
loan at the posted county price
(1.89 - 1.60)                                   0.29          0.29          0.29
Total returns (marketing loan gain
plus market price)                              1.89          1.94          1.84

This appendix provides our detailed analyses of the effects of the marketing loan provisions on U.S. prices for cotton, rice, wheat, feedgrains, and oilseeds. Over the last 10 years, when the marketing loan provisions were in effect, U.S. and world cotton prices were above the loan rate for all but 35 months, and producers did not use the marketing loan provisions to redeem their loans. During the 35 months when the adjusted world price was below the loan rate, producers received about $2.6 billion in marketing loan gains and loan deficiency payments. Figure II.1 shows the relationship between the adjusted world price, the U.S.
price, and the loan rate for this period.

[Figure II.1: Relationship Between the Adjusted World Price, U.S. Price, and Loan Rate for Cotton, 1986-95]

As shown in figure II.1, U.S. prices fell below the loan rate for only 5 of the 35 months that world cotton prices were below the loan rate, and in only 2 of the 5 months was the U.S. price below the loan rate by more than 1 cent per pound. These price data might suggest that the marketing loan provisions were not working and that the loan rate was creating a floor for U.S. prices. However, this conclusion may be premature because during the last 10 years, several other program features, some of which no longer exist, and market factors contributed to keeping U.S. prices higher than adjusted world prices and the loan rate. These program features include the option value of the loan resulting from the availability of the loan at a particular loan rate, the availability of government-paid storage, quotas on imports, and, in the past, the availability of a loan extension and restrictions on production. These features have allowed producers to store their cotton under loan until either price conditions become more favorable or they can forfeit the cotton to the government. To overcome the disincentives created by the program features and to get cotton out of storage and to the market, cotton buyers (domestic textile mills and exporters) have had to pay premium prices. These premiums have kept U.S. prices higher than the adjusted world prices. In addition, U.S. cotton producers receive premium prices because of a number of market factors, such as confidence that the terms of the contract will be fulfilled (known as contract sanctity/reliability), high-quality standards, and transportation advantages. These program features and market factors are discussed below. Option value of the loan. The option to hold cotton under a nonrecourse loan has a value known as the option value of the loan.
The loan rate guarantees producers a minimum price and makes it easier for them to keep cotton off the market while waiting for prices to rise. Therefore, unless producers are offered a premium price that compensates them for giving up their option to continue to keep cotton under loan, they have little incentive to take cotton out of loan. Buyers are willing to pay a premium price because when they acquire cotton, they can continue to keep the cotton they acquire under loan, retaining some of the option value. The option value of the loan increases at higher loan rates (or decreases at lower rates) because the level of the loan rate determines the degree of price protection. Government-paid storage. For cotton alone, the government pays storage costs when the adjusted world price nears or drops below the loan rate. As a result, producers can keep cotton off the market at no cost to them. This government-paid storage increases the option value of the loan and therefore increases the price that buyers will pay for cotton. In the past, the government also paid storage costs for up to 60 days prior to the time the cotton was placed under loan. However, beginning with the 1996 crop year, the U.S. Department of Agriculture (USDA) has changed its regulations so that government-paid storage costs will be limited to the period of time when the cotton is actually under loan. Producers will be responsible for all storage charges that accrue prior to that time. Import quotas and transportation costs. Import quotas and high transportation costs largely inhibit domestic textile mills from importing cotton. Therefore, except under certain conditions when the U.S. price is significantly higher than the adjusted world price, U.S. producers have a captive domestic market and do not have to compete against foreign producers who are selling cotton at lower world prices. For example, the step 3 provision allows specified amounts of cotton imports when the U.S. 
price is substantially above the adjusted world price for a significant period of time. Contract sanctity/reliability. USDA officials told us that foreign buyers of U.S. cotton are willing to pay a premium price because less risk is associated with this purchase. Buyers can expect the terms of the contract to be fulfilled and the product, as specified, to be delivered as promised. High-quality standards. USDA officials told us that the reliable quality of U.S. cotton is one of the market factors that results in a premium price for U.S. cotton. High-quality standards and strict grading procedures applied to U.S. cotton reduce the buyer’s risk that is frequently associated with purchasing cotton in a foreign market. Loan extension. The 1996 farm act eliminated the provision that had allowed producers to extend their loans for an additional 8 months, which had provided a total loan period of 18 months. The elimination of the extension will reduce the option value of the loan in the future because producers will have less time to keep their cotton under loan while waiting for prices to rise. USDA officials told us that the elimination of the extension is particularly important because the loan will no longer span 2 crop years. Production restrictions. Prior to the 1996 farm act, production restrictions—acreage set-asides and the 50/85/92 program—reduced supply to some extent, and prices were higher because less cotton was available on the market. The 1996 farm act eliminated these production restrictions. This change should have a downward effect on U.S. cotton prices in the future. Furthermore, U.S. cotton prices are higher than the adjusted world price because the adjusted world price is based on the cost of transporting U.S. cotton to Northern Europe. USDA estimates the world price for cotton from average prices quoted in Northern Europe, adjusts the world price for U.S. 
quality differences, and subtracts the cost of transporting cotton from the United States to Europe—about 12 cents per pound—to arrive at an adjusted world price. Domestic buyers incur only a 5-cents-per-pound cost of transporting cotton to domestic mills. As a result, domestic buyers gain a price advantage of 7 cents per pound on the value of the cotton they purchase. This price advantage contributes to the price premium that buyers offer to cotton producers to persuade them to take cotton out of storage and sell it rather than hold it and eventually forfeit it to the government. In addition, because USDA sets the adjusted world price weekly and U.S. prices change daily, buyers and producers can take advantage of the fluctuating differences between the two prices and further increase their returns from the program. Finally, because the adjusted world price is a price based on a formula rather than a market-determined price, cotton industry officials we spoke to stated that it may not accurately reflect actual world cotton prices and therefore may not be a good measure of U.S. competitiveness. Because all the factors mentioned above result in premium prices for U.S. cotton, it cannot be determined whether the loan rate will still act as a price floor under the marketing loan provisions until market conditions cause the adjusted world price to drop far enough below the loan rate to overcome the price premium. During the last 10 years, for 33 of the 35 months when the adjusted world price was below the loan rate by at least 1 cent, the adjusted world price would probably have had to fall even further below the loan rate to counter the effect of the premium and cause U.S. prices to fall below the loan rate. It is not possible to predict whether market conditions during the life of the 1996 farm act will result in the use of the marketing loan provisions and whether the adjusted world price will fall low enough to fully counter the premium and allow the U.S. 
price to fall below the loan rate. During the last 10 years, when the marketing loan provisions were in effect for rice, the adjusted world price was below the loan rate in 81 months. During 21 of these 81 months, when the adjusted world price was particularly low, the U.S. price fell below the loan rate. Unlike the inconclusive cotton data, the data for rice suggest that when market conditions result in an adjusted world price that is substantially lower than the loan rate, the marketing loan provisions prevent the loan rate from serving as a price floor. Figure II.2 shows the relationship between the adjusted world price, U.S. price, and loan rate for rice for August 1986 through August 1996.

[Figure II.2: Relationship Between the Adjusted World Price, U.S. Price, and Loan Rate for Rice, August 1986 Through August 1996]

Regardless of the availability of the marketing loan provisions, the U.S. price will generally remain higher than the adjusted world price because of several factors that cause buyers to pay a premium for U.S. rice. In addition to the option value resulting from the availability of the loan at a particular loan rate, other factors that result in a premium price include contract sanctity/reliability, high-quality standards, and significant tariffs and transportation costs that limit imports. Moreover, the method used to calculate the adjusted world price may contribute to keeping the U.S. price higher than the adjusted world price. Each of these factors is discussed below. Option value of the loan. As in the case of cotton, the option to hold rice under loan has a value because the loan rate guarantees producers a minimum price, making it easier to keep rice off the market. In addition, under the marketing loan provisions, interest that has accrued on the loan is forgiven when the loan is repaid at the adjusted world price.
According to one rice industry official, because the adjusted world price for rice has been below the loan rate for long periods of time, the loan has essentially become interest-free. Domestic rice millers and exporters recognize the value of this “interest-free loan” and are willing to pay premium prices to producers. Contract sanctity/reliability. USDA officials and industry representatives agree that U.S. rice buyers are willing to pay a premium price for U.S. rice because less risk is associated with this purchase. Buyers can expect the terms of the contract to be fulfilled and the product, as specified, to be delivered as promised. Sellers from other countries are generally not able to back their products with the same level of contract sanctity and reliability. High-quality standards. High-quality standards and strict grading procedures applied to U.S. rice reduce the buyer’s risk that is frequently associated with purchasing rice in a foreign market. Industry officials told us that the quality of U.S. rice is consistently better than the same type of rice produced by any other country. This quality advantage is reflected in a higher price for U.S. rice. Import tariffs and transportation costs. Even though rice does not have an import quota like cotton, it does have an import tariff of up to 35 percent, depending on the country and/or quality of rice. In addition, according to industry officials, significant transportation costs are incurred when shipping rice to the United States. Because of both the tariff and the transportation costs, as well as concerns about quality and reliability, only a small quantity of rice is imported into the United States. Consequently, the lack of competition in the U.S. market from lower-priced imports helps keep the U.S. price higher than the adjusted world price. In commenting on a draft of this report, USDA officials disagreed with the importance of tariffs in protecting the U.S. rice market. 
Currently, imported rice consists almost exclusively of varieties not grown in the United States. However, these officials did not address the question of how much rice similar to U.S.-grown rice might be imported if the tariff were not as high. As in the case of cotton, the adjusted world price may not consistently reflect actual world prices. Since there is no readily available source of world market prices for rice, USDA has to calculate a world price for rice on the basis of actual transaction prices in international rice markets. This world price is then adjusted for transportation costs and some quality differences. Even though the adjusted world price is based on market data, it is still a formula-based price and may not represent actual world market conditions. Moreover, unlike the formula for cotton, the formula USDA uses to determine the world price and adjusted world price for rice is not publicized. According to one USDA official, the formula is not publicized to prevent price manipulation by foreign competitors and domestic producers. However, the formula’s confidentiality has led experts to question its validity. Some industry officials we spoke to stated that the adjusted world price for rice is set too high, while some agricultural economists stated that it is set too low. Setting the adjusted world price too low would increase the premium paid by domestic buyers for U.S. rice. The forecasts of USDA and others indicate that while U.S. prices are expected to remain above the loan rate for the 7-year duration of the 1996 farm act, world prices are predicted to be lower than the loan rate in some of those years. If the adjusted world price falls far enough below the loan rate, producers’ use of marketing loan provisions should allow U.S. prices to also fall below the loan rate. For wheat, feedgrains, and oilseeds, the historical data needed to assess the effect of the marketing loan provisions are limited.
Unlike the cotton and rice programs, which have over a decade of experience with the marketing loan provisions, oilseeds have had these provisions in effect only since 1991 and wheat and feedgrains only since 1993. Moreover, since the marketing loans were authorized for these commodities, U.S. prices have generally been above the loan rates, and the federal government has spent only a limited amount on marketing loan gains and loan deficiency payments. The marketing loan provisions were used only in crop years 1993 and 1994 for wheat and feedgrains, and gains were realized on only a small percentage of the total U.S. production of these commodities. However, for oilseeds, these provisions were used for crop years 1991 through 1994. Table II.1 provides information on the total quantity of wheat, feedgrains, and oilseeds produced in crop years 1993 and 1994; the percent of total production realizing marketing loan benefits; and the average marketing loan gain or loan deficiency payment received.

[Table II.1: total production by commodity (in mil. bu. or cwt.), the percent of production realizing marketing loan benefits, and the average marketing loan gain and average loan deficiency payment per bushel or cwt. Legend: bu. = bushel; cwt. = hundredweight; mil. = million]

For some commodities, payments were made only in a single year. Therefore, for those commodities, information is provided for the year when payments were made. Generally, marketing loan gains and loan deficiency payments were made for a small share of the total production during crop years 1993 and 1994. For example, for corn, total marketing loan benefits (marketing loan gains and loan deficiency payments) were realized on 1 percent of the total bushels produced in crop year 1994.
Five states (Illinois, Indiana, Michigan, Ohio, and Wisconsin) received about 95 percent of the total loan deficiency payments made for corn in crop year 1994. The average marketing loan gain for corn was $0.02 per bushel in crop year 1994, and the average loan deficiency payment was $0.04 per bushel. Furthermore, 50 percent of the loan deficiency payments made to corn producers in crop year 1994 occurred when the alternative repayment rate was no more than 3 cents below the loan rate. With less than a 2-percent difference between the repayment rate and the loan rate, it is difficult to determine whether the loan rate was acting as a price floor for corn during that year. Even if additional data were available, particular aspects of each commodity’s program and market features make it difficult to reach firm conclusions about the performance of the marketing loan provisions in allowing U.S. market prices for wheat, feedgrains, and oilseeds to drop below the loan rates. For example: For wheat, only one county loan rate applies to all five classes of wheat, but there are five alternative repayment rates. The average county loan rate may be set too high or too low for a particular class of wheat. As a result, for some classes of wheat, the fact that forfeitures occurred would not necessarily indicate that the loan rate was supporting prices but rather that the loan rate provided a price advantage not normally supported by the market. For wheat, corn, and other feedgrains, the market is becoming more specialized because some buyers are willing to pay a premium for certain quantities of grain with specific characteristics. Such contractual arrangements result in several U.S. prices existing simultaneously, some of which could be above the loan rate because of price premiums. It is therefore difficult to assess, at any given time, whether the loan rates are supporting prices or whether the contractual arrangements are keeping prices higher than the loan rates. 
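The marketing loan gain and loan deficiency payment figures for corn quoted above follow a simple rule: the per-bushel benefit is the loan rate minus the alternative repayment rate, floored at zero. The sketch below illustrates this in Python; the specific loan rate and posted county price are hypothetical, not USDA's actual 1994 county rates.

```python
# Sketch of the marketing loan benefit rule: the per-bushel gain (or loan
# deficiency payment) is the loan rate minus the alternative repayment
# rate, floored at zero. The rates below are hypothetical illustrations.

def per_bushel_benefit(loan_rate: float, repayment_rate: float) -> float:
    """Marketing loan gain or loan deficiency payment per bushel."""
    return max(loan_rate - repayment_rate, 0.0)

loan_rate = 1.89             # $/bu, hypothetical county loan rate for corn
posted_county_price = 1.86   # $/bu, a repayment rate 3 cents below the rate

gain = per_bushel_benefit(loan_rate, posted_county_price)
print(f"benefit: ${gain:.2f}/bu")   # prints: benefit: $0.03/bu

# When the repayment rate is above the loan rate, no benefit accrues.
assert per_bushel_benefit(1.89, 2.00) == 0.0
```

Because half of the 1994 corn loan deficiency payments were made when the gap was 3 cents or less, most benefits were of roughly this small magnitude, which is one reason the price-floor effect is hard to detect in the data.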
For oilseeds, since 1991, most payments under the marketing loan provisions have been made for minor oilseeds. However, little price information exists for these commodities because many of the minor oilseeds are grown under contract or are thinly traded. For example, flaxseed received marketing loan benefits on almost 70 percent of the total crop produced in crop years 1991 through 1993. But most of this crop was grown under contract and little price information is available, according to a USDA official. Moreover, because flaxseed is a thinly traded commodity, determining its alternative repayment rates is also difficult. Limited price data make it difficult to assess whether the loan rate is acting as a price floor. In addition, for wheat, feedgrains, and oilseeds, the method that USDA uses to calculate the alternative repayment rates—posted county prices—hinders an assessment of the marketing loan provisions’ effectiveness in allowing U.S. prices to drop below the loan rates. USDA determines each county’s posted county price, daily for wheat, feedgrains, and soybeans, and weekly for minor oilseeds, by using the appropriate terminal price from the previous day or week, adjusted for transportation costs and other factors. Because the terminal price may not reflect local county market conditions, the posted county price is not always consistent with local prices. Moreover, because posted county prices measure the previous day’s or week’s terminal prices, they do not incorporate new information that may affect prices on a particular day. As a result, in some instances, the posted county price may be set below the loan rate when actual market conditions warrant a posted county price above the loan rate. In these cases, it may appear that the loan rate is supporting the U.S. price, when in actuality the posted county price may not be reflecting local county market conditions and prices. (See app. 
I for more information on how the relationship between the posted county price and the U.S. price affects the benefits producers receive under the marketing loan provisions.) Lacking conclusive data, USDA officials, agricultural economists, and other commodity analysts disagree on the extent to which the marketing loan provisions will prevent the loan rates from acting as price floors for wheat, feedgrains, and oilseeds. Many USDA officials and agricultural economists we spoke to expect that the marketing loan provisions for wheat, feedgrains, and oilseeds will work largely as intended if alternative repayment rates fall below the loan rates. They expect these provisions to be most effective when prices fall substantially below the loan rates and remain there for a significant period of time. For example, one USDA official told us that producers used the generic commodity certificate program during a period of low prices in the 1980s. Therefore, he stated that it is likely that producers will use the marketing loan provisions if the posted county prices fall substantially below the loan rates in the future. Moreover, these experts stated that when prices are below the loan rates, it will be to the producers’ advantage to use the marketing loan provisions because the producers must pay for storage if they choose not to sell. Producers would usually gain from using the marketing loan provisions and selling their crops instead of forfeiting them because they would not incur the storage costs they would have had to pay if they had held their commodity for the full term of the loan and then forfeited it. (See app. I for further discussion on producers’ marketing loan gains.) These experts also stated that because producers would be willing to accept lower prices for their commodities and use the marketing loan provisions, loan rates would no longer act as price floors, and forfeitures would be unlikely to occur.
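The payoff comparison these experts describe can be framed as a simple model: redeem the loan at the posted county price and sell now, or hold the crop under loan and forfeit it at maturity. All of the dollar figures in this sketch are hypothetical; none come from the report.

```python
# Sketch of a producer's choice when the posted county price (PCP) is below
# the loan rate: redeem the loan at the PCP and sell now, or hold the crop
# under loan (paying storage) and forfeit it at maturity. All dollar
# figures are hypothetical.

def redeem_and_sell(loan_rate: float, pcp: float,
                    market_price: float, transaction_cost: float) -> float:
    # Keep the marketing loan gain (loan rate minus PCP) plus sale proceeds,
    # less the cost of executing the redemption and sale.
    return (loan_rate - pcp) + market_price - transaction_cost

def hold_and_forfeit(loan_rate: float, storage_cost: float) -> float:
    # Receive the loan rate at maturity but pay storage until then.
    return loan_rate - storage_cost

loan_rate, pcp = 1.89, 1.80      # $/bu
market_price = 1.80              # assume the local price equals the PCP
storage_cost = 0.15              # cumulative storage to loan maturity, $/bu
transaction_cost = 0.02          # cost of redeeming and selling, $/bu

redeem = redeem_and_sell(loan_rate, pcp, market_price, transaction_cost)
forfeit = hold_and_forfeit(loan_rate, storage_cost)
print(redeem > forfeit)   # avoiding storage costs favors redemption here
```

With a narrower gap between the PCP and the loan rate, the transaction cost can outweigh the marketing loan gain, which is the "temporary resistance" near the loan rate that some analysts describe.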
However, a few agricultural economists and commodity analysts offer several reasons why the loan rate may at times provide some price support despite the marketing loan provisions. For example, some told us that when U.S. prices and posted county prices are slightly below loan rates, a temporary resistance prevents prices from falling further below the loan rate. This happens because the gain from using marketing loan provisions may not be enough to overcome the transaction costs associated with using the provisions. In this case, producers may continue to hold their commodities under loan and temporarily keep U.S. prices above or at the loan rates. These experts stated that if supply and demand conditions warrant prices falling further below loan rates, this resistance is most likely to disappear. Some also stated that the loan rate may at times provide price support because the option value of the loan is relatively large compared with the potential savings from avoiding storage costs. If so, producers may prefer to keep their commodities under loan and forfeit them if prices remain low despite the marketing loan provisions. In addition, the greater the option value of the loan, the greater resistance loan rates will provide against falling prices. Furthermore, because the posted county prices are sometimes not consistent with local U.S. prices, some agricultural economists told us that if posted county prices are higher than the local county prices, producers may have little incentive to use the marketing loan provisions and may choose to forfeit their commodities. The extent to which this may occur depends on the actual differences between the posted county prices and U.S. prices and the potential to avoid storage costs by redeeming loans at the posted county prices. According to 1996 forecasts by USDA and others, U.S. prices for wheat, feedgrains, and soybeans are expected to be above the loan rates for the next several years. 
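The possibility that a posted county price diverges from the local market, noted above, can be illustrated with a toy calculation. The terminal price, location differential, and local cash price below are all hypothetical, and the derivation shown is a simplification of USDA's actual adjustment procedure.

```python
# Sketch: the PCP is derived from the previous day's terminal price less a
# location differential, so it can sit above the local cash market. When it
# does, the marketing loan gain offered to the producer shrinks or vanishes,
# weakening the incentive to redeem rather than forfeit. All prices are
# hypothetical.

def posted_county_price(prior_day_terminal: float, differential: float) -> float:
    return prior_day_terminal - differential

loan_rate = 1.89                          # $/bu
pcp = posted_county_price(2.10, 0.18)     # about $1.92, above the loan rate
local_cash = 1.80                         # the local market is actually weaker

gain_at_pcp = max(loan_rate - pcp, 0.0)           # no gain is offered
gain_at_local = max(loan_rate - local_cash, 0.0)  # about $0.09 implied locally
print(gain_at_pcp, gain_at_local)
```

In this situation the producer is offered no marketing loan gain even though local conditions would justify one, so forfeiting at the loan rate can look more attractive than redeeming.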
Under these market conditions, the marketing loan provisions will not be used. However, during 1996, prices for wheat and feedgrains fell substantially. For example, cash prices for corn fell from a high of $5.25 per bushel on July 11, 1996, to a low of $2.51 per bushel on November 5, 1996. (Some of this difference was due to seasonal variations.) If prices continue to fall to levels near the loan rate of $1.89, then producers may use the marketing loan provisions.

The 1996 farm act lowered the quota support price for peanuts to reduce U.S. peanut prices and the cost of the peanut program to the government. This appendix discusses additional changes made to the peanut program and their effect on the U.S. peanut market. This appendix also includes an economic analysis of the effect of the reduced quota support price on the national poundage quota and on the U.S. peanut market. In addition to the reduction in the quota support price, discussed on page 15, other changes were made to the peanut program in the 1996 farm act: elimination of the legislatively set minimum national poundage quota; authorization to increase marketing assessments; elimination of provisions allowing the carryover of unfilled quota from year to year (undermarketings); redefinition of the peanut quota to exclude seed peanuts; limits on transfer payments (known as disaster transfers) made to quota holders whose commodity is of lesser quality; and added marketing requirements for maintaining program eligibility. These changes should enable USDA to better control the quantity of peanuts marketed at the quota support price, thus reducing the government’s costs associated with the program. In addition, out-of-state nonfarmers and government entities can no longer hold quota; and the annual sale, lease, and transfer of quota is now permitted across county lines within a state, up to specified amounts of quota. These changes will improve the equity and economic efficiency of the peanut program.
The following discusses these changes in detail:

National poundage quota. The 1996 farm act eliminated the minimum level for the national poundage quota, which refers to the quantity of peanuts that can be marketed domestically at the support price. The minimum quota is no longer fixed at 1.35 million tons by legislation. Instead, if conditions warrant, the national poundage quota may fall to lower levels. For crop year 1996, USDA set the quota at 1.1 million tons—0.25 million tons less than the minimum set under the previous legislation. This lower quota is intended to be more in line with the estimated quantity of peanuts demanded at the $610 per ton support price. If market conditions change in the future, USDA now has the ability to match the quota to the changing quantity demanded at the fixed support price. In addition, if the quota is set to equal the quantity of peanuts demanded at the support price, government costs for the program should be minimized. This is because the government would not have to purchase surplus peanuts to maintain the quota support price.

Marketing assessments. The 1996 farm act provides USDA with the authority to increase future marketing assessments if marketing assessments in the current year do not cover all losses incurred from operating the peanut loan program. According to USDA officials, this provision will help ensure that the peanut program operates at no net cost to the Treasury.

Undermarketings. The 1996 farm act further enhanced USDA’s ability to set the quota by no longer allowing the carryover of quota from year to year when producers are unable to produce enough peanuts to meet their quota. The amount of peanuts represented by the quota carried over to the next year was known as undermarketings. Previously, these undermarketings were in addition to the national poundage quota set for the year.
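The quota-setting logic described under the national poundage quota change, matching the quota to the quantity demanded at the fixed support price, can be mimicked with a toy linear demand curve. The curve parameters and the prior support price below are invented for illustration; only the $610 per ton figure comes from the text.

```python
# Toy linear demand curve for peanuts. All parameters are invented; only
# the qualitative relationship matters: a lower support price raises the
# quantity the market will absorb at that price, so the quota can be set
# higher without forcing the government to buy surplus peanuts.

def quantity_demanded(price: float,
                      intercept: float = 2.0e6,
                      slope: float = 1.5e3) -> float:
    """Tons purchased at a given price on a downward-sloping demand curve."""
    return intercept - slope * price

old_support = 678.0   # hypothetical prior quota support price, $/ton
new_support = 610.0   # support price under the 1996 farm act, $/ton

q_at_old = quantity_demanded(old_support)   # surplus-free quota at old price
q_at_new = quantity_demanded(new_support)   # surplus-free quota at new price

print(q_at_new > q_at_old)   # -> True
```

If the quota is set at the quantity demanded at the support price, the government buys no surplus peanuts to defend that price, which is the cost-minimizing condition the report describes.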
By eliminating undermarketings, the 1996 farm act improved USDA’s ability to control the quantity of peanuts marketed at the quota support price.

Seed peanuts. For the 1996 through 2002 crop years, producers will be allocated a temporary quota for peanuts to be used as seed. Previously, producers had to purchase quota peanuts rather than less expensive additional peanuts for seed. The new quota for seed in effect reimburses producers for the extra expense of using the quota peanuts. Under the previous legislation, the national poundage quota was based on domestic edible, seed, and related uses. Now the national poundage quota will not include seed use. The quota for seeds will be in addition to the national poundage quota. Also, the quota for seeds will be temporary and will only apply to the seeds used in the year the quota is issued. While the separate quota for seeds may increase the total quantity of quota, it ensures that the national poundage quota represents more closely only those peanuts marketed for edible use.

Disaster transfers. Under the previous legislation, quota peanut producers who harvested a crop but were unable to market it commercially because it had been damaged by weather, insects, or disease were protected from a loss in income by disaster transfer payments. To qualify for the transfer payment, producers placed their damaged peanuts into the government’s additional peanuts loan program and received the support price established for additional peanuts. Furthermore, they received the disaster transfer payment, which is the difference between the higher quota support price and the support price for additional peanuts. These transfer payments ensured that quota holders received the quota support price regardless of the quality of the peanuts they produced. Under the new legislation, disaster transfers are limited to 25 percent of the producer’s quota and 70 percent of the quota support price.
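One plausible reading of the new disaster transfer limits can be sketched numerically. The $610 per ton quota support price and the 25 percent and 70 percent caps come from the text; the support price for additional peanuts is hypothetical, and the way the two caps combine is an interpretation, not taken from the report.

```python
# Sketch contrasting disaster transfer payments before and after the 1996
# farm act. The $610/ton quota support price and the 25%/70% caps come
# from the text; the additionals support price is hypothetical, and the
# way the caps combine is one plausible interpretation.

QUOTA_SUPPORT = 610.0        # $/ton (from the report)
ADDITIONAL_SUPPORT = 132.0   # $/ton (hypothetical additionals support price)

def old_transfer(tons_damaged: float) -> float:
    # Previous law: full gap between the two support prices on every ton.
    return (QUOTA_SUPPORT - ADDITIONAL_SUPPORT) * tons_damaged

def new_transfer(tons_damaged: float, quota_tons: float) -> float:
    # 1996 act: eligible tonnage capped at 25% of the producer's quota, and
    # the combined receipt capped at 70% of the quota support price.
    eligible = min(tons_damaged, 0.25 * quota_tons)
    capped_rate = min(QUOTA_SUPPORT, 0.70 * QUOTA_SUPPORT) - ADDITIONAL_SUPPORT
    return max(capped_rate, 0.0) * eligible

print(old_transfer(100), new_transfer(100, 200))
```

Under these assumed numbers a producer with 200 tons of quota and 100 damaged tons would collect markedly less than under the old rules, which is the cost-reducing direction of the change.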
Marketing requirements for maintaining program eligibility. Producers who market 100 percent of their quota peanuts through a marketing association loan for 2 consecutive years shall be ineligible for price support the next crop year if during the prior 2 years they received and did not accept a written offer from a buyer for at least the quota support price.

Reallocation of peanut quota held by out-of-state nonproducers or government entities. Effective with the 1998 crop year, peanut quota may no longer be held by people who are not peanut producers or whose primary residence and place of business is located outside the state in which the quota is allocated. In addition, peanut quota will be forfeited for farms owned or controlled by municipalities, airport authorities, schools, colleges, refuges, and other public entities. The forfeited quota will be allocated to other eligible producers in the state. The change made pursuant to the 1996 farm act will help ensure that peanut producers, rather than peanut quota holders who do not produce peanuts, are the beneficiaries of the peanut program.

Transfer of peanut quota across county lines. The 1996 farm act allows for the annual transfer of the peanut quota across county lines within the same state for counties with less than 50 tons of quota. For counties with more than 50 tons of quota, the amount of transfer is limited to 40 percent of the quota in the transferring county as of January 1, 1996. The cumulative out-of-county transfers for any state, however, may not exceed 15 percent for 1996, 25 percent for 1997, 30 percent for 1998, 35 percent for 1999, and 40 percent for 2000. The previous legislation allowed the transfer of quota freely across county lines only in those states that had less than 10,000 tons of quota and under certain conditions within contiguous counties in the same state.

An economic analysis of the effect of the reduced quota support price on the national poundage quota and on the U.S.
market illustrates that as a result of changes made under the 1996 farm act, more peanuts will be available at a lower price than under the previous legislation. Additional reductions in the quota support price may further reduce the price of U.S. peanuts. The method by which the support price and national poundage quota interact is shown in figure III.1.

P1 = Quota support price under previous legislation
P2 = Quota support price under 1996 farm act
Pe = Price if there were no peanut program with current demand (D2)
Qe = Quantity consumed if there were no peanut program with demand curve D2
Q1 = Quantity consumed at the quota support price P1 on D1
Q2 = Quantity consumed at the quota support price P2 on D2
Q3 = Quantity consumed at the quota support price P1 on D2
S = Supply curve
D1 = Demand curve prior to change in consumer taste for peanuts
D2 = Demand curve after change in consumer taste for peanuts

This figure is a simplified economic representation of how the peanut market operates. The supply curve shows the different quantities of peanuts that producers will offer at each price. The demand curve shows the different quantities of peanuts that buyers will purchase at each price. Prior to the 1996 farm act, the support price was set at a level represented in the figure by P1, and the minimum national poundage quota was set at a quantity represented by Q1. In recent years, domestic use of peanuts has fallen short of the minimum national poundage quota set by legislation. This decline in use is attributed to changes in consumers’ tastes because of concern about fat in the diet and is represented by a shift in the demand curve from D1 to D2. Although demand for peanuts declined and only Q3 quantity of peanuts would be purchased on the domestic market at the quota support price P1, the national poundage quota was fixed by legislation at Q1.
Therefore, USDA could not reduce the quota and had to buy Q1 minus Q3 quantity of surplus peanuts, increasing the costs associated with the program. To reduce these costs while maintaining a support price of P1, USDA would have had to reduce the quota to Q3 quantity of peanuts—the quantity that would have been purchased at the quota support price P1. Under the 1996 farm act, the legislatively set minimum national poundage quota was eliminated and the poundage quota was reduced. The quota did not need to be reduced to Q3, however, because the quota support price was also reduced—from P1 to P2. The new quota was set at Q2, the quantity that would be purchased by the market at the lower support price, P2. These changes reduce the possibility that the government will have to purchase surplus peanuts. Under this scenario, buyers purchase a larger quantity of peanuts at a lower price than under prior legislation, even though the quota has been lowered. If there were no program, however, the quantity purchased would be even greater—Qe—and the price even lower—Pe. For this reason, further reductions in the quota support price for peanuts, if made, may lower U.S. prices. At the request of the Chairman of the House Committee on the Budget, we reviewed seven commodity programs—cotton, rice, wheat, feedgrains, oilseeds, peanuts, and sugar—to determine how certain support provisions that remain operative under the 1996 farm act affect U.S. commodity prices in comparison with world prices. The world price must be analyzed on a commodity-by-commodity basis because currently there are only proxies for world prices. For this review, we used USDA’s proxies for the world price for cotton, rice, wheat, feedgrains, and oilseeds. The world price for peanuts is derived from the price quoted for U.S. peanuts in Rotterdam, adjusted for the cost of shelling and transportation back to the United States. 
The world price for sugar is the Number 11 contract price as traded on the New York Coffee, Sugar, and Cocoa Exchange (f.o.b. Caribbean) for raw cane sugar. For this review, when analyzing U.S. prices, we used prices that producers receive for cotton, rice, wheat, feedgrains, and oilseeds. In conducting our review, we obtained data from USDA on payments made under the programs for cotton, rice, wheat, feedgrains, and oilseeds, as well as information on how the alternative repayment rates are calculated. We also spoke with representatives of USDA’s Commodity Credit Corporation, Economic Research Service, Farm Service Agency, Foreign Agricultural Service, National Agricultural Statistics Service, Office of the Chief Economist, and county offices. We also spoke to officials from the World Bank, academic experts, industry and trade representatives, and agricultural commodity consultants. We reviewed various economic and international trade studies conducted by universities, management consulting groups, USDA, and international agencies. We conducted the following analyses to determine if the marketing loan provisions prevent loan rates from acting as price floors and allow U.S. prices to fall to levels that are closer to adjusted world prices. For cotton and rice, we analyzed USDA’s proxies for weekly world prices for crop years 1986 through 1995 and the way in which these prices were converted to the adjusted world prices used for the marketing loan provisions. To understand how the conversions were made, we spoke to officials at the Farm Service Agency. We also analyzed weekly spot market prices for cotton and producer prices for rice for the same period to understand the relationship between the adjusted world price and U.S. prices. To adjust prices for inflation, we used the gross domestic product implicit price deflator, which is the generally accepted method for determining real prices. We also identified other program and market factors that affect U.S.
prices for cotton and rice. To make the same determination for wheat, feedgrains, and oilseeds, we obtained data on marketing loan benefits from USDA’s Kansas City Management Office to determine the level and general distribution of payments for crop years 1993 through 1995. For corn, we also analyzed posted county prices, loan rates, and market price information to understand the relationship between these prices for crop year 1994. We selected corn for our detailed analysis because this was the only commodity of this grouping for which meaningful price data were available. We recognize that our analysis of historical price data to determine the effectiveness of the marketing loan provisions may be limited in its applicability to the future. This is because the 1996 farm act has either eliminated or changed many of the program provisions that were in place in the past. To determine the effect of lower loan rates on the relationship between U.S. and world prices, we spoke with USDA officials, including agricultural economists, and other agricultural economists who are specialists in each of the commodities we reviewed. We also reviewed the literature on this question. To determine the effect of a lower loan rate on step 2 payments, we interviewed and obtained documents from USDA officials and spoke to officials from the National Cotton Council and the International Cotton Advisory Committee, and to a cotton industry official. To determine the impacts of the recent changes in the timing of step 2 payments on the program’s effectiveness, we reviewed regulations and reports from USDA and others and spoke to officials at USDA, the National Cotton Council, and the International Cotton Advisory Committee, and to a cotton industry official. To identify additional changes that could be made to make the peanuts and sugar programs more market-oriented, we reviewed legislation and regulations, as well as reports from USDA. 
We also interviewed officials at USDA, in academia, commodity consulting groups, the American Sugar Alliance, and representatives of sugar grower and processor associations. We did not independently verify the data used in this report. We conducted our review from July 1996 through January 1997 in accordance with generally accepted government auditing standards.

Juliann M. Gerkens, Assistant Director
Jay R. Cherlow, Assistant Director for Economic Analysis
Carol E. Bray, Senior Economist
Barbara J. El Osta, Senior Economist
Anu K. Mittal, Senior Evaluator
Karla J. Springer, Senior Evaluator
|
Pursuant to a congressional request, GAO reviewed the impact of support provisions on selected commodity prices, focusing on: (1) whether marketing loan provisions prevent loan rates from acting as price floors and whether they allow U.S. prices to fall to levels closer to world prices; (2) the effect lower loan rates would have on the relationship between U.S. and world prices; (3) the effect of a lower loan rate on step 2 payments for cotton exports and the impact of recent changes in timing of payments on the program's effectiveness; and (4) the steps that could be taken to make the peanut and sugar programs more market-oriented. GAO found that: (1) when alternative repayment rates, which are derived from the U.S. Department of Agriculture's (USDA) proxies for world prices, are near or below the loan rates, the marketing loan provisions may prevent the loan rates from serving as price floors; (2) lowering the loan rates has little if any effect on U.S. prices when alternative repayment rates are above the loan rates; (3) however, when alternative repayment rates are near or below the loan rates, the effect on U.S. prices of lowering the loan rates differs by commodity; (4) for cotton and rice, the availability of nonrecourse loans, in combination with other program and market factors, keeps U.S. prices significantly higher than adjusted world prices; (5) therefore, lowering the loan rates is likely to allow U.S. prices to fall to levels that are closer to adjusted world prices; (6) for wheat, feedgrains, and oilseeds, most experts assert that the marketing loan provisions will work as intended to overcome the price-supporting effects of the nonrecourse loans; (7) for these crops, lowering the loan rates would have little if any impact on U.S. prices; (8) to the extent that a lower loan rate results in lower U.S.
cotton prices, step 2 payments would be reduced but not eliminated; (9) step 2 payments would continue to be made because the marketing loan provisions have not been able to overcome the cotton program's other features, such as government-paid storage, that help keep U.S. cotton prices higher than adjusted world prices; (10) however, because of recent changes in how USDA makes step 2 payments to exporters, these payments may no longer directly offset higher U.S. prices and therefore may be less effective in enhancing exports; (11) further changes can be made to make the peanut and sugar programs more market-oriented; (12) additional reductions in the quota support price for peanuts will lower U.S. prices and increase economic efficiency; (13) an increase in the tariff-rate import quota for sugar, allowing more sugar to be imported at the lower tariff rate, or its elimination entirely (no import restrictions), would result in lower U.S. prices; and (14) once prices fall to the level of the loan rate, reductions in the loan rate would be necessary to reduce prices further.
|
For any organization that depends on information systems to carry out its mission, protecting those systems that support critical operations and infrastructures is of paramount importance. Without proper safeguards, the speed and accessibility that create the enormous benefits of the computer age may allow individuals and groups with malicious intent to gain unauthorized access to systems and use this access to obtain sensitive information, commit fraud, disrupt operations, or launch attacks against other sites. Concerns about attacks from individuals and groups, including terrorists, are well founded for a number of reasons, including the dramatic increase in reports of security incidents, the ease of obtaining and using hacking tools, the steady advance in the sophistication and effectiveness of attack technology, and the dire warnings of new and more destructive attacks to come. Given these threats, computer-supported federal operations are at risk, and a variety of critical operations face the possibility of disruption, fraud, and inappropriate disclosure. We have designated information security as a governmentwide high-risk area since 1997—a designation that remains today. To address these concerns, Congress enacted the Federal Information Security Management Act of 2002 to strengthen the security of information collected or maintained and information systems used or operated by federal agencies, or by a contractor or other organization on behalf of a federal agency. The act provides a comprehensive framework for ensuring the effectiveness of information security controls over information resources that support federal operations and assets.
The act requires each agency to develop, document, and implement an agencywide information security program for the information and systems that support the operations of the agency as well as information systems used or operated by an agency or by a contractor of an agency or other organization on behalf of an agency. Established by the Federal Reserve Act of 1913, the Federal Reserve System consists of a 7-member Board of Governors with headquarters in Washington, D.C.; 12 Reserve Districts, each with its own FRB located in a major city in the United States; and 25 bank branches. The Federal Reserve System differs from other entities established to carry out public purposes in that it is part public and part private. Although the Board is a government agency, the banks are not. Also, the Federal Reserve System structure does not follow the familiar “top-down” hierarchy, with all policymaking authorities centralized in Washington, D.C. The Board and the FRBs have shared responsibilities and policymaking authority in many areas of operation. The FRBs play a significant role in the processing of marketable Treasury securities. As fiscal agents of Treasury, the FRBs receive bids, issue securities to awarded bidders, collect payments on behalf of Treasury, and make interest and redemption payments from Treasury’s account to the accounts of security holders. During fiscal year 2005, the FRBs processed debt held by the public of about $4.5 trillion in issuances, about $4.2 trillion in redemptions, and about $128 billion in interest payments. Certain FRBs also provide IT services in support of Treasury auctions, operating and maintaining the Treasury mainframe auction application in which bid submissions are recorded and the auction results calculated. In addition to the Treasury mainframe auction application, the FRBs also operate and maintain two Treasury distributed-based auction applications. 
These applications provide the user interface to the mainframe auction application through the Federal Reserve networks. One of the distributed-based auction applications serves approximately 670 users, allowing them to participate in public (primarily noncompetitive) auctions via the Internet. The other distributed-based auction application serves 22 primary broker/dealers in competitive auctions, who connect to it via workstations that the FRBs installed in the dealers’ offices. One nonprimary broker/dealer is allowed to access this distributed-based auction application via the Internet on a trial basis. These distributed-based auction applications transmit information on the tenders/bids, including the name of the submitter, the par amount of securities being tendered or awarded, the discount rate being tendered or awarded, and the clearing bank. Multiple Federal Reserve organizations are involved in the operation and maintenance of these applications, including Federal Reserve Information Technology (FRIT)—the organization that provides entitywide IT support services for the Federal Reserve System. Other systems supporting Treasury financial reporting are mainframe-based applications and are used to record securities purchased by financial institutions, provide an automated system for investors to buy securities directly from Treasury and manage their Treasury securities portfolios, and monitor and track all cash received and disbursed for debt transactions that the FRBs process. The objective of our review was to assess the effectiveness of information system controls in ensuring the confidentiality, integrity, and availability of Treasury’s financial and sensitive auction information on key mainframe and distributed-based systems that the FRBs maintain and operate on behalf of BPD and that are relevant to the Schedule of Federal Debt.
Our assessment included a review of the supporting network infrastructure that interconnects the mainframe and distributed-based systems. To accomplish this objective, we used elements of our Federal Information System Controls Audit Manual to evaluate information system controls within the FRB control environment. We concentrated our efforts primarily on the evaluation of logical access controls over the FRBs’ distributed-based auction applications, because of their recent implementation, and over the Federal Reserve network infrastructure that supports these applications. To evaluate these applications, we reviewed information system controls over network resources used by the applications and focused on the following control domain areas: identification and authentication; authorization; boundary protection; cryptography; logging, auditing, and monitoring; and configuration management and assurance. Our review included observations of Treasury auction operations and an examination of automated programs related to the auction process; system data collected by FRB employees in our presence and at our direction; system and infrastructure documentation; source code for the distributed-based auction applications; and configuration files of firewalls, routers, and switches. We also examined policy and procedural documentation for the FRBs’ distributed computing security and network security, interviewed information technology managers and staff, and familiarized ourselves with the operations of the general auditors and with the results of their recent work applicable to our audit. In addition, we performed limited application controls testing over the Treasury mainframe auction application and other key mainframe applications that support Treasury’s financial reporting.
Specifically, we evaluated application controls associated with access (segregation of duties, least privilege, and identification and authentication); controls over master data; transaction data input (data validation and edit checks); transaction data processing (data integrity and logs); and transaction data output (output reconciliation and review). To evaluate the effectiveness of these controls, we obtained system configuration information using GAO-prepared analytical tools run by FRB IT staff, and verified critical operating system logging and access control information for relevant system configurations. Also, using GAO-prepared scripts, we obtained information on operating system utilities with assistance from FRB IT staff. We discussed with officials from the staff of the Board of Governors and key Federal Reserve information security representatives and officials whether information security controls were in place, adequately designed, and operating effectively. We also discussed with these individuals the results of our review. We performed our work at the FRBs that operate and maintain the mainframe and distributed-based financial reporting and auction applications we selected for review. We performed our work from March 2005 through May 2006 in accordance with generally accepted government auditing standards. Although the FRBs established and implemented many controls to protect the mainframe applications that they maintain and operate on behalf of BPD, they did not consistently implement controls to prevent, limit, or detect unauthorized access to sensitive data and computing resources for the distributed-based systems and network environment that support Treasury auctions. As a result, increased risk exists that unauthorized and possibly undetected use, modification, destruction, and disclosure of certain sensitive auction information could occur. Furthermore, other FRB applications that share common network resources may also face increased risk. 
These information system control weaknesses existed, in part, because the FRBs did not have (1) an effective management structure for coordinating, communicating, and overseeing information security activities across bank organizational boundaries and (2) an environment to sufficiently test the auction applications. The FRBs had generally implemented effective information system controls for the mainframe applications that they operate and maintain on behalf of BPD in support of Treasury’s auctions and financial reporting. Examples of these controls include multiple layers of procedural and technical controls over mainframe systems, effective isolation of mainframe systems having different control requirements, and continuous independent auditing of mainframe technical controls. In addition, FRIT upgrades the software for the mainframe systems on an annual schedule. Each year, a new logical partition of the mainframe is created with the upgraded operating system and vendor-supplied software. This logical partition is then tested in a defined process, which is subject to an annual audit, and there is continuous monitoring of the production logical partitions. Although the mainframe control environment was generally effective, the FRBs had not effectively implemented information system controls for the distributed-based systems and supporting network environment relevant to Treasury auctions. More specifically, the FRBs did not consistently (1) identify and authenticate users to prevent unauthorized access; (2) enforce the principle of least privilege to ensure that authorized access was necessary and appropriate; (3) implement adequate boundary protections to limit connectivity to systems that process BPD business; (4) apply strong encryption technologies to protect sensitive data in storage and on the Federal Reserve networks; (5) log, audit, or monitor security-related events; and (6) maintain secure configurations on servers and workstations. 
Identification and Authentication

A computer system must be able to identify and differentiate among users so that activities on the system can be linked to specific individuals. When an organization assigns unique user accounts to specific users, the system distinguishes one user from another—a process called identification. The system also must establish the validity of a user’s claimed identity through some means of authentication, such as a password, that is known only to its owner. The combination of identification and authentication—such as user account/password combinations—provides the basis for establishing individual accountability and for controlling access to the system. The National Institute of Standards and Technology states that information systems should employ multifactor authentication, such as a combination of passwords, tokens, and biometrics. The FRBs did not adequately identify and authenticate users. For example, due to the weak design of password reset functionality for one of the distributed-based auction applications, anyone on the Internet could potentially change the password for a user in the application by having only his or her userID. Recognizing the severity of this vulnerability, the FRBs took immediate steps to correct this weakness. The FRBs also designed and implemented the distributed-based auction applications to rely on only one means of authentication, rather than a combination of authentication factors, for controlling access. Furthermore, the FRBs did not replace a well-known vendor-supplied password on one of their systems, thereby increasing the risk that an unauthorized individual could guess the password and gain access to the system. Authorization is the process of granting or denying access rights and privileges to a protected resource, such as a network, system, application, function, or file.
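The password-reset weakness described above stemmed from a flow that trusted the userID alone. The following is a minimal, hypothetical sketch of a stronger reset flow that additionally requires a single-use token (in practice delivered out of band, such as by e-mail or phone); the function and variable names are illustrative and are not drawn from the FRBs’ applications.

```python
import hmac
import secrets

# Hypothetical single-use reset tokens, keyed by userID. In a real system
# this state would live in a protected server-side store.
_reset_tokens = {}

def issue_reset_token(user_id):
    """Generate a single-use token; it would be delivered out of band."""
    token = secrets.token_urlsafe(32)
    _reset_tokens[user_id] = token
    return token

def reset_password(user_id, presented_token, new_password, accounts):
    """Allow the reset only when the presented token matches the issued one.

    The token is consumed on use, so it cannot be replayed; knowing a
    userID alone is never sufficient to change the password.
    """
    expected = _reset_tokens.pop(user_id, None)
    if expected is None or not hmac.compare_digest(expected, presented_token):
        return False
    accounts[user_id] = new_password
    return True
```

The comparison uses `hmac.compare_digest` rather than `==` so that the check takes the same time regardless of where the strings differ, avoiding a timing side channel.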
A key component of granting or denying access rights is the concept of “least privilege.” Least privilege is a basic underlying principle for securing computer resources and data. The term means that users are granted only those access rights and permissions that they need to perform their official duties. To restrict legitimate users’ access to only those programs and files that they need to do their work, organizations establish access rights and permissions. User rights are allowable actions that can be assigned to users or to groups of users. File and directory permissions are rules that are associated with a particular file or directory and regulate which users can access them and the extent of that access. To avoid unintentionally giving users unnecessary access to sensitive files and directories, an organization must give careful consideration to its assignment of rights and permissions. The FRBs did not implement sufficient authorization controls to limit user access to distributed-based computer resources. The distributed-based auction applications had excessive database privileges that were granted explicitly as well as inherited through permissions given to all users. As a result, malicious users could use these excessive privileges to exploit other vulnerabilities in the applications. In addition, the FRBs had granted users administrative privileges on their workstations, even though most users did not require this level of access. Granting unnecessary access privileges increases the risk that a workstation could be successfully compromised and then used to attack other FRB resources. As a result, the unnecessary level of access granted to computer resources provides opportunities for individuals to circumvent security controls to deliberately or inadvertently read, modify, or delete critical or sensitive information. Boundary protections demarcate a logical or physical boundary between protected information and systems and unknown users. 
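The least-privilege principle discussed above can be sketched as a deny-by-default rights table: an action is permitted only if it was explicitly granted to the user's role. The role and action names below are hypothetical, not those of the FRB systems.

```python
# Deny-by-default rights table illustrating least privilege: each role
# receives only the actions it needs. Roles and actions are illustrative.
ROLE_RIGHTS = {
    "bidder":   {"submit_tender", "view_own_results"},
    "auditor":  {"read_audit_log"},
    "operator": {"submit_tender", "view_own_results", "manage_auction"},
}

def is_authorized(role, action):
    """Allow an action only if it was explicitly granted to the role;
    unknown roles and ungranted actions are denied by default."""
    return action in ROLE_RIGHTS.get(role, set())
```

Granting broad privileges (for example, administrative rights on every workstation) amounts to adding every action to every role in such a table, which is exactly what least privilege forbids.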
Organizations physically allocate publicly accessible information system components to subnetworks with separate, physical network interfaces, and prevent public access into their internal networks, except as authorized. Unnecessary connectivity to an organization’s network not only increases the number of access paths that must be managed and the complexity of the task, but also increases the risk in a shared environment. The FRBs did not consistently implement adequate boundary protections to limit connectivity to applications in the shared network environment. These applications include those that the FRBs operate and maintain on behalf of BPD and other FRB internal applications and systems that serve a variety of business areas with differing security requirements. In addition, the internal network was not segregated to restrict access to internal systems, and management of network devices and applications was conducted “in-band.” These practices increase the risk that individuals could disrupt or gain unauthorized access to sensitive auction data and other Federal Reserve computing resources. In some cases, the FRBs implemented effective boundary protection controls. For example, the remote access system used Federal Information Processing Standard compliant tokens for authentication and enforced a restriction that prevented simultaneous communication with the internal Federal Reserve network and the Internet. Cryptography underlies many of the mechanisms used to enforce the confidentiality and integrity of critical and sensitive information. Encryption—one type of cryptography—is the process of converting readable or plaintext information into unreadable or ciphertext information using a special value known as a key and a mathematical process known as an algorithm. The strength of a key and an algorithm is determined by their length and complexity—the longer and more complex they are, the stronger they are.
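Returning to the boundary-protection findings above: one way to segregate an internal network is to restrict sensitive operations, such as device management, to an explicit allow-list of internal subnets. The sketch below illustrates the idea; the address ranges are hypothetical examples, not actual Federal Reserve networks.

```python
import ipaddress

# Hypothetical allow-list: management traffic is accepted only from
# designated internal subnets. These ranges are illustrative examples.
ALLOWED_MGMT_SUBNETS = [
    ipaddress.ip_network("10.10.0.0/16"),    # hypothetical management LAN
    ipaddress.ip_network("192.168.5.0/24"),  # hypothetical admin segment
]

def may_manage(source_ip):
    """Deny by default; permit only addresses inside an allowed subnet."""
    addr = ipaddress.ip_address(source_ip)
    return any(addr in net for net in ALLOWED_MGMT_SUBNETS)
```

A check of this kind, enforced at a firewall or router rather than in application code, is what keeps “in-band” management traffic from being reachable across the whole shared environment.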
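As a minimal illustration of applying a stronger algorithm to stored passwords, the sketch below uses a salted, iterated hash (PBKDF2-HMAC-SHA256) instead of a weak or reversible format. The iteration count and function names are illustrative assumptions, not the FRBs’ configuration.

```python
import hashlib
import hmac
import os

# Illustrative work factor; real deployments tune this to current guidance.
ITERATIONS = 200_000

def hash_password(password, salt=None):
    """Return (salt, digest); a fresh random salt per password defeats
    precomputed lookup tables."""
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password, salt, expected_digest):
    """Recompute the digest with the stored salt and compare in constant time."""
    _, digest = hash_password(password, salt)
    return hmac.compare_digest(digest, expected_digest)
```

Because the stored value is a one-way, salted, iterated digest, an attacker who reads it cannot recover the password directly and must pay the full iteration cost for every guess.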
The FRBs did not appropriately apply strong encryption technologies to sensitive data and network traffic. In one of the distributed-based auction applications, weak encryption algorithms were used to protect sensitive data, such as the user’s session information and application configuration files. Also, a weak encryption format was used to store and transmit certain passwords. These weaknesses could allow an attacker to view data and use that knowledge to gain access to sensitive information, including auction data. Determining what, when, and by whom specific actions were taken on a system is crucial to establishing individual accountability, investigating security violations, and monitoring compliance with security policies. Organizations accomplish this by implementing system or security software that provides an audit trail for determining the source of a transaction or attempted transaction and for monitoring users’ activities. How organizations configure the system or security software determines what system activity data are recorded into system logs and the nature and extent of the audit trail information that results. Without sufficient auditing and monitoring, organizations increase the risk that they may not detect unauthorized activities or policy violations. Furthermore, the National Institute of Standards and Technology guidance states that organizations should deploy centralized servers and configure devices to send duplicates of their log entries to the centralized servers. The FRBs did not sufficiently log, audit, or monitor events related to the distributed-based auction application process. For example, the intrusion detection system had not been customized to detect any abnormal communication among application components that might indicate an attack was in progress. In addition, no centralized logging was performed for certain servers we examined.
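The centralized audit trail described above can be sketched as events from many servers recording who did what, when, and with what outcome, appended to one shared store so that monitors can correlate activity across hosts. The event fields and function names below are hypothetical.

```python
import datetime

# Hypothetical central store; real deployments would forward log entries
# to dedicated, access-controlled logging servers.
CENTRAL_LOG = []

def audit(server, user, action, success):
    """Record a security-relevant event in the shared audit trail."""
    CENTRAL_LOG.append({
        "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "server": server,
        "user": user,
        "action": action,
        "success": success,
    })

def failed_events(user):
    """Monitoring example: a user's failed actions, seen across all servers."""
    return [e for e in CENTRAL_LOG if e["user"] == user and not e["success"]]
```

Without this aggregation, repeated failed logins by one account spread across several servers look unremarkable on each server individually and only stand out in the combined view.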
As a result, there was a higher risk that unauthorized system activity would not be detected in a timely manner. To protect an organization’s information, it is important to ensure that only authorized application programs are placed in operation. This process, known as configuration management, is accomplished by instituting policies, procedures, and techniques to help ensure that all programs and program modifications are properly authorized, tested, and approved. Patch management, a component of configuration management, is an important element in mitigating the risks associated with software vulnerabilities. When a software vulnerability is discovered, the software vendor may develop and distribute a patch or work-around to mitigate the vulnerability. Up-to-date patch installation can help mitigate vulnerabilities associated with flaws in software code that could be exploited to cause significant damage, ranging from Web-site defacement to the loss of control of entire systems, thereby enabling malicious individuals to read, modify, or delete sensitive information; disrupt operations; or launch attacks against other organizations’ systems. Configuration assurance is the process of verifying the correctness of the security settings on hosts, applications, and networks and maintaining operations in a secure fashion. The FRBs did not maintain secure configurations on the distributed-based auction application servers and workstations we reviewed. Key servers and FRB workstations were missing patches that could prevent an attacker from gaining remote access. In addition, the FRBs were running a database management system and network devices that were no longer supported by the vendor. Unsupported products greatly increase the risk of security breaches, since the vendor often does not provide patches for known vulnerabilities. As a result of these weaknesses, there is an increased risk of a successful attack on and compromise of the related auction process.
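The patch-currency check implied by the discussion above can be sketched as comparing each host’s installed software versions against a required baseline and flagging anything that falls behind. The package names and version numbers are hypothetical.

```python
# Hypothetical required baseline: minimum acceptable (major, minor) version
# for each package; anything below it is considered unpatched.
BASELINE = {"webserver": (2, 4), "database": (9, 1)}

def missing_patches(host_inventory):
    """Return the packages on a host whose installed version is below
    the baseline, sorted for stable reporting."""
    stale = []
    for package, installed in host_inventory.items():
        required = BASELINE.get(package)
        if required is not None and installed < required:
            stale.append(package)
    return sorted(stale)
```

Run periodically against every server and workstation, a comparison like this surfaces both missing patches and products so old that no supported version exists in the baseline at all.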
The previously mentioned information system control weaknesses existed, in part, because the FRBs did not have (1) an effective management structure for coordinating, communicating, and overseeing information security activities across bank organizational boundaries and (2) an environment to sufficiently test the auction applications. Implementing effective information security management practices across the enterprise is essential to ensuring that controls over information and information systems work effectively on a continuing basis, as described in our May 1998 study of security management best practices. An important factor in implementing effective practices is linking them in a cycle of activity that helps to ensure that information security policies address current risks on an ongoing basis. An effective management structure is the starting point for coordinating and communicating the continuous cycle of information security activities, while providing guidance and oversight for the security of the entity as a whole. One mechanism organizations can adopt to achieve effective coordination and communication, particularly in organizations where information security management is decentralized, is to establish a central security management office or group to serve as a facilitator to individual business units and senior management. A central security group serves as a locus of knowledge and expertise on information security and coordinates agencywide security-related activities. This group is also accessible to security specialists at the various organizational elements within the agency. Such a management structure is especially important to manage the inherent risks associated with a highly distributed, interconnected network-based computing environment and to help ensure that weaknesses in one system do not place the entire entity’s information assets at undue risk. 
In addition, as part of this management structure, clearly defined roles and responsibilities for all security staff should be established and coordination of responsibilities among individual security staff should be developed and communicated to ensure that, collectively, information security activities are effective. The FRBs did not have an effective management structure for coordinating, communicating, and overseeing their decentralized information security management activities that support Treasury auction systems and the supporting network infrastructure. The banks operate independently and autonomously of one another, yet they share many of the same systems and computing resources. Because the FRBs did not have an effective information security management structure over the distributed-based systems, information security activities were not adequately coordinated among the banks and with the various IT groups involved in providing IT support services, including FRIT—the organization that provides entitywide IT support services. For example, information management activities associated with one of the distributed-based auction systems were divided among 10 IT groups, as shown in figure 1. In addition, no IT group was responsible for coordinating and communicating enterprisewide security operations support or oversight services. Consequently, the various organizations responsible for implementing information security did not have a good understanding or adequate visibility of the activities that other groups performed, nor did they always make appropriate decisions about information security for the network environment as a whole. As a result, there was no enterprisewide view of information security, and decisions regarding information security activities were not always optimal or based on a full understanding of the shared network environment supporting the Treasury auction process.
For example:

- One IT group responsible for database operations made information security decisions regarding the distributed-based auction applications on the assumption that they were operating in a “trusted network,” which resulted in the omission of controls that should have been in place.
- One IT group made decisions about the operations and maintenance of the distributed-based auction applications without full or accurate knowledge of the relevant computing environment.
- No IT group had responsibility for making a decision to upgrade the distributed-based auction database product, although all concerned agreed that an upgrade was needed.
- Servers that support the distributed-based auction applications were supposed to be identical to ensure real-time continuity of operations, but our testing showed that, as implemented, they were not.

The Federal Reserve recognizes that a need exists for comprehensive approaches to managing information security, and that the management structure and processes that served its mainframe-centric environment in the past are not adequate for the distributed, interconnected environment supporting its various lines of business today. The Federal Reserve has an initiative under way to establish an information security architecture framework that is intended to integrate enterprise security activities, including enterprise access management, domain boundary, data security, configuration management, and information assurance. If effectively implemented, this initiative could provide the FRBs with an enterprisewide operational and technological view of their computing environment, including the interdependencies and interrelationships across the entity’s business operations and underlying IT infrastructure and applications that support these operations.
However, until a more comprehensive and enterprisewide approach to security management is adopted, the FRB organizations that support Treasury auction systems will be limited in their ability to ensure the confidentiality, integrity, and availability of certain sensitive auction information and other resources for systems that they maintain and operate. The FRBs did not have a test environment to evaluate system changes and enhancements to the distributed-based auction applications, which limited the rigor of the testing that could be performed. A separate test environment that models the production environment is critical to ensuring that systems and system enhancements are adequately tested and do not adversely affect production. However, the FRBs did not have an isolated testing area that was functionally separate from the production network infrastructure and other FRB business applications. As a result, some application security testing was performed during very limited scheduled outages of the production systems involved, and some test procedures were never performed because the risk to production systems could not be effectively mitigated. Although the FRBs have implemented many controls to protect the mainframe information systems that they maintain on behalf of BPD relevant to the Schedule of Federal Debt, information security control weaknesses related to the distributed-based auction systems and supporting network environment exist at the Federal Reserve that place certain sensitive auction information at risk. The weaknesses in identification and authentication; authorization; boundary protection; cryptography; logging, auditing, and monitoring; and configuration management and assurance affect not only the distributed-based auction systems but also could affect other FRB systems residing in the shared network environment. 
With control over and responsibility for Treasury’s auction information systems spread across the FRBs, an effective management structure for coordinating, communicating, and overseeing information security activities across bank organizational boundaries becomes even more important. In addition, more robust testing of security controls over the auction applications is imperative to help provide more timely detection of vulnerabilities. Until the Federal Reserve takes steps to mitigate these weaknesses, it has increased risk that sensitive auction data would not be adequately protected against unauthorized disclosure, modification, or destruction. To help strengthen the FRBs’ information security over key distributed-based auction systems, we recommend that you take the following two steps: establish a management structure that ensures decentralized information security activities are effective and implement an application test environment for the auction systems. We are also making additional recommendations in a separate report with limited distribution. These recommendations consist of actions to be taken to correct the specific information security weaknesses we identified that are related to identification and authentication; authorization; boundary protection; cryptography; logging, auditing, and monitoring; and configuration management and assurance. In providing written comments on a draft of this report (reprinted in app. I), the Director, Division of Reserve Bank Operations and Payment Systems of the Federal Reserve System, generally agreed with the contents of the draft report and stated that the Federal Reserve has already taken corrective actions to remedy many of the reported findings and will continue to apply its risk-based assessment framework to determine appropriate information security controls or compensating measures to address the remaining findings. 
The director also described completed, ongoing, and planned actions to address systemic and organizational issues that contributed to the report’s findings, including actions to improve the Federal Reserve’s ability to coordinate and oversee its operational and technical environments and to replace its existing auction applications and operational infrastructure. In addition, the director commented that the Federal Reserve and Treasury plan to validate the integrity of the new application and infrastructure at several points during the development of the application; a key aspect of this validation is to ensure that the findings in this report are addressed. This report contains recommendations to you. As you know, 31 U.S.C. 720 requires that the head of a federal agency submit a written statement of the actions taken on our recommendations to the Senate Committee on Homeland Security and Governmental Affairs and to the House Committee on Government Reform not later than 60 days from the date of the report and to the House and Senate Committees on Appropriations with the agency’s first request for appropriations made more than 60 days after the date of this report. Because agency personnel serve as the primary source of information on the status of recommendations, GAO requests that the agency also provide us with a copy of your agency’s statement of action to serve as preliminary information on the status of open recommendations. We are sending copies of this report to the Chairmen and Ranking Minority Members of the Senate Committee on Homeland Security and Governmental Affairs; the Subcommittee on Federal Financial Management, Government Information, and International Security, Senate Committee on Homeland Security and Governmental Affairs; and the Chairmen and Ranking Minority Members of the House Committee on Government Reform and the Subcommittee on Government Management, Finance, and Accountability, House Committee on Government Reform. 
In addition, we are sending copies to the Fiscal Assistant Secretary of the Treasury and the Deputy Director for Management of OMB. We will also make copies available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact Gregory C. Wilshusen at (202) 512-6244 or [email protected], Keith A. Rhodes at (202) 512-6412 or [email protected], or Gary T. Engel at (202) 512-8815 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II. On behalf of Chairman Bernanke, thank you for the opportunity to comment on the GAO’s report titled Information Security: Federal Reserve Needs to Address Treasury Auction Systems. The GAO’s audit of the Treasury auction systems was conducted as part of its review of the Bureau of the Public Debt’s FY 2005 Schedules of Federal Debt. The report identified a number of weaknesses in Reserve Bank computer-based information security control environments in the distributed computing and network environments that support the Treasury auction processes. We have already taken corrective actions to remediate many of the findings in the report, and we will continue to apply our risk-based assessment framework to determine appropriate information security controls or compensating measures to address remaining findings. The Reserve Banks are taking action to address systemic and organizational issues that contributed to the report’s findings. We met with the GAO review team several times to discuss our plans to further strengthen our information security architecture and to correct the root causes of the findings so that we avoid recurring weakness in controls. 
The report recognizes that successful implementation of the strengthened architecture could improve our ability to manage our information security operational and technical environments. We have also taken actions to improve our ability to coordinate and oversee our complex IT systems effectively. The Reserve Banks recently realigned their information security governance structure and designated the Director of the Reserve Banks’ Federal Reserve Information Technology organization (FRIT) as the focal point for enterprise-wide information security. All operational units within the Federal Reserve Banks are responsible for confirming compliance with established information security operational practices and information security policies and standards with the Director of FRIT. As part of this realignment, FRIT established a new function, National Information Security Assurance (NISA), which is responsible for monitoring end-to-end information security compliance with security standards, including software currency, across the Federal Reserve. Further, NISA will maintain an aggregate view of information security risk across all risk management programs, including internal audit and external sources, such as the GAO. The Treasury auction applications reviewed in this report were developed starting in 1998 when web technology, tools, and development practices were substantially less evolved than those available today. While security methods for web-based applications have improved, so has the sophistication of criminals attempting to compromise them. The Treasury and the Federal Reserve are currently undertaking a significant development initiative to replace the existing applications and operational infrastructure by year-end 2007. The design of the new application and infrastructure is based on current sound practices that will ensure a well-managed and well-controlled operating environment.
The Federal Reserve and Treasury plan to validate the integrity of the application and infrastructure at several points in the project using internal and external technical resources. A key aspect of this validation is ensuring the GAO’s findings are addressed. The new auction applications will be operated within the Federal Reserve’s strengthened information security architecture, and information security compliance will be monitored through our improved information security governance structure. As your report notes, this review specifically focused on information security controls in the distributed computing and network environments supporting the Treasury auction process. The GAO’s review did not consider the end-to-end risk control environment that would include management and business operational controls. This additional layer of control is critical to ensuring the integrity of the Schedules of Federal Debt. The information security vulnerabilities the GAO identified did not affect its opinion in its report titled Financial Audit: Bureau of the Public Debt’s Fiscal Years 2004 and 2005 Schedules of Federal Debt. That report noted that effective internal controls over financial reporting and compliance with applicable laws and regulations were maintained. Although we consider the information security control vulnerabilities identified in the Treasury auction system report significant and warranting our serious attention, they should not be construed as allowing successful circumvention of Treasury auction management and business operational controls. We appreciate the quality of the GAO technical review and the time taken by the review team to brief Federal Reserve and Treasury staff thoroughly on the results of the review. The GAO team has also contributed to our remediation efforts by consulting with various Federal Reserve technical and management staff on the technical details underlying the findings in the report. 
In addition to the individuals named above, Ed Alexander, Lon Chin, Edward Glagola, David Hayes, Hal Lewis, Duc Ngo, Dawn Simpson, and Jenniffer Wilson, Assistant Directors, and Mark Canter, Dean Carpenter, Jason Carroll, West Coile, Debra Conner, Neil Doherty, Nancy Glover, Sharon Kittrell, Eugene Stevens, Henry Sutanto, Amos Tevelow, and Chris Warweg made key contributions to this report.
|
The Federal Reserve System's Federal Reserve Banks (FRB) serve as fiscal agents of the U.S. government when they are directed to do so by the Secretary of the Treasury. In this capacity, the FRBs operate and maintain several mainframe and distributed-based systems--including the systems that support the Department of the Treasury's auctions of marketable securities--on behalf of the department's Bureau of the Public Debt (BPD). Effective security controls over these systems are essential to ensure that sensitive and financial information is adequately protected from inadvertent or deliberate misuse, disclosure, or destruction. In support of its audit of BPD's fiscal year 2005 Schedule of Federal Debt, GAO assessed the effectiveness of information system controls in protecting financial and sensitive auction information on key mainframe and distributed-based systems that the FRBs maintain and operate for BPD. To do this, GAO observed and tested FRBs' security controls. In general, the FRBs had implemented effective information system controls over the mainframe applications they maintain and operate for BPD in support of Treasury's auctions and financial reporting. On the distributed-based systems and supporting network environment used for Treasury auctions, however, they had not fully implemented information system controls to protect the confidentiality, integrity, and availability of sensitive and financial information. The FRBs did not consistently (1) identify and authenticate users to prevent unauthorized access; (2) enforce the principle of least privilege to ensure that access was authorized only when necessary and appropriate; (3) implement adequate boundary protections to limit connectivity to systems that process BPD business; (4) apply strong encryption technologies to protect sensitive data both in storage and on its networks; (5) log, audit, or monitor security-related events; and (6) maintain secure configurations on servers and workstations. 
Without consistent application of these controls, the auction information and computing resources for key distributed-based auction systems remain at increased risk of unauthorized and possibly undetected use, modification, destruction, and disclosure. Other FRB applications that share common network resources may also be at increased risk. Contributing to these weaknesses in information system controls was the Federal Reserve's lack of (1) an effective management structure for coordinating, communicating, and overseeing information security activities across bank organizational boundaries and (2) an adequate environment in which to sufficiently test the security of its auction applications.
|
The Naval Inventory Control Point authorizes movement of its inventory from the Defense Logistics Agency, Navy-managed shipping and receiving activities, and Navy repair contractors. With prior approval from Naval Inventory Control Point item managers, Navy repair contractors can also use material located at their facilities. DOD procedures and Federal Acquisition Regulation procedures generally require that repair contractors establish and maintain an internal property control system for the control, use, maintenance, repair, protection, and preservation of government property in their possession. The Navy currently has 359 repair contractors using its Web-based inventory management system, known as the DOD Commercial Asset Visibility System. This system provides the Navy with asset reporting coverage for more than 95 percent of its commercial repair business. For fiscal year 2002, the most recent and complete data available at the time of our review, the Naval Inventory Control Point reported that 4,229 government-furnished material shipments (representing 4,301 items valued at approximately $115 million) had been shipped to its repair contractors. Table 1 shows the derivation of the sample size of our survey, including the number and value of shipments for which we received survey responses. DOD requires the Navy to use a number of procedures to monitor items shipped to and received by repair contractors. First, the recipient of the material is responsible for notifying the Naval Inventory Control Point once an item has been received. If the Naval Inventory Control Point has not been provided a receipt within 45 days of shipment, it is required to follow up with the intended recipient. The rationale behind these requirements is that until receipt is confirmed, the exact status of the shipment is uncertain and therefore vulnerable to fraud, waste, or abuse.
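The 45-day follow-up requirement described above amounts to a simple date comparison against shipment records. The sketch below illustrates the check; the record layout, document numbers, and dates are hypothetical, not drawn from any actual Navy system.

```python
from datetime import date, timedelta

# Sketch of the DOD 45-day rule: flag any shipment that is past 45 days
# without a confirmed receipt. All records here are hypothetical.
FOLLOW_UP_DAYS = 45

shipments = [
    {"doc": "SHIP-0001", "shipped": date(2002, 1, 10), "receipt_confirmed": True},
    {"doc": "SHIP-0002", "shipped": date(2002, 1, 15), "receipt_confirmed": False},
    {"doc": "SHIP-0003", "shipped": date(2002, 3, 1), "receipt_confirmed": False},
]

def needs_follow_up(shipment, today):
    """True if the shipment is overdue for follow-up with the recipient."""
    overdue = today - shipment["shipped"] > timedelta(days=FOLLOW_UP_DAYS)
    return overdue and not shipment["receipt_confirmed"]

today = date(2002, 3, 20)
flagged = [s["doc"] for s in shipments if needs_follow_up(s, today)]
print(flagged)  # only the January 15 shipment is both unconfirmed and past 45 days
```

Until a shipment is flagged by such a check or its receipt is confirmed, its status remains uncertain, which is precisely the exposure the DOD procedure is meant to close.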
The Naval Inventory Control Point is also required by DOD and Navy procedures to submit quarterly reports to the Defense Contract Management Agency identifying all government-furnished material that has been provided to a contractor. These reports allow the Defense Contract Management Agency to independently verify that Navy repair contractors have accounted for all government-furnished material shipped to them. As a result, the Defense Contract Management Agency does not have to rely strictly on records provided by Navy repair contractors. In addition to these DOD and Navy procedures, the Navy, as a representative of the federal government, is also obligated to establish and maintain effective internal control systems. The Federal Managers’ Financial Integrity Act of 1982 requires the General Accounting Office to issue standards for internal control in government. According to these standards, internal control is defined as an integral component of an organization’s management that provides reasonable assurance that the following objectives are being achieved: (1) effectiveness and efficiency of operations, (2) reliability of financial reporting, and (3) compliance with applicable laws and regulations. A subset of these objectives is the safeguarding of assets. Internal control should be designed to provide reasonable assurance regarding prevention of or prompt detection of unauthorized acquisition, use, or disposition of an agency’s assets. Effective and efficient internal control activities help ensure that an agency’s control objectives are accomplished. The control activities are the policies, procedures, techniques, and mechanisms that enforce management’s directives, such as the process of adhering to DOD requirements for receipt of government-furnished material and issuance of quarterly government-furnished material status reports. DOD annually ships inventory valued at billions of dollars to various locations around the world. 
For years, DOD has had difficulty tracking this inventory from origin to destination. The accountability problem with shipped inventory is part of a more global issue. Since at least 1990, we have considered DOD’s inventory management to be a high-risk area because DOD’s inventory management procedures are ineffective. The lack of adequate controls over shipped inventory and the resulting vulnerability to undetected loss or theft have been major areas of concern. In March 1999, we reported that significant weaknesses existed at all levels of the Navy’s shipped inventory management structure, leading to potential theft or undetected losses of items and demonstrating inefficient and ineffective logistics management practices. We concluded that these weaknesses and the problems they created were primarily a result of the Navy not following its own procedures regarding controls over shipped inventory. In following up with the Navy, we found that the Navy had implemented many of our recommendations that have improved the accountability over its shipped inventory. According to Naval Inventory Control Point officials, at least 75 percent of their shipped inventory discrepancies have been resolved. Also, according to a June 2003 DOD Office of the Inspector General report, the Navy had taken actions to improve its procedures and controls to account for items repaired at commercial repair facilities. However, the DOD Office of the Inspector General found that additional improvements were needed to improve the monitoring and oversight of shipped inventory. The Naval Inventory Control Point and its repair contractors are not following DOD inventory management control procedures governing the accountability for and visibility of government-furnished material shipped to its repair contractors. As a result, this inventory is vulnerable to loss or theft. 
First, the Naval Inventory Control Point is not following a DOD procedure that requires repair contractors to acknowledge receipt of government-furnished material that has been shipped to them from the Navy’s supply system. Consequently, the Naval Inventory Control Point is not following another DOD procedure that requires it to follow up with its repair contractors that have not confirmed receipt of shipped material. Additionally, the Naval Inventory Control Point is not following DOD and Navy procedures that require that quarterly reports on the status of government-furnished material shipped to Navy repair contractors be provided to the Defense Contract Management Agency. Navy repair contractors are not routinely acknowledging receipt of government-furnished material that has been shipped to them from the Navy’s supply system. According to a DOD procedure, repair contractors must enter shipments into their inventory records and notify the inventory control point when material has been received. This material receipt acknowledgment is designed to maintain accountability for all shipped items, including government-furnished material that has been shipped to Navy repair contractors in usable condition from the Navy’s supply system. Additionally, the Navy must adhere to internal control standards outlined by the General Accounting Office, such as providing assurance that assets are safeguarded against unauthorized acquisition, use, or disposition. Navy repair contractors are not routinely adhering to the DOD procedure to acknowledge receipt of government-furnished material shipped to them because Naval Inventory Control Point officials are not requiring them to do so. By not requiring repair contractors to acknowledge receipt, the Naval Inventory Control Point does not follow up with its repair contractors within 45 days—as required by DOD procedure—when these repair contractors fail to confirm receipt of government-furnished material shipped to them.
Naval Inventory Control Point officials acknowledged that they only become aware that a contractor has not received an item if the contractor inquires about the shipment. Naval Inventory Control Point personnel provided several reasons why Navy repair contractors are not being required to notify the Naval Inventory Control Point of material receipt. They indicated that the Naval Inventory Control Point does not require its repair contractors to acknowledge receipt of government-furnished material because such material is provided with the expectation that it will immediately be consumed in the repair of other items. Naval Inventory Control Point officials also stated that submitting notification of receipt for this material might overstate the inventory levels in the DOD Commercial Asset Visibility System—the Navy’s inventory management system—because the system would show this material as on-hand at its repair contractors’ facilities when the material is actually earmarked for immediate use in the repair of another item. We have noted the Naval Inventory Control Point officials’ concerns about the potential for notification of receipt of government-furnished material to overstate inventory levels. However, one of the Navy repair contractors in our review currently uses the DOD Commercial Asset Visibility System to enter receipt of government-furnished material, without overstating the inventory levels maintained within the system. Specifically, the Navy repair contractor is using a standard module of the DOD Commercial Asset Visibility System to enter receipt of material received in usable condition from the Navy’s supply system. Upon entering a receipt of material in the system, the contractor then issues the item to itself for immediate use in the repair of another item. This process eliminates the possibility of adverse impacts to inventory levels. 
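The receipt-then-issue pattern described above can be illustrated with a minimal ledger sketch. The data structures below are hypothetical and do not reflect the actual Commercial Asset Visibility System interface; the point is simply that recording the receipt preserves the audit trail while the immediate issue returns the on-hand balance to zero.

```python
# Minimal ledger sketch of the receipt-then-issue pattern; structures
# are hypothetical, not the actual inventory system's interface.
on_hand = 0
transactions = []

def receive(qty):
    """Acknowledge receipt of government-furnished material."""
    global on_hand
    on_hand += qty
    transactions.append(("receipt", qty))

def issue(qty):
    """Issue the material for immediate use in repairing another item."""
    global on_hand
    on_hand -= qty
    transactions.append(("issue", qty))

receive(1)
issue(1)

print(on_hand)            # 0: inventory levels are not overstated
print(len(transactions))  # 2: both events remain recorded for accountability
```

As the sketch suggests, acknowledging receipt need not conflict with the officials' concern about overstated inventory levels, because the paired issue transaction offsets the receipt while both events remain on the record.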
However, the Naval Inventory Control Point is not using this module of the DOD Commercial Asset Visibility System for the receipt of material received in usable condition. Additionally, in our previous Air Force report on shipments to repair contractors, we found that in contrast to the Navy, the Air Force requires repair contractors to issue a notification of receipt for material that they have received. For fiscal year 2002, the most recent and complete data available at the time of our review, the Naval Inventory Control Point reported that 4,229 government-furnished material shipments (representing 4,301 items valued at approximately $115 million) had been shipped to its repair contractors. We randomly selected and examined 308 government-furnished material shipments, representing 344 items that were shipped to Navy repair contractors. We surveyed 29 Navy repair contractors to determine if they had recorded receipts for these shipments in their property records. The repair contractors had recorded receipts for all classified shipments. However, they could not document the receipt of 4 unclassified shipments (representing 4 items). Because our sample was randomly selected, the results can be projected to the entire universe of government-furnished material managed by the Naval Inventory Control Point. We estimate that 50 unclassified items may be unaccounted for, with a value of about $729,000 in inventory of aircraft-related government-furnished material for fiscal year 2002. Lack of contractor notification has impeded the Naval Inventory Control Point’s visibility of government-furnished material. Without such notification, the Naval Inventory Control Point cannot be assured that its repair contractors have received all shipped material.
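The projected count above is consistent with a simple expansion estimate applied to the sample figures in this report. The report does not state the exact estimation method or sampling error, so the arithmetic below is only a plausibility check, not a reconstruction of GAO's methodology.

```python
# Expansion estimate using figures from the report; the choice of
# estimator is an assumption, since the report does not state its method.
population_items = 4301      # items shipped to repair contractors in FY2002
sampled_items = 344          # items in the random sample
unaccounted_in_sample = 4    # unclassified items with no documented receipt

estimated_unaccounted = round(
    unaccounted_in_sample / sampled_items * population_items
)
print(estimated_unaccounted)  # 50, matching the report's estimate
```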
Finally, because the Naval Inventory Control Point does not have data on unconfirmed government-furnished material shipments, it lacks the ability to independently know when corrective actions, such as resolving inventory discrepancies, are needed. As a result, the Navy’s inventory continues to be at risk of fraud or loss. The Naval Inventory Control Point is also not following DOD and Navy procedures that require Naval Inventory Control Point officials to provide the Defense Contract Management Agency with quarterly status reports showing all shipments of government-furnished material that have been provided to its repair contractors. According to the procedures, these reports should include information about total shipments and their dollar value, number of shipments for which receipts are unknown, and rejected requisitions. The purpose of the reports is to assist the Defense Contract Management Agency in independently verifying contractor records of government-furnished material. A Navy procedure designates individuals within the inventory control point to serve as the management control activity to, among other things, generate and distribute the required quarterly government-furnished material status reports. According to Naval Inventory Control Point and Defense Contract Management Agency officials, the inventory control point has not provided the Defense Contract Management Agency with the required quarterly reports. The inventory control point officials stated that they were unaware of the requirement to provide quarterly reports to the Defense Contract Management Agency. Although a number of standard DOD and Navy inventory management control procedures stipulate the requirement to generate and distribute the quarterly reports, inventory control point officials have not recognized this reporting requirement.
The Naval Inventory Control Point currently lacks procedures to ensure that these quarterly reports are generated and distributed to the Defense Contract Management Agency as required by DOD and Navy procedures. Proper distribution of the quarterly status reports is vital to the management of government-furnished material. A 1995 DOD Inspector General audit report of management access to DOD’s supply system asserted that the Defense Contract Management Agency serves as the last line of defense in protecting material resources and needs an independent record of the government-furnished material shipped to repair contractors. These quarterly reports serve as such a record and prevent the Defense Contract Management Agency from having to rely solely on the repair contractors’ records of government-furnished material. Inventory worth millions of dollars is vulnerable to fraud, waste, or abuse because the Naval Inventory Control Point is not adhering to DOD inventory management control procedures for government-furnished material shipped to its repair contractors. Because the Naval Inventory Control Point has not required its repair contractors to acknowledge receipt of government-furnished material, it will continue to lack assurance that its repair contractors have received shipped material. In addition, without requiring receipts, the Naval Inventory Control Point will be unable to follow up on unconfirmed material receipts within the required 45 days. As a result, the Naval Inventory Control Point will continue to lose the ability to understand inventory management weaknesses and take necessary corrective action. 
Furthermore, without the Naval Inventory Control Point implementing procedures to ensure that quarterly reports for all shipments of government-furnished material to its repair contractors are generated and distributed to the Defense Contract Management Agency, the Defense Contract Management Agency might be impaired in its ability to serve as the last line of defense in protecting government-furnished material. To improve the control of government-furnished material shipped to Navy repair contractors, we recommend that the Secretary of Defense direct the Secretary of the Navy to instruct the Commander, Naval Inventory Control Point, to implement the following three actions: Require Navy repair contractors to acknowledge receipt of material that is received from the Navy’s supply system as prescribed by DOD procedure. Follow up on unconfirmed material receipts within the 45 days as prescribed in the DOD internal control procedures to ensure that the Naval Inventory Control Point can reconcile material shipped to and received by its repair contractors. Implement procedures to ensure that quarterly reports of all shipments of government-furnished material to Navy repair contractors are generated and distributed to the Defense Contract Management Agency. In written comments on a draft of this report, DOD concurred with our recommendations and provided estimated timelines for implementation of each of the recommendations. DOD’s written comments on this report are reprinted in their entirety in appendix II. As arranged with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from its issue date. At that time, we will send copies of this report to the appropriate congressional committees; the Secretary of Defense; the Secretary of the Navy; the Director, Office of Management and Budget; and the Director, Defense Logistics Agency. We will also make copies available to others upon request. 
In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. For the purposes of this review, government-furnished material is defined as either (1) usable items (commonly referred to as “A” condition items) that the Naval Inventory Control Point directs to be shipped from Navy wholesale warehouses to its repair contractors or (2) items that the Navy repair contractors have repaired to usable condition and issued to themselves to use in completing the repair of another item. According to Naval Inventory Control Point officials, their use of government-furnished material occurs in three different circumstances where an item is needed to complete the repair of another item because the item in question is (1) missing when the other item is inducted for repair, (2) beyond repair or beyond economical repair, or (3) needed to complete the expeditious repair of another item. To assess the Naval Inventory Control Point’s and its repair contractors’ adherence to procedures for controlling government-furnished material, we took the following steps: To identify criteria for controlling shipped inventory, we reviewed Department of Defense (DOD) and Navy procedures, obtained other relevant documentation related to shipped inventory, and discussed inventory management procedures with officials from the following locations: Headquarters, Department of the Navy, Washington, D.C.; the Naval Inventory Control Point, Mechanicsburg, Pennsylvania; the Naval Inventory Control Point, Philadelphia, Pennsylvania; and the Defense Contract Management Agency, Alexandria, Virginia. Because of long-standing problems with the accuracy of data in DOD’s inventory management systems, we took a number of measures to assess the reliability of the Naval Inventory Control Point’s data. To assess the data, we performed electronic testing for obvious errors in accuracy and completeness in the data on government-furnished material shipped to Navy repair contractors.
When we found discrepancies in the data, such as missing data elements and data entry errors, we brought them to the Naval Inventory Control Point officials’ attention and worked closely with them to correct the discrepancies before conducting our analysis. In addition, we statistically selected a random sample of the Navy’s data for review. This sampling methodology enabled us to independently verify the overall accuracy of the Navy’s data on government-furnished material shipped to Navy repair contractors. To identify the number, value, and classification of the government- furnished material shipments, we obtained computerized supply records from the Navy’s transaction history file of all shipments between October 2001 and September 2002 from the Naval Inventory Control Point’s two office locations—Philadelphia and Mechanicsburg, Pennsylvania. The records contained descriptive information about each shipment, including document number, national stock number, quantity shipped, classification, and source of supply. After some preliminary data analysis, we excluded all records from the Mechanicsburg office due to the data set mistakenly capturing nongovernment-furnished material as government-furnished material. To select Navy repair contractors and items shipped to them, we used computerized shipment data obtained from the Naval Inventory Control Point’s transaction history file, including data such as national stock number, quantity, source of supply, and transaction dates. We randomly selected 302 unclassified or sensitive shipments (representing 338 items) and selected the total population of 6 classified shipments (representing 6 items) that were issued to Navy repair contractors in fiscal year 2002 as government-furnished material. We randomly selected and sent surveys to 29 Navy repair contractors (associated with 83 unique repair contracts) for our review. 
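The sampling design described above—a simple random sample of unclassified shipments plus the entire (small) population of classified shipments—can be sketched as follows. The document identifiers are hypothetical, and the count of 4,223 unclassified shipments is an inference (4,229 total shipments minus the 6 classified ones), not a figure stated in the report.

```python
import random

# Sketch of the sampling design: a simple random sample of unclassified
# shipments plus the full population of classified shipments. Document
# identifiers are hypothetical; the 4,223 unclassified count is inferred
# (4,229 total shipments minus 6 classified).
random.seed(0)  # fixed seed so the sketch is reproducible

unclassified = [f"DOC{n:05d}" for n in range(4223)]
classified = [f"CLS{n}" for n in range(6)]

# random.sample draws without replacement, so no shipment appears twice
sample = random.sample(unclassified, 302) + classified
print(len(sample))  # 308 shipments reviewed in total
```

Taking a census of the 6 classified shipments while sampling the large unclassified stratum is a common design choice when one stratum is small enough to review in full.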
We received survey responses from 20 of the 29 Navy repair contractors, representing 207 government-furnished material shipments (corresponding to 243 items). Four of these Navy repair contractors received the 6 classified shipments. Because the Navy repair contractors and government-furnished material shipments were randomly selected, the results of our analysis can be projected to all Navy repair contractors and shipments. To determine how Navy repair contractors are granted access to government-furnished material (i.e., federal supply class or stock number), we conducted a modified statistical sample of the various contractual agreements between the Navy and its repair contractors. We initially planned to manually review a subset of 40 contracts associated with our sample shipments to determine how the government-furnished material was identified in the contract. However, we revised our approach when we discovered that the Navy’s inventory management system had the functionality to match each of our items by stock number to the associated Navy repair contracts and contractors that are repairing the items. As a result, we reviewed all of the contracts associated with our sample shipments. We found that each of our sample shipments was listed by national item identification number in the contractual repair agreement and that, based on certain circumstances (such as items missing on induction, broken beyond economical repair, or needed for the expeditious repair of another higher assembly item), these items were furnished to the contractor as government-furnished material. To determine whether Navy repair contractors had received and accounted for our selected shipments, we surveyed each randomly selected contractor to assess whether government-furnished material shipments delineated in the Naval Inventory Control Point’s supply records had been received and entered into the contractor’s property control records.
To determine what happened to sample shipments that had reportedly not been received by the Navy repair contractors, we provided a listing to the Naval Inventory Control Point in Philadelphia for further review. To learn whether issues associated with unaccounted for shipments were adequately resolved, we reviewed Department of Defense, Navy, and Naval Inventory Control Point implementing guidance. Such information provided the basis for conclusions regarding the adherence to procedures for controlling shipped inventory. The Navy repair contractors that responded to our survey were BAE Flight Systems, Nashua, New Hampshire; Boeing, Jacksonville, Florida; Lockheed Martin, Marietta, Georgia; Lockheed Martin, Oswego, New York; Lockheed Martin, Syracuse, New York; Northrop Grumman, Bethpage, New York; Northrop Grumman, Woodland Hills, California (two locations); Systems and Electronics, Inc., Sanford, Florida; Raytheon, Indianapolis, Indiana; Raytheon, Goleta, California; Raytheon, McKinney, Texas; Raytheon Technical Services Corp., Indianapolis, Indiana; General Dynamics, Bloomington, Minnesota; Global Technical Systems, Virginia Beach, Virginia; Sikorsky Aircraft, Shelton, Connecticut; Rockwell Collins, Cedar Rapids, Iowa; Beaver Aerospace, Livonia, Michigan; L3 Communications, Alpharetta, Georgia; and Parker Hannifin, Irvine, California. Our work was performed from March 2003 through April 2004 in accordance with generally accepted government auditing standards. In addition to the contact listed above, Lawson Gist, Jr., Jacqueline S. McColl, Corrie J. Dodd, Anthony C. Fernandez, Arthur L. James, Jr., Stanley J. Kostyla, David A. Mayfield, and Robert K. Wild made key contributions to this report. Defense Inventory: Air Force Needs to Improve Control of Shipments to Repair Contractors. GAO-02-617. Washington, D.C.: July 1, 2002. Performance and Accountability Series: Major Management Challenges and Program Risks—Department of Defense. GAO-01-244. 
Washington, D.C.: January 1, 2001. High-Risk Series: An Update. GAO-01-263. Washington, D.C.: January 1, 2001. Defense Inventory: Plan to Improve Management of Shipped Inventory Should Be Strengthened. GAO/NSIAD-00-39. Washington, D.C.: February 22, 2000. Department of the Navy: Breakdown of In-Transit Inventory Process Leaves It Vulnerable to Fraud. GAO/OSI/NSIAD-00-61. Washington, D.C.: February 2, 2000. Defense Inventory: Property Being Shipped to Disposal Is Not Properly Controlled. GAO/NSIAD-99-84. Washington, D.C.: July 1, 1999. DOD Financial Management: More Reliable Information Key to Assuring Accountability and Managing Defense Operations More Efficiently. GAO/T-AIMD/NSIAD-99-145. Washington, D.C.: April 14, 1999. Defense Inventory: DOD Could Improve Total Asset Visibility Initiative With Results Act Framework. GAO/NSIAD-99-40. Washington, D.C.: April 12, 1999. Defense Inventory: Navy Procedures for Controlling In-Transit Items Are Not Being Followed. GAO/NSIAD-99-61. Washington, D.C.: March 31, 1999. Performance and Accountability Series: Major Management Challenges and Program Risks—Department of Defense. GAO/OCG-99-4. Washington, D.C.: January 1, 1999. High-Risk Series: An Update. GAO/HR-99-1. Washington, D.C.: January 1, 1999. Department of Defense: Financial Audits Highlight Continuing Challenges to Correct Serious Financial Management Problems. GAO/T-AIMD/NSIAD-98-158. Washington, D.C.: April 16, 1998. Department of Defense: In-Transit Inventory. GAO/NSIAD-98-80R. Washington, D.C.: February 27, 1998. Inventory Management: Vulnerability of Sensitive Defense Material to Theft. GAO/NSIAD-97-175. Washington, D.C.: September 19, 1997. Defense Inventory Management: Problems, Progress, and Additional Actions Needed. GAO/T-NSIAD-97-109. Washington, D.C.: March 20, 1997. High-Risk Series: Defense Inventory Management. GAO/HR-97-5. Washington, D.C.: February 1, 1997. High-Risk Series: Defense Financial Management. GAO/HR-97-3. 
Washington, D.C.: February 1, 1997.
GAO has reported in a number of products that the lack of control over inventory shipments increases vulnerability to undetected loss or theft and substantially increases the risk that millions of dollars can be spent unnecessarily. This report evaluates the Navy's and its repair contractors' adherence to Department of Defense (DOD) and Navy inventory management control procedures for government-furnished material shipped to Navy repair contractors. Government-furnished material includes assemblies, parts, and other items that are provided to contractors to support repairs, alterations, and modifications. Generally, this material is incorporated into or attached onto deliverable end items, such as aircraft, or consumed or expended in performing a contract. The Naval Inventory Control Point and its repair contractors have not followed DOD and Navy inventory management control procedures intended to provide accountability for and visibility of government-furnished material shipped to Navy repair contractors. As a result, Navy inventory worth millions of dollars is vulnerable to fraud, waste, or abuse. First, Navy repair contractors are not acknowledging receipt of government-furnished material from the Navy's supply system. Although a DOD procedure states that contractors will notify the military services' inventory control points once material is received, Naval Inventory Control Point officials are not requiring their repair contractors to do so. Consequently, the Naval Inventory Control Point is not adhering to another DOD procedure that requires the military services to follow up with repair contractors within 45 days when the receipts for items are not confirmed. Naval Inventory Control Point officials stated that receipting for government-furnished material, which is earmarked for immediate consumption in the repair of another item, might overstate the inventory levels in their inventory management system.
Without material receipt notification, the Naval Inventory Control Point cannot be assured that its repair contractors have received the material. For fiscal year 2002, the most recent and complete data available at the time of GAO's review, the Naval Inventory Control Point reported that 4,229 government-furnished material shipments (representing 4,301 items valued at approximately $115 million) had been shipped to its repair contractors. GAO randomly selected and examined 308 government-furnished material shipments, representing 344 items that were shipped to the Navy's repair contractors. Based on this random sample, GAO estimated that 50 unclassified items of aircraft-related government-furnished material, valued at about $729,000, may be unaccounted for. Additionally, the Naval Inventory Control Point does not send quarterly reports on the status of government-furnished material shipped to its repair contractors to the Defense Contract Management Agency. DOD and Navy procedures require that the Naval Inventory Control Point generate and distribute to the Defense Contract Management Agency quarterly reports on government-furnished material shipped to repair contractors, including information on total shipments and their dollar value, number of shipments for which receipts are unknown, and rejected requisitions. Although there are a number of DOD and Navy procedures that outline this reporting requirement, the Naval Inventory Control Point officials responsible for implementing this procedure were unaware of the requirement. The Naval Inventory Control Point lacks procedures to ensure that these reports are generated and distributed to the Defense Contract Management Agency. Without the reports, the Defense Contract Management Agency may be unable to independently verify that the Navy repair contractors have accounted for all government-furnished material shipped to them.
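The projection from GAO's random sample to the full shipment population follows standard sample-proportion arithmetic. A minimal sketch, assuming a simple (unweighted) random sample and an illustrative in-sample count of 4 unaccounted-for items chosen to be consistent with the figures above; GAO's actual estimate would use its formal sampling design:

```python
# Projecting a sample finding to the population of shipped items.
# Assumptions (not from the report): a simple random sample and an
# in-sample count of 4 unaccounted-for items, chosen to be consistent
# with the published estimate of about 50 items.

population_items = 4301       # items shipped to repair contractors in FY 2002
sample_items = 344            # items GAO examined
unaccounted_in_sample = 4     # assumed illustrative count

sample_rate = unaccounted_in_sample / sample_items
projected_unaccounted = round(sample_rate * population_items)
print(projected_unaccounted)  # 50
```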
To meet our objective we took the following steps:
- Reviewed and analyzed IRS documents and data, including performance and workload data, reports, testimonies, and budget submissions, and compared these to IRS’s goals and past performance to identify trends and anomalies.
- Reviewed various criteria, including industry standards, federal requirements, and best practices, to assess IRS’s performance in key areas.
- Reviewed information from other organizations that compile data pertinent to our objectives, such as the ForeSee Results IRS Satisfaction Insight Review, which evaluates customer satisfaction with Web site performance.
- Interviewed IRS officials responsible for tax return processing, taxpayer services, and examination and compliance activities.
- Interviewed external stakeholders who frequently interact with IRS on key aspects of the filing season, including representatives of a customer service trade organization, to identify customer service benchmarks and best practices, and representatives from major tax preparation firms and organizations.
- Observed operations at IRS’s Joint Operations Center (which manages telephone services) and listened to calls from taxpayers with telephone assistors. We also viewed operations at one of IRS’s walk-in sites and the Submission Processing Center in Atlanta, Ga. We selected these particular offices for a variety of reasons, including the location of key equipment and IRS managers.
- Reviewed Treasury Inspector General for Tax Administration (TIGTA) reports and interviewed TIGTA officials about IRS’s performance and initiatives.
When data were available, we compared IRS’s 2010 performance to its performance from fiscal years 2005 through 2009. IRS officials noted that tax law changes affected performance during fiscal years 2008 and 2009 as compared to 2005 through 2007, when tax law changes were not as significant.
This report discusses numerous filing season and performance measures and data covering the quality, accessibility, and timeliness of IRS’s services. To the extent possible, we corroborated information from interviews with documentation and data and where not possible, we attribute the information to IRS officials. We reviewed IRS documentation, interviewed IRS officials about computer systems and data limitations, and compared those results to our standards of data reliability. Data limitations are discussed where appropriate. We consider the data presented in this report to be sufficiently reliable for our purposes. We conducted our work primarily at IRS headquarters in Washington, D.C. and at the Wage and Investment Division headquarters in Atlanta, Ga. as well as other sites mentioned earlier. We conducted this performance audit from February 2010 through December 2010 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objective. Each year during the filing season IRS accepts individual income tax returns electronically and on paper, processes the returns, and validates key pieces of information. A growing majority of taxpayers file their individual tax returns electronically. IRS uses the legacy Individual Master File (IMF) and current Customer Account Data Engine (CADE) to process individual income tax returns. IRS plans to eventually shift all return processing to a new system called CADE 2, which is intended to facilitate faster refund processing and other benefits, such as providing IRS with more up-to-date account information and more timely responses to taxpayer inquiries. 
IRS is also replacing its legacy electronic filing (e-file) system with the Modernized e-File (MeF) system. IRS cannot accept electronically filed returns directly from taxpayers. Rather, IRS-authorized e-file providers transmit returns to IRS electronically. Return transmitters send electronic return data directly to IRS using either the MeF or the legacy e-file system. The MeF system is intended to accept or reject individual tax returns faster than the legacy system. In addition, if the return is rejected, the MeF system should provide better information regarding why the return was rejected. The MeF system is also intended to allow taxpayers to attach portable document format (PDF) files to their tax returns, which will be useful in instances where taxpayers are required to submit additional documentation, such as for the FTHBC. Finally, MeF serves as a single point of submission for federal and state tax return information. IRS is planning to fully implement the MeF system in time for the 2012 filing season, and retire the legacy e-file system in October 2012. As in the last few filing seasons, in 2010, IRS administered complex tax law changes, including the MWP and Residential Energy Property Credits—part of the American Recovery and Reinvestment Act of 2009 (ARRA)—and the FTHBC. The refundable MWP tax credit provides up to $400 for working individuals and up to $800 for married taxpayers filing joint returns. Individuals who received Social Security, Railroad Retirement, or Veteran’s benefits received a $250 Economic Recovery Payment in 2009, which reduced the amount of the MWP credit they were eligible to receive. The Residential Energy Property Credit increases the existing credit rate to 30 percent of cost up to a maximum credit of $1,500 for homeowners who make certain energy efficient improvements to existing homes.
As we have previously reported, since 2008, Congress has enacted three versions of the FTHBC to help stimulate the housing market by providing first-time homebuyers and some long-term homeowners with a refundable tax credit to assist with the purchase of a home. Appendix II summarizes key tax law changes that affected recent filing seasons. Part of IRS’s filing season work involves correcting errors on tax returns, which can benefit both IRS and taxpayers. Correcting errors before issuing refunds allows IRS to avoid costly and burdensome audits and taxpayers may receive larger refunds or be made aware of additional taxes owed before being required to pay interest and penalties. For example, IRS used math error authority (MEA) to identify and correct errors with the FTHBC. MEA allows IRS to identify calculation errors and check for obvious noncompliance, such as claims above income and credit limits. These automated and relatively low-cost (compared to audits) math error checks increase the likelihood of IRS collecting the correct amount of tax owed. Congress must grant IRS specific authority to use MEA for purposes beyond computational errors. We previously recommended that Congress broaden IRS’s MEA with appropriate safeguards to prevent its misuse. In addition to processing tax returns, IRS also provides tax law and account assistance, limited return preparation, tax forms and publications, and outreach and education, primarily through its telephone services, Web site, and, to a much lesser extent, through face-to-face assistance. For example, IRS staff provides assistance at 401 walk-in sites where taxpayers can receive basic tax law assistance, receive assistance with their accounts, and have returns prepared by IRS if their annual income is $49,000 or less. 
IRS also has volunteer partners that staff over 12,000 volunteer sites, which help serve traditionally underserved taxpayer segments, including elderly taxpayers, low-income taxpayers, and taxpayers with limited English proficiency. Continued improvements to telephone service and IRS’s Web site could help reduce the demand for taxpayer service at walk-in and volunteer sites. The number of tax returns IRS processed in 2010 declined by about 2 percent from the prior year, as shown in table 1. However, electronically filed returns continued to increase, reaching 71 percent of all returns. As we have previously reported, electronic filing is important because it allows taxpayers to receive refunds faster, is less prone to transcription and other errors, and provides IRS significant cost savings. For example, for fiscal year 2009, IRS reported that it costs 19 cents to process an e-filed return compared to $3.29 for a paper return. Table 1 also shows that IRS issued about 2 percent fewer refunds in 2010 compared to 2009, with the average refund amount being about $2,915. In all, IRS issued about $312 billion in refunds during the 2010 filing season. IRS processed about the same number of returns on current CADE as it did in 2009 (just over 41 million). In 2012, IRS plans to establish and use the new CADE 2 database in conjunction with its legacy system for daily processing of individual taxpayer accounts. Current CADE processes returns about 1 to 8 days faster than the legacy system, and at present, only taxpayers whose accounts have been moved to that system get this benefit (i.e., about 30 percent of all individual returns). Although IRS once intended for current CADE to replace the legacy system, this is no longer the case. Rather, beginning in the 2012 filing season, IRS plans to introduce daily processing for most returns using the legacy system and the new CADE 2 database.
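The per-return cost figures above imply a large unit saving from electronic filing. A quick sketch of the arithmetic (the one-million-return volume is an illustrative assumption, not an IRS figure):

```python
# IRS-reported fiscal year 2009 processing costs per individual return.
COST_EFILE = 0.19   # dollars per e-filed return
COST_PAPER = 3.29   # dollars per paper return

saving_per_return = COST_PAPER - COST_EFILE
print(f"${saving_per_return:.2f} saved per return shifted to e-file")  # $3.10

# Illustrative assumption: shifting 1 million paper returns to e-file.
shifted_returns = 1_000_000
print(f"${saving_per_return * shifted_returns:,.0f} in processing cost savings")  # $3,100,000
```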
At that time, IRS officials expect that the majority of individual taxpayers will receive the benefit of faster refunds in addition to other benefits. Although IRS began using MeF to accept individual returns for the first time in 2010, the system was underutilized. Return transmitters submitted about 7 percent of the total number of returns that IRS officials projected the MeF system could accept. IRS officials cited several reasons for lack of use of the MeF system, including that it is unproven compared to the current legacy e-file system. They also noted that the legacy e-file system had a lower rejection rate than MeF and return transmitters may have stopped using MeF after encountering performance problems. In interviews with GAO and in a survey conducted by the Electronic Tax Administration Advisory Committee, a group of stakeholders that offers suggestions about current or proposed electronic tax administration policies, industry stakeholders who are major users of electronic filing cited MeF system “instability” (system down-time, time-outs, slow servers, and delayed acknowledgments) as a major reason for low use of the MeF system. For the 2011 filing season, IRS officials expect that the MeF system will be capable of accepting up to about 85 million individual returns. However, IRS officials acknowledged that until IRS overcomes the performance and stability issues experienced in 2010, transmitters are likely to continue to send Form 1040 returns to the legacy electronic filing system. Transmitters may still be reluctant to switch to MeF until the system is proven to be stable, and their participation is still voluntary until the legacy system is turned off in 2012. To ensure that issues with the stability of the system do not persist in 2011, IRS officials are testing the MeF system in preparation for next year.
IRS officials expect that by 2012, when the legacy system is scheduled to be turned off, MeF should be able to accept all individual returns filed electronically. IRS corrected a large number of taxpayer MWP errors and identified returns with residential energy credits using its Error Resolution System (ERS) this filing season. In addition, IRS applied filters for pre-refund examinations on certain FTHBC returns. IRS officials said that the combined effect of these actions resulted in longer processing times in general—not just for returns with MWP errors or FTHBC claims. In total, about 26 million returns, or about 20 percent of all returns processed, went to ERS this year. IRS officials said it generally takes approximately one week to correct returns in ERS; however, between March and May of 2010, it took up to two weeks to process these returns. Appendix III describes in more detail the large number of returns sent to ERS this filing season. Correcting returns benefited either taxpayers or IRS. For example, IRS corrected millions of MWP errors in favor of taxpayers, meaning taxpayers received larger refunds (or had a lower balance due) than they anticipated when they filed their return. In addition, applying filters for pre-refund examinations allowed IRS to prevent millions of dollars from being issued for ineligible FTHBC claims. Although applying these filters often results in IRS identifying incorrect refunds, we have previously reported that in some cases the filters applied were not sufficient to stop incorrect refunds from being sent to taxpayers. In part due to these complications, IRS’s timeliness in issuing refunds declined by 3 percentage points, marking its lowest level since at least 2005. According to our calculations, this translates to about 3.3 million more refunds being delayed through August 2010 compared to last year. 
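The estimate of 3.3 million additional delayed refunds can be roughly checked by applying the 3 percentage point decline in timeliness to the season's refund volume, which can itself be derived from the totals reported elsewhere in this report ($312 billion in refunds at an average of about $2,915 each). This is an illustrative approximation; the exact GAO calculation may differ:

```python
# Approximate number of refunds issued during the 2010 filing season,
# derived from figures reported earlier in this report.
total_refund_dollars = 312e9    # total refunds issued
average_refund = 2915           # average refund amount in dollars

refunds_issued = total_refund_dollars / average_refund   # ~107 million refunds
decline = 0.03                                           # 3 percentage point drop in timeliness

additional_delayed = refunds_issued * decline
print(f"{additional_delayed / 1e6:.1f} million refunds")  # ~3.2 million, near the 3.3 million cited
```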
Delays in providing refunds adversely affect taxpayers because it takes longer for them to receive their refunds, and the delays contribute to taxpayer calls about the status of refunds. In addition, IRS paid significantly more interest on those refunds than in previous years, which imposed additional costs on the federal government. For example, IRS paid about $12.6 million in refund interest through August 2010, about $8 million more than in 2009. On the other hand, rapid processing of refunds without proper checks can lead to erroneous refund payments, which can be costly to the federal government. Recovering erroneous refunds also imposes additional burdens on taxpayers. Finally, although IRS missed its fiscal year goals for refund timeliness and refund interest paid, IRS met four key processing goals—correspondence error rate, deposit error rate, productivity, and refund error rate. Appendix IV defines and summarizes IRS’s processing performance compared to goals from 2005 to 2010. The percentage of callers seeking live assistance who actually received it—referred to as IRS’s Customer Service Representative Level of Service (LOS)—improved to 76 percent in 2010 as compared to the previous 2 years, as shown in table 2. However, taxpayers waited almost 10 minutes on average to speak with a phone assistor in 2010. This is the longest average wait time since at least 2005, and IRS officials attribute it in part to an increase in the number of calls from taxpayers inquiring about their individual tax account. Taxpayers’ access to phone assistors in 2010 was below the levels from 2005 through 2007 and, although IRS met its goals in 2010, the goals were lower than any previous year since before 2005. IRS’s LOS goal for 2010 was 11 percentage points lower than from 2005 through 2008. As in 2008 and 2009, IRS continued to receive millions of calls related to tax law changes, including for the FTHBC and MWP.
From February 8 through June 30, 2010, calls about these two credits accounted for 9 percent of IRS’s telephone services. In response to MWP calls, in March 2010, IRS introduced an automated application to reduce the number of taxpayers needing to talk to a live assistor. IRS received 77 million telephone calls during the 2010 filing season, about the same as in 2009, as table 3 shows. IRS’s automated phone system answered about 25 percent more calls compared to 2009, which is due in part to a new automated phone service which enables taxpayers to request their electronic filing personal identification number (PIN), as well as a 16 percent increase in the use of the refund automated application. IRS phone assistors answered about 24 million calls at a cost, according to IRS officials, of about $25 per call, or about $600 million from January 1 through June 30, 2010. IRS responded to these calls using 24 call centers with about 5,300 full-time equivalents (FTE). The accuracy of IRS’s telephone assistors’ responses to tax law and account-related questions was about the same as last year and exceeded IRS’s fiscal year 2010 goals, as shown in table 4. IRS officials attribute continued levels of accuracy to a number of factors, such as the use of automated assistance tools and targeted training of assistors. The decline in IRS’s live telephone assistance goal from 82 percent in 2005 through 2008 to 71 percent in 2010 raises questions about what constitutes good customer service. Executive Order 12862 instructs federal agencies to establish and measure performance against customer service standards, which are to be equal to the quality of service offered by private organizations providing a comparable service. A related Presidential Memorandum introduced in 1995 and still in effect also notes that customer service standards should reflect customer views. 
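The roughly $600 million cost of assistor-answered calls is consistent with multiplying the reported call volume by the per-call cost; a quick sketch:

```python
# Assistor-answered call workload and unit cost, January 1 - June 30, 2010,
# as reported by IRS officials.
calls_answered = 24_000_000   # calls answered by phone assistors
cost_per_call = 25            # approximate dollars per assistor-answered call

total_cost = calls_answered * cost_per_call
print(f"${total_cost:,}")  # $600,000,000
```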
In addition, we have reported that performance data should be used to identify and analyze the gap between an organization’s actual performance and desired outcomes, including by setting performance benchmarks to compare an organization with private organizations that are thought to be the best in their field. IRS sets an annual goal for LOS performance based on resource availability, the expected number and complexity of calls, and anticipated volume of taxpayer correspondence, and subsequently determines weekly and other performance targets to achieve its annual goal. IRS’s LOS goal would differ from a customer service standard in that it measures what IRS management determines is attainable given current resources and expected call volume, compared to defining standards based on the quality of service provided by comparable organizations and on what matters most to the customer—in this case, taxpayers. According to IRS officials, they would be able to determine a customer service telephone standard and could provide cost estimates to achieve the standard. Once set, however, IRS officials identified several challenges to meeting such a standard, including: the potential need for additional resources; the need to balance resources between telephone services, other taxpayer services, and enforcement activities; unexpected changes in agency priorities which require the flexibility to shift resources to respond; and potentially significant fluctuations in call volume, including those resulting from tax law changes. IRS officials expressed concern that developing a customer service telephone standard could create the expectation that the agency would achieve that standard each year, even when resources, call volume, and other priorities may make the standard unattainable. This could be the case even if the annual goal is set at a level that is attainable. 
However, as noted above, a customer service standard is something to strive for and is different from an annual performance goal. Adding a customer service telephone standard would make the gap between the standard and annual performance goals transparent. Further, it could help IRS communicate its resource needs and help Congress make more informed decisions about IRS’s budget. According to senior IRS officials, IRS has a process that includes holding regular team meetings to solicit ideas from frontline phone assistors about how to improve service. For example, the meetings are intended to help managers identify trends in call topics that might benefit from further research about the source of taxpayer confusion that could lead to finding less costly ways to provide assistance. However, we identified several gaps in this process. Specifically:
- Staff responsible for analyzing IRS’s telephone calls using a research tool called Contact Analytics said determining appropriate search terms to effectively identify calls from taxpayers is one key challenge to using the system. This is something that could be improved by using frontline employees to identify search terms or trends in call topics.
- Managers and frontline phone assistors with whom we met considered the process to be informal, with phone managers noting some issues to more senior management.
- According to IRS officials responsible for phone services, IRS did not consult with frontline phone assistors to obtain input on taxpayer call topics when reviewing call trends and adding an additional telephone service line for 2010.
The telephone service industry considers holding regular meetings with experienced phone assistors to identify trends in call topics to be a key element in improving service.
By not consistently using existing processes to solicit input from IRS’s frontline employees to identify issues for further research, including contributing ideas for Contact Analytics, IRS may miss areas of importance to taxpayers which could improve taxpayer service. Contact Analytics allows IRS to search recorded interactions between taxpayers and IRS assistors to enhance the taxpayer’s experience by gaining a better understanding of the reasons taxpayers call IRS and identifying opportunities for cost savings or efficiency gains. However, Contact Analytics is not used to access phone calls older than 45 days. According to an IRS official, the current 45-day limit causes IRS’s business units to use more time-consuming processes to analyze calls for proposed improvements to IRS’s telephone operations. For example, as part of their planned review of refund inquiry calls received during the 2010 filing season, IRS officials explained they used an alternative system to listen to calls because they needed to review data for several months of the filing season, a period that extended beyond the 45 days available through Contact Analytics. According to officials with IRS’s Contact Analytics office, IRS is only able to store recorded calls for 45 days because of the expense of storing the calls and limited storage space. Separately, IRS officials responsible for recording and storing calls explained that IRS developed its policy on the length of time to store data, including calls available for analysis through Contact Analytics several years ago. IRS officials acknowledged they have not surveyed the business units as to whether they need to store the recorded calls for a longer period or analyzed whether the benefits of storing the calls longer would exceed the costs. IRS’s frontline assistor staff is trained to respond to both telephone inquiries and taxpayer correspondence. 
IRS shifts staff between these two areas based on the volume of work and resource availability. For fiscal year 2010, IRS dedicated about 5,800 FTEs to taxpayer correspondence and, as we noted earlier, about 9,400 FTEs to telephone service. IRS received about 20 million letters, forms, and other types of taxpayer correspondence in 2010, a slight increase as compared to 2009 and a 25 percent increase compared to 2007. Compared to earlier years, such as 2005, the average percentage of taxpayer correspondence overage has increased significantly, as shown in table 5, which IRS officials attribute to legislative tax law changes and a corresponding increase in the volume of amended returns. Amended returns make up a significant portion of IRS’s taxpayer correspondence work and IRS has processed an increasing number of amended returns since 2005, due in part to taxpayers’ taking advantage of tax law changes. Taxpayers are not able to file amended returns electronically, which leads to increased processing time for taxpayers and added expense for IRS. For many of its processes, IRS has established performance measures to make managers and frontline staff more accountable for improving performance. As we previously reported, performance measures should provide a clear link to organizational priorities to provide useful information for decision making. Since one measure may not encompass the entire performance of a program area, IRS’s balanced measures include measures to assess employee satisfaction, customer satisfaction, and business results. The business results measures generally take into account both the quality and quantity, or productivity, of IRS’s work. As we previously mentioned, IRS has a number of balanced performance measures to monitor the productivity of its business results for telephone service, including average wait time and the percentage of callers seeking live assistance who actually receive it. 
In all, IRS has five balanced performance measures that address the productivity of its telephone service. IRS has one taxpayer correspondence performance measure that addresses productivity—customer accounts resolved. However, this measure does not account for the timeliness of its correspondence services to taxpayers, which is one of IRS’s organizational priorities. IRS currently measures the timeliness of the employee’s work. However, this measure does not evaluate the time a taxpayer waits for a response. According to IRS officials, IRS uses a number of indicators to monitor its taxpayer correspondence workload, including size of inventory and weekly closures, and to make workforce management decisions. In addition, a number of these indicators assess the timeliness of IRS’s response to taxpayers. For example, IRS indicators show that for 2008 and 2009, on average, 23 to 25 percent of IRS’s taxpayer correspondence portfolio has been overage, while in 2010 the percentage overage increased to 27 percent. IRS management uses the percentage overage indicator, among others, to make weekly workforce management decisions, including the allocation of staff between telephone service and paper correspondence. For example, IRS has a computer program that helps IRS officials balance telephone and paper inventories and allocate staff between those two areas. IRS recognizes that providing timely taxpayer correspondence service is its highest improvement opportunity for paper inventory. However, without elevating timeliness to taxpayers as part of its suite of balanced performance measures for taxpayer correspondence, IRS management risks prioritizing telephone or other services, for which such measures already exist, over providing timely taxpayer correspondence. Balanced performance measures are recognized agency priorities and are communicated as such to frontline staff.
The lack of balanced performance measures addressing the timeliness of IRS’s response to taxpayers may explain, in part, why such a large percentage of IRS’s taxpayer correspondence is overage. This is particularly important as IRS makes trade-offs between providing telephone service and responding to taxpayer correspondence. Ensuring the timeliness of IRS’s response to taxpayer correspondence directly reduces the volume of calls made to IRS’s telephone services, which can represent significant annual cost savings as phones represent one of the most expensive forms of taxpayer services. Visits to IRS’s Web site continue to increase, and in particular, the use of automated services like “Where’s My Refund?” is substantially higher than in 2007, as table 6 shows. Specifically, IRS piloted four new automated Web services in 2010. According to IRS officials, these automated services are designed to reduce calls to phone assistors by providing alternative channels for taxpayers to access information:
- the Did I Receive a 2009 Economic Recovery Payment? application, which determines whether the taxpayer received the $250 stimulus payment in 2009;
- the Electronic Filing Personal Identification Number (PIN) application, which enables taxpayers to request their PIN to sign and file their return electronically;
- seven Interactive Tax Assistant topics, which use interactive question-and-response processes, similar to what is used by phone assistors, to answer taxpayer questions about common tax law issues such as filing status, standard deduction, and eligibility for the Child Tax Credit; and
- a state-by-state partial list of volunteer tax preparation sites with contact and availability information.
In the Taxpayer Assistance Blueprint (TAB), IRS’s 5-year strategic plan for improving service to taxpayers, IRS identified five Web site management control gaps and corresponding improvements that would allow IRS to maximize the opportunities to provide taxpayer service through its Web site. IRS has taken action to address some of these gaps. However, other gaps, including those related to content management and usability reviews, have not been fully addressed. We previously identified management controls as necessary to ensuring the effectiveness and efficiency of operations and the use of resources. IRS officials said that IRS is taking actions to address elements of these management control gaps, such as development of new Web content management guidance and usability review guidance, which are expected to be in place by January 2011. Ensuring effective management control for its Web site is especially important in light of IRS’s planned improvements. From January 1 through April 15, 2010, IRS’s 2010 taxpayer satisfaction survey results found that 73 percent of surveyed visitors to www.irs.gov reported that they obtained the information or services they were seeking, the same level as in 2009, but a 5 percent decrease compared to 2007. In response, IRS is taking steps to improve its Web site, including investing $320 million over 10 years to introduce a new site by the 2013 filing season. This $320 million is being used for Web operations, including a Web site help desk, development of interactive Web applications, and program management of IRS’s Web site and registered user and employee sites. According to IRS officials, the new Web site should provide IRS with a strengthened technical infrastructure that would allow for easier updates on the site and new automated features. Through April 30, 2010, IRS received 2.8 million taxpayer contacts at its 401 IRS walk-in sites, about the same as last year.
To increase taxpayer access to assistance, IRS piloted extended Saturday and evening hours at 16 walk-in sites, held five Saturday open house events during the fiscal year, and expanded a pilot project to place walk-in site employees in volunteer tax preparation sites to provide assistance with accounts and tax law questions. From January 1 through April 15, 2010, IRS employees worked with approximately 5,300 taxpayers at 27 volunteer sites. According to IRS officials, they plan to continue the programs in 2011. As of April 30, 2010, the accuracy of accounts and tax law assistance provided at IRS walk-in sites continued to improve, as table 7 shows. IRS officials attribute this increased accuracy to the continued use of the Interactive Tax Law Assistant, which guides assistors through a series of questions to provide accurate and consistent responses to taxpayers' questions. Further, IRS introduced a new return preparation assistance accuracy measure, which assesses the extent to which IRS staff prepare accurate returns. According to IRS officials, IRS partnered with community-based organizations that ran 12,326 volunteer sites, staffed with 87,602 volunteers, in 2010. Through April 25, 2010, volunteers prepared 2.9 million tax returns, about the same as last year. Return preparation accuracy by volunteers increased compared to 2009: for 2010, volunteers achieved an 85 percent accuracy rate for return preparation, compared to 78 percent last year. According to IRS officials, this increase resulted from IRS's new requirement that volunteers use an IRS-approved intake sheet, expanded training of volunteers, and increased IRS monitoring visits. For more detail on the number of contacts at walk-in and volunteer sites, see appendix VI. IRS is expanding its program to support its volunteer partners as they work with taxpayers to promote financial education and asset building.
One initiative of this program is to help taxpayers who may not have an account at a bank, savings and loan, credit union, or other financial institution receive their refunds through direct deposit onto a debit card issued by one of IRS's national bank partners. IRS anticipates that these efforts may reduce taxpayer use of refund anticipation loans by providing taxpayers with a low-cost or no-cost option for receiving refunds quickly. In 2010, the program, which already had a low participation rate, drew far fewer taxpayers than in 2009, despite an increase in the number of sites offering the cards from 15 in 2009 to 20 in 2010. Fewer than 3 percent of taxpayers eligible for the program elected to receive their refund on a debit card, compared to 2009, when 8 percent of eligible taxpayers participated. According to IRS officials, poor program participation is due in part to the challenge of appropriately marketing the program to taxpayers. However, other factors may have contributed, including the limited number of volunteers available to administer the program at volunteer sites and additional training requirements for volunteers distributing the cards. IRS included its partner bank institutions in its evaluation of the program's 2010 performance, but the evaluation did not include other key stakeholders, such as taxpayers or partners from the volunteer sites where the program was implemented. By not including these other key stakeholders, IRS may not have fully identified the causes of the program's poor participation rate. As we have previously reported, according to the American Evaluation Association's Guiding Principles for Evaluators, evaluations should include the relevant perspectives and interests of the full range of stakeholders. IRS plans to continue to facilitate the program and is carrying out a study and pilot test to improve marketing of the debit card at volunteer sites.
However, without an understanding, informed by multiple stakeholder perspectives, of why taxpayer participation in the program was low, including reasons beyond how the program was marketed, IRS risks missing opportunities to increase participation. The filing season is a large-scale, complex effort that requires IRS to balance resources across processing returns (including some pre-refund compliance verification) and providing assistance to taxpayers via telephone, mail, walk-in sites, and IRS's Web site. Although IRS dealt with a number of challenges this filing season, its performance improved in some areas and IRS met some goals. Efficiency gains realized from continued growth in electronic filing contributed to IRS's performance. However, the combined effects of recent tax law changes and changes in taxpayer behavior can be seen in the fluctuations in IRS's performance—not just this year, but in previous years as well. During 2010, IRS's performance in issuing timely refunds decreased—a result, in part, of correcting millions of taxpayer errors to the benefit of taxpayers and the federal government. In addition, telephone service, although better than last year, remained below 2005 through 2007 levels, and IRS continues to have a significant amount of overage taxpayer correspondence. IRS management faces trade-offs in determining how best to allocate resources among these priorities. For example, when IRS dedicates more resources to providing quality telephone service, fewer resources are available to respond to paper correspondence, and vice versa. Opportunities exist for IRS to improve the information available for deciding between competing priorities and to gain efficiencies. Establishing a customer service standard for telephone service would provide Congress with better information on the resources needed for IRS to deliver better telephone service.
Further leveraging powerful tools already in IRS's arsenal—namely, its own staff and the data contained in Contact Analytics—should help IRS gain efficiencies. For example, assessing the costs and benefits of storing recorded calls for longer than the current 45-day period could help IRS use Contact Analytics to better determine why taxpayers call. Further, establishing a performance measure for the timeliness of its taxpayer correspondence should help IRS better manage its full range of interactions with taxpayers. Involving all key stakeholders in reviews of important initiatives, such as expanding the effort to provide refunds on debit cards, will lead to a more complete understanding of why such initiatives are or are not working. To gain efficiencies and improve taxpayer service, the Commissioner of Internal Revenue should direct the appropriate officials to:
1. Based on the quality of service provided by comparable organizations and on what matters most to the customer, determine a customer service telephone standard, and the resources required to achieve this standard, based on input from Congress and other stakeholders;
2. Use the existing process of regular team meetings with frontline telephone assistors to solicit information on call trends and other potential improvements to phone service and to supplement issues identified using Contact Analytics;
3. Assess business units' needs for holding Contact Analytics calls beyond 45 days and store calls for this period, or document that the costs of doing so exceed the benefits;
4. Establish a performance measure for taxpayer correspondence that includes providing timely service to taxpayers; and
5. Establish an evaluation plan for the 2011 filing season debit card program that includes taxpayers, volunteer site partners, and other stakeholders and assesses the full range of reasons for program participation rates.
We provided a draft of this report to the Commissioner of Internal Revenue.
We received written comments from the Deputy Commissioner for Services and Enforcement, which are reprinted in appendix I. IRS also suggested technical changes to the report, which we incorporated where appropriate. In response to our draft report, the Deputy Commissioner expressed appreciation to GAO for recognizing IRS's significant achievements in delivering the 2010 filing season despite the challenges presented by several complex tax law changes. Of the five recommendations, the Deputy Commissioner agreed with two and, although he did not explicitly agree, described steps IRS is taking to address a third. He disagreed with the remaining two. The Deputy Commissioner agreed with the recommendation to use the existing process of regular team meetings with frontline telephone assistors to solicit information on call trends and other potential improvements to phone service and to supplement issues identified using Contact Analytics. He also agreed with the recommendation to develop a performance measure for taxpayer correspondence that includes providing timely service to taxpayers. Further, in response to the recommendation that IRS establish an evaluation plan for the debit card program that includes taxpayers, volunteer site partners, and other stakeholders and assesses the reasons for program participation rates, he described steps IRS plans to take to assess the program's participation rates. The Deputy Commissioner disagreed with the recommendation that IRS develop a customer service telephone standard, stating that he does not believe IRS needs to revise its current process for measuring telephone service at this time because IRS already develops its telephone plans after considering many factors. Such factors include historical call demand and the types and anticipated lengths of calls. However, a customer service telephone standard would serve as a means of communicating to Congress and others what IRS believes would constitute good customer service.
Having such a standard would make the gap between the standard and annual performance goals more transparent. We recognize that IRS may not be able to achieve the standard because of factors such as unexpected call volume and competing resources. The intent is to highlight for Congress and others the gap between good service and what IRS is able to attain. In addition, developing such a standard would put IRS in compliance with Office of Management and Budget guidance that requires agencies to develop customer service standards. Accordingly, we believe this recommendation remains valid. Finally, the Deputy Commissioner disagreed with the recommendation that IRS assess its business units’ needs for holding Contact Analytics calls beyond 45 days and store calls for this period or document that the costs of doing so exceed the benefits. He stressed that IRS’s Contact Recording System is used to store calls and the Contact Analytics system is used to analyze some calls, noting that IRS is confident that storing calls beyond 45 days would not be a low-cost effort. However, IRS officials responsible for recording and storing calls told us that IRS developed its policy on how long to store data, including calls available through Contact Analytics, several years ago. As we note in our report, IRS officials acknowledged that they have not surveyed business units as to whether they need to store the calls for a longer period or analyzed whether the benefits of doing so would exceed the costs. Further, we identified an example during our review in which IRS needed to use more time-consuming processes to analyze calls because the calls were not available beyond 45 days for use by Contact Analytics. Contact Analytics should allow IRS to better understand the reasons taxpayers call. Because further analysis could demonstrate whether the benefits of storing calls for a longer period currently exceed the costs, we believe this recommendation remains valid. 
As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Chairmen and Ranking Members of other Senate and House committees and subcommittees that have appropriation, authorization, and oversight responsibilities for IRS. We will also send copies to the Commissioner of Internal Revenue, the Secretary of the Treasury, the Chairman of the IRS Oversight Board, and the Director of the Office of Management and Budget. The report also will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions or wish to discuss the material in this report further, please contact me at (202) 512-9110 or at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VII.
Description of legislation's effect on filing season:
- Extended many existing tax deduction provisions by 2 years. Late passage of the bill caused IRS delays in processing some returns.
- One-time refund on the federal income tax return that could be requested by all individuals and entities that paid the telephone excise tax, regardless of whether they had an obligation to file a tax return.
- Mandated that IRS send stimulus payments to over 100 million households based on taxpayers who filed a 2007 return. Taxpayers filing as single generally received $600, and married couples received $1,200. Many parents received an additional $300 for each qualifying child born after December 31, 1990.
- Mortgage Forgiveness Debt Relief Act of 2007 (Pub. L. 110-142): Allowed taxpayers to generally exclude from taxable income forgiven mortgage debt used to buy, build, or substantially improve a principal residence. In 2008, the exclusion was extended (by the Emergency Economic Stabilization Act of 2008, Pub. L. 110-343) to qualifying indebtedness discharged by January 1, 2013.
- Provision of the American Recovery and Reinvestment Act of 2009 (Pub. L. 111-5): Taxable government bonds that are issued with federal subsidies for a portion of the borrowing costs, delivered through (1) nonrefundable tax credits provided to the holders or (2) refundable tax credits paid to state and local government issuers of the bonds.
- First-time homebuyer credit (FTHBC):
  1. Provision of the Housing and Economic Recovery Act of 2008 (Pub. L. 110-289): Provided taxpayers a tax credit equal to 10 percent of the purchase price of a home, up to a maximum of $7,500. Taxpayers must repay the credit over 15 years beginning in the 2011 filing season.
  2. Provision of the American Recovery and Reinvestment Act of 2009 (Pub. L. 111-5): Provided taxpayers a refundable tax credit equal to 10 percent of a home's purchase price, up to $8,000. Taxpayers are still required to repay the credit if the home is resold or ceases to be the taxpayer's primary residence within 3 years.
- Provision of the American Recovery and Reinvestment Act of 2009 (Pub. L. 111-5): Allows eligible small businesses to apply certain losses experienced in 2008 against tax liability incurred in up to 5 previous years.
- Provision of the Economic Stimulus Act of 2008 (Pub. L. 110-185): Allowed taxpayers that did not receive their full stimulus payment in 2007 to receive the unpaid portion of the credit on their 2008 return.
- Provision of the Worker, Homeownership, and Business Assistance Act of 2009 (Pub. L. 111-92): Extended the FTHBC from November 30, 2009, to April 30, 2010. Also allowed certain long-time homeowners purchasing new homes to claim a tax credit of up to $6,500.
- Provision of the American Recovery and Reinvestment Act of 2009 (Pub. L. 111-5): Refundable tax credit providing up to $400 and $800, respectively, to working individuals and married couples filing joint returns.
- Provision of the American Recovery and Reinvestment Act of 2009 (Pub. L. 111-5): Increases the existing percentage of costs that can be claimed and the maximum allowable credit available to homeowners who make certain energy efficient improvements to existing homes through December 2010.
Figure 1 below shows that, for the third consecutive year, the number of returns in the Internal Revenue Service's (IRS) Error Resolution System (ERS) steadily increased. In 2008, IRS corrected many returns due to the economic stimulus package and the telephone excise tax refund. Last year, IRS corrected many returns due to the Recovery Rebate Credit, and IRS said this year's high inventory was due to Making Work Pay (MWP) errors and to using ERS to identify returns on which taxpayers claimed certain credits, including the residential energy credit. Through September 30, IRS had corrected about 7.7 million errors associated with MWP, which represents about one-third of all returns that went to ERS. Approximately 4.6 million of these taxpayers, or 60 percent, did not claim MWP, and IRS computed the credit for them, according to IRS data through October 1. The remainder of these taxpayers made an error calculating the credit. IRS officials took several actions this filing season that they believe helped reduce the ERS inventory, but a key automated tool was not ready when IRS processed most of the returns. Beginning in June 2009, IRS started developing an Integrated Automation Technology tool specifically to correct frequently recurring MWP errors. However, IRS had difficulties developing the tool, and it was not available at all processing sites until June 18, 2010. By that time, IRS had already corrected approximately 5.6 million, or about 75 percent, of all MWP errors. In addition to creating the tool, IRS staff worked overtime, shifted resources among various submission processing functions, and hired and trained additional employees.
IRS also placed a large volume of returns in ERS from taxpayers claiming residential energy credits. However, these returns did not necessarily contain errors. Rather, IRS used ERS to transcribe information to identify the number of taxpayers claiming residential energy credits and the dollar amounts claimed. For the 2011 filing season, IRS officials told us, the Residential Energy Credit will appear by itself on Form 1040, line 52, and the combined credits from Form 8396, Mortgage Interest Credit; Form 8839, Qualified Adoption Expenses; Form 3800, General Business Credit; and Form 8801, Credit for Prior-Year Minimum Tax, will be reported on line 53. IRS officials said the credits from Form 3800 and Form 8801 will be transcribed separately in the Integrated Submission and Remittance Processing system, eliminating the need for ERS to transcribe them. IRS officials said this system is faster and less expensive than ERS, but there was not enough time during the 2010 filing season to program the system for this purpose. To avoid sending as many returns to ERS next filing season, IRS officials told us they are updating forms and other materials using lessons learned from this filing season. As shown in table 8 below, the Internal Revenue Service (IRS) met half of its performance goals, the fewest number of goals met since at least 2005. IRS met the fiscal year 2010 goals for correspondence error rate, deposit error rate, productivity, and refund error rate. The fiscal year goals for four measures were not met: refund interest paid, refund timeliness, deposit timeliness, and efficiency. As noted earlier in the report, IRS officials attribute missing the goals for refund timeliness and refund interest paid to the combined effect of correcting millions of taxpayer errors and conducting targeted pre-refund compliance checks. However, IRS also narrowly missed its goals for deposit timeliness and efficiency.
IRS officials attributed missing the deposit timeliness goal to an increased number of payments, which generally had smaller dollar amounts. IRS officials attributed missing the efficiency measure goal to processing fewer information return Schedule K-1 documents than projected and to using more staff resources to process returns in ERS. During the 2010 filing season, the Internal Revenue Service (IRS) received most of its calls in the period leading up to and including the April 15 filing deadline, with the heaviest volume of calls at the end of January and beginning of February and during the week of the filing deadline (see fig. 2 below). IRS saw relatively fewer busy signals and IRS-initiated disconnects of taxpayers compared with 2008 and 2009. IRS's Level of Service (LOS), the percentage of callers seeking and receiving live assistance, takes into account a number of factors, such as the number of assistor calls answered, informational message calls answered, calculated busy signals, courtesy disconnects, and taxpayer hang-ups after being routed to a response line. According to IRS officials, other telephone call centers wait a certain period of time before counting callers who hang up against their performance measures, whereas IRS counts these hang-ups, referred to as "secondary abandons," immediately. The total number of contacts at walk-in sites (taxpayer assistance centers staffed by Internal Revenue Service (IRS) employees) and volunteer sites (where volunteers prepare tax returns) was about the same in 2010 as in 2009, but lower than in 2008. As IRS expands automated taxpayer services online and over the telephone, the demand for face-to-face service is likely to decline. However, face-to-face services remain an important component of IRS's efforts to serve many taxpayers, as some taxpayers, particularly those with low incomes or limited proficiency in English, still require face-to-face assistance.
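The LOS measure described above combines several call-disposition counts into one percentage. The sketch below is a hypothetical simplification for illustration only: the factor names mirror those listed above, but IRS's actual formula and weighting are not reproduced here, and the input figures are invented.

```python
# Hypothetical simplification of a level-of-service (LOS) style metric.
# This is NOT IRS's actual formula; the treatment of busy signals,
# courtesy disconnects, and secondary abandons here is assumed.

def level_of_service(assistor_calls_answered,
                     informational_calls_answered,
                     busy_signals,
                     courtesy_disconnects,
                     secondary_abandons):
    """Percentage of callers seeking live or automated assistance who received it."""
    served = assistor_calls_answered + informational_calls_answered
    attempted = (served + busy_signals + courtesy_disconnects
                 + secondary_abandons)
    return 100.0 * served / attempted

# Example: 80 of 100 call attempts served yields an LOS of 80.0 percent.
print(round(level_of_service(70, 10, 5, 5, 10), 1))  # 80.0
```

Because IRS counts secondary abandons immediately while other call centers wait before counting them, the same raw call data can yield a lower LOS under IRS's approach than under those centers' measures.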
In addition to the contact named above, Joanna Stamatiades, Assistant Director; Amy Bowser; James Cook; Tom Gilbert; Mark Kehoe; Kirsten Lauber; Blake Luna; Patricia MacWilliams; Sabrina Streagle; Jeff Wojcik; Jennifer Wong; and Benjamin Wories made key contributions to this report.
The Internal Revenue Service's (IRS) filing season is an enormous undertaking that includes processing individual income tax returns, issuing refunds, and responding to taxpayers. GAO was asked to assess IRS's 2010 filing season performance, in relation to its goals and prior years' performance, in processing individual tax returns, answering telephones, and delivering Web and face-to-face services. To conduct the analysis, GAO analyzed data and documents from IRS, interviewed IRS officials, observed IRS operations, and interviewed tax industry experts. IRS dealt with a number of challenges this filing season, including significant tax law changes, such as the Making Work Pay credit, and corresponding changes in taxpayer behavior. IRS balanced its resources across its filing season activities, with improvements in some areas but fluctuations in others. Return processing: Electronic filing, which reduces costs to IRS, increased about 3 percent, to 71 percent of all individual returns. However, IRS experienced delays in issuing millions of refunds, which IRS officials attributed primarily to correcting taxpayer errors associated with the Making Work Pay credit and conducting additional automated checks. Telephone service: Compared to 2009, the percentage of callers seeking live assistance who received it improved in 2010, and the accuracy of answers remained high, at over 90 percent. However, the average wait time increased. Further, IRS's annual goal for providing caller assistance was lower than in any of the preceding 5 years. However, IRS lacks a standard for what constitutes good customer telephone service that could be compared to its annual goals. Such a standard would make the gap between the annual goals and the standard more transparent. IRS is using a tool called Contact Analytics to better understand the reasons why taxpayers call.
However, IRS has not assessed the costs and benefits of storing recorded calls for longer than the current 45-day period for use in Contact Analytics, and GAO identified gaps in the process IRS uses to solicit input on call topics from frontline IRS staff. Such input could be used to identify issues for further research using Contact Analytics. IRS's customer service staff also responds to taxpayer correspondence. IRS received about 20 million pieces of correspondence in 2010, but it does not have a performance measure that addresses the timeliness of its responses to taxpayer correspondence, a key agency objective. Without such a performance measure, IRS managers may have a less informed basis for balancing resources across telephone and correspondence services. Web site: Visits to IRS's Web site increased, and IRS is taking steps to improve content management before introducing a new Web site in 2012. Face-to-face: In 2010, taxpayer visits to IRS's walk-in sites and sites operated by volunteers remained about the same as in 2009. IRS's program to provide refunds on debit cards at certain volunteer sites, targeting taxpayers without bank accounts, received little use in 2010. IRS's evaluation of the program did not include taxpayers or volunteers. By not including these stakeholders, IRS risks not learning the real reasons for low participation. GAO's five recommendations to IRS are to establish a customer service telephone standard, assess the costs and benefits of storing recorded calls beyond 45 days, solicit information on call trends from employees, develop a performance measure for the timeliness of taxpayer correspondence, and involve key stakeholders in its evaluation of its debit card program. IRS disagreed with developing a customer service standard, not wanting to revise its measurement of phone service. However, a standard would allow IRS to communicate to Congress what it believes constitutes good service.
IRS also disagreed with assessing the costs and benefits of storing calls beyond 45 days. GAO's report suggests that further analysis could show whether the benefits of doing so currently exceed the costs. IRS generally agreed with the other three recommendations.
In 1998, about 7 million—or 18 percent—of Medicare's 39 million beneficiaries were enrolled in a managed care plan. About 90 percent of Medicare managed care enrollees belong to one of 307 risk-contract HMOs. These plans are paid a predetermined monthly amount for each Medicare enrollee, regardless of the amount of Medicare-covered services the enrollee uses. The plans are called "risk" HMOs because they assume the financial risk of providing care for the amount Medicare pays. Risk HMOs must provide all services covered by fee-for-service Medicare; in many instances, they provide additional services, such as outpatient prescription drugs and routine physical exams. Generally, plans require enrollees to use only providers that contract with the plan and to follow certain procedures to obtain health care services. For example, most plans require enrollees to obtain prior authorization for care either from their primary care physician or directly from the plan. If enrollees do not follow the procedures, plans may not pay for the services. HCFA performs biennial on-site performance reviews of each health plan's operations, including the appeals process, to evaluate plan compliance with HCFA regulations. HCFA staff review a sample of appeal cases and evaluate whether the plan met Medicare process and timeliness requirements. Results of the performance review are reported in the monitoring report. The report documents whether a plan met all legal and policy requirements and describes any deficiencies and needed corrective actions. In November 1993, a class action lawsuit filed against the Secretary of HHS challenged a number of the policies and practices of the Medicare managed care program. As a result of this lawsuit, HCFA is currently under an injunction and order issued by the federal district court that requires Medicare HMOs to give their enrollees written notices that meet certain criteria.
Specifically, the order required, among other things, that Medicare HMOs (1) issue denial notices within no more than 5 working days of the request for service or payment and at least 1 working day before the reduction or termination of treatment, (2) clearly state the reason for the denial in the notice, (3) expedite appeals when services are urgently needed (within 3 working days of the request), and (4) continue acute care services until a final appeal decision is issued when the beneficiary requests an expedited appeal. Since the 1997 court order, HCFA has required each plan to implement an expedited process for decisions on initial requests for health services and appeals of denied health services. Subsequently, the expedited process was mandated, along with other appeals procedures and beneficiary protections, by the Balanced Budget Act of 1997 (BBA) and further addressed in the Medicare+Choice regulations published in June 1998. A beneficiary may now request an expedited decision if he or she believes that serious adverse health consequences could result from waiting for a decision under the standard process. Medicare beneficiaries enrolled in managed care plans have a multilevel appeals process available if plans refuse to pay for requested services, refuse to provide requested services, or discontinue or reduce services. Beneficiaries generally appeal to their plan first. If the plan upholds the initial denial, the appeal is forwarded to CHDR for external review and resolution. However, a further appeal to an ALJ and the court is possible. Under certain circumstances, a beneficiary or a health care provider may request that a plan expedite its decision on the initial request and any subsequent appeal. The appeals process may begin when a Medicare member asks his or her plan to provide a service, such as skilled nursing care or a referral to a specialist, or to pay for a service already obtained, and is turned down.
In such instances, Medicare requires plans to issue a written notice that states the reason for the denial and explains the beneficiary's appeal rights. A member has 60 days from the date of the denial notice to ask the plan to reconsider its initial decision. The appeal request, which must be in writing, can be addressed to the member's health plan or to the Social Security Administration, which will forward it to the health plan. A member is not required to submit additional information to support or clarify the request. However, health plans must provide their members the opportunity to supply such information. The plan's reconsideration of its initial decision, the internal portion of the appeals process, must conform to certain requirements. Prior to July 27, 1998, a plan had up to 60 days to complete this process; now a plan must reconsider its initial decision within 30 calendar days if the request is for health care services and within 60 calendar days if it is for payment. The plan representative considering the appeal must not have been involved in making the initial decision. To make a reconsidered decision, the plan representative reviews the initial decision and all other evidence submitted by the beneficiary, beneficiary representative, provider, and health plan. If a plan upholds, in whole or in part, its initial denial, it must forward the case to CHDR for external review. HCFA has modified its contract with CHDR, requiring CHDR to be held to the same time standards as the plans for processing appeals. (Prior to the change, CHDR had 30 days to consider the case, make its ruling, and inform the beneficiary of its decision.) If CHDR upholds the plan's denial, the beneficiary can request an additional appeal before an ALJ, provided the services in question cost at least $100. A beneficiary may ask that the Social Security Departmental Review Board review a denied ALJ appeal.
If the board declines to review the ALJ decision or denies the appeal and the amount of the services in question is greater than $1,000, the beneficiary may request a hearing in U.S. District Court. A beneficiary who loses an appeal is responsible for the cost of any disputed health care services that he or she obtained. Figure 1 shows the Medicare appeals process, step by step. Since August 28, 1997, HCFA has required managed care plans to establish and maintain an expedited process covering both initial decisions and internal appeals. Medicare beneficiaries can request expedited decisions when they believe that waiting the standard time for an initial decision, or for an appeal of the initial decision, could seriously jeopardize their health or life. If a beneficiary makes the request, the plan determines whether the expedited process is warranted. If a physician makes the request on behalf of a beneficiary or concurs with the beneficiary's request, the plan must expedite its decision. Generally, health plans must make the expedited decision within 72 hours following the request. An expedited decision that is adverse to the beneficiary must be forwarded to CHDR within 24 hours. CHDR is required to process the expedited cases within 72 hours. Figure 2 provides the time intervals for major events in the process. HMOs that responded to our survey reported receiving approximately 9 appeals annually per 1,000 Medicare members. However, this number may understate beneficiaries' dissatisfaction with HMOs' initial decisions. First, dissatisfied beneficiaries may disenroll and switch to another plan or to fee-for-service instead of appealing. Second, beneficiaries may be unfamiliar with their appeal rights or the appeals process. Plans may not always issue the required notices or may omit an explanation of beneficiaries' appeal rights. In other cases, beneficiaries may not appeal because the notices list nonspecific reasons for the denial.
The number of annual appeals per 1,000 Medicare beneficiaries varied among HMOs and may be rising. The 242 Medicare HMOs that responded to our recent survey reported an average of about 9 appeals annually per 1,000 beneficiaries between January 1996 and May 1998 (see table 1). Generally, plans overturned nearly three-quarters of the requested appeals. Those not overturned were submitted to CHDR for further review and consideration. Between August 28, 1997, and December 31, 1997, plans expedited 861 appeals. During the first 5 months of 1998, plans expedited 1,548 appeals. The number of annual appeals per 1,000 Medicare beneficiaries among HMOs ranged from 0 to 90. Over half of the plans reported between 1 and 10 appeals per 1,000 beneficiaries. A number of HMOs reported no appeals for each study year: 17 percent in 1996, 13 percent in 1997, and 9 percent in 1998. Nearly all of these HMOs (87 percent) had low Medicare enrollment. There was no similar pattern for plans with the highest appeal rates; they were spread nearly evenly across all plan sizes. The appeal rate may be rising. Plans reported just over 8 appeals per 1,000 beneficiaries in 1996 and 1997, but annualized data from the first 5 months of 1998 indicated more than 10 appeals per 1,000 beneficiaries. Aggregate appeals data may indicate potential problems with a plan’s appeals process, but additional information is needed to assess whether a plan adequately performs this function. A relatively low appeal rate may be the result of a plan’s low denial rate or members who are unaware of their appeal rights. Conversely, a plan that denies many requests or that actively educates members about their rights may experience a relatively high appeal rate. Consequently, appeals data should be considered in conjunction with other factors, such as the rates at which CHDR overturns plans’ appeal decisions and HCFA’s observations of plans’ appeals process. 
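The rate arithmetic behind these figures is straightforward. A small sketch follows; the annualization method shown (scaling 5 months of data to 12) is our assumption about how the 1998 figures were derived:

```python
def appeals_per_1000(appeals, enrollment):
    """Appeal rate expressed per 1,000 enrolled beneficiaries."""
    return appeals / enrollment * 1000

def annualize(count, months_observed):
    """Scale a partial-year count to a full-year estimate."""
    return count * 12 / months_observed

# A hypothetical plan with 45 appeals in January through May and
# 10,000 Medicare members has an annualized rate of about 10.8 per 1,000.
rate = appeals_per_1000(annualize(45, 5), 10_000)
```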
The number of appeals may understate beneficiaries’ dissatisfaction with their HMO’s initial decision if some disenroll instead of appealing. Currently, beneficiaries may disenroll and switch to another plan or Medicare fee-for-service at the end of any month. As we have previously reported, many Medicare HMOs experience high disenrollment rates. The extent to which beneficiaries choose to disenroll rather than appeal is unknown. It is clear, however, that disenrollees report less satisfaction with the care they received from their HMOs than enrollees. According to a survey conducted by HHS’ OIG, disenrollees were much more likely than enrollees to say that their primary HMO doctor failed to provide Medicare-covered services. The survey showed that 12 percent of the disenrollees said that their doctors failed to provide covered services, whereas only 3 percent of enrollees made such an assertion. If some beneficiaries leave their plans instead of appealing adverse decisions, the number of appeals may rise as BBA’s lock-in provisions take effect. Beginning January 1, 2002, beneficiaries will generally be able to change their enrollment decision only once each year outside the annual open enrollment period. In 2002, this change must occur within the first 6 months of the year. In subsequent years, the change must occur within the first 3 months. After the disenrollment period ends (3 or 6 months), beneficiaries will be locked into their selected plans for the remainder of the year. Studies by HHS’ OIG and by the Medicare Rights Center (MRC) confirm the views of several advocacy group representatives that beneficiaries are confused about the Medicare appeals process. HHS’ OIG reported in March 1998 that 27 percent of Medicare HMO enrollees and 35 percent of disenrollees surveyed were uninformed about their appeal rights—rates similar to those found by the Inspector General in 1993. The results of an analysis conducted by MRC are consistent with the OIG’s findings.
MRC reported that 40 percent of the 179 beneficiaries who called the center between August 27, 1997, and February 28, 1998, were confused about their appeal rights. According to MRC officials, HMO physicians and customer service staff sometimes compounded beneficiaries’ confusion. For example, MRC handled several cases where HMO customer service representatives allegedly gave out misleading, incorrect, or no information on beneficiaries’ Medicare appeal rights. Representatives of other advocacy groups reported similar experiences and said that they believe many beneficiaries have difficulty understanding the appeals process. Beneficiaries are supposed to be informed of their appeal rights when they receive a written notice from their plan denying a service or payment.These notices are required to state that the beneficiary has the right to appeal if he or she believes the plan’s initial determination is incorrect. The notices must also tell the beneficiary where and when the appeal must be filed. However, HCFA, OIG, and our own analysis of CHDR appeal cases found numerous instances of incomplete or missing denial notices. HCFA monitoring reviews indicate that some denial notices were not issued and others failed to mention beneficiaries’ appeal rights. In 1997, HCFA performed 90 monitoring visits to health plans. About 13 percent of the plans reviewed were cited for failing to issue denial notices. Nearly one-quarter of the 90 plans were cited for issuing denial notices that did not adequately explain beneficiaries’ appeal rights. Two studies by HHS’ OIG provide additional evidence that beneficiaries are not always informed of their appeal rights. In one study, the OIG found that in 39 out of 144 appeal cases there was no evidence that the beneficiaries had been sent the plans’ initial decisions explaining their appeal rights. In another study, the OIG surveyed beneficiaries who were enrolled or had recently disenrolled from a managed care plan. 
According to the results of a survey, 41 respondents (about 10 percent) said that their health plan had denied requested services. Of these, 34 (83 percent) said that they had not received the required notice explaining the denial and their appeal rights. Similar deficiencies were found in the appeal cases reviewed at CHDR. Of the 108 CHDR appeal cases reviewed, 5 contained denial notices that failed to inform the beneficiary of his or her appeal rights. Another 32 cases sent to CHDR by the plans lacked the denial notices completely. HCFA requires that denial notices clearly state the specific basis for denial. HCFA officials said that vaguely worded denial notices hinder enrollees’ efforts to construct compelling counterarguments for their appeals. Also, vague notices may hinder beneficiaries from appealing because they may be uncertain as to whether they are entitled to the requested services. Most notices we reviewed contained general, rather than specific, reasons for the denial. In 53 of the 74 CHDR cases that contained the required denial notices, the notices simply said that the beneficiary did not meet the coverage requirements or contained some other generic reason. It is unclear whether beneficiaries who receive denial notices with nonspecific reasons are less likely to submit written support for their position compared to beneficiaries who receive more detailed notices. Beneficiaries had submitted written support in only 14 of the CHDR appeal cases. One of the more specific notices, for example, told the beneficiary: “you required skilled rehabilitation services—P.T. eval. for mobility + gait, eval. for ADL’s, speech eval. for swallowing—from 2/11/98 and these services are no longer needed on a daily basis.” The case file indicated that while the beneficiary was making progress in his therapy programs, his condition had stabilized and further daily skilled services were no longer indicated. The physical therapy notes indicated that he had reached his maximum potential in therapy.
He had progressed to minimum assistance for bed mobility, moderate assistance with transfers, and was ambulating to 100 feet with a walker. The speech therapist noted that his speech was much improved by 2/18/98 and that his private caregiver had been instructed on safe swallowing procedures and would continue with feeding responsibilities. Representatives from several advocacy groups told us that in cases brought to their attention, the denial notices were often general and did not clearly explain why the beneficiary would not receive or continue to receive a specific service. In August 1997, MRC established a hotline for HMO appeals and analyzed all calls it received during the first 6-month period (179). MRC concluded that the explanations found in most plans’ denial notices were unhelpful because of their generality—for example, the services were “not medically necessary.” HCFA regulations state that whenever plans discontinue services, they must issue timely denial notices to beneficiaries. HCFA, however, does not specify how much advance notice is required, and we found that many plans do not issue denial notices within time frames that many would reasonably consider “timely.” Although beneficiaries may appeal denied services upon receiving notice, those who receive little advance notice may not be able to continue to receive services because of their potential financial liability. If the beneficiary appeals and loses, he or she is responsible for the cost associated with services received after the date specified in the denial notice. The potential financial burden can be substantial, especially if the denial involves SNF services. In three of the four plans we visited, the general practice was to issue denial notices the day before services were discontinued.
We reviewed a number of SNF discharge notices at three HMOs and often found that the notices were mailed (usually by certified or express mail) to the beneficiary’s home instead of being delivered to the facility where the beneficiary resided. In some cases, it appeared that the beneficiary or his or her representative received the notice a few days after the beneficiary had been discharged. Ten of the 25 CHDR cases we reviewed also involved a beneficiary or his or her representative receiving a discharge notice after the beneficiary was discharged from the SNF. The fourth plan we visited issued SNF discharge notices 3 days prior to the discharge date. This lead time helped ensure that the beneficiary received the notice before the discharge. It also allowed more time for the beneficiary to file an expedited appeal and receive a decision from the plan. Consequently, beneficiaries in this plan who appeal and lose are less exposed to SNF costs incurred during the appeals process. Officials in three plans indicated that when a beneficiary is being considered for discharge, a nurse or discharge planner probably would have discussed the issue with the beneficiary well in advance of the discharge. Even when a beneficiary knows a discharge is imminent, however, he or she cannot appeal until a denial notice is officially issued. Officials from the plans we visited told us that, in almost every instance, the decision to discharge a beneficiary from a SNF is made several days before the actual discharge date. Officials from all the plans agreed that, in most instances, such notices could be issued several days prior to the discharge date so that beneficiaries who wished to appeal could receive an expedited appeal decision before the planned discharge date. HCFA’s biennial monitoring of plans’ appeals process focuses on timeliness and administrative issues, but we found several important weaknesses in the agency’s monitoring procedures. 
For example, HCFA’s sampling of cases to determine whether beneficiaries are appropriately informed of their appeal rights likely misses beneficiaries who were not informed. HCFA’s monitoring also generally excludes the operations of HMO provider groups that may be responsible for making denial decisions and for issuing the required notices. HCFA officials believe that the agency can improve in many of these areas, and in commenting on a draft of this report, HCFA said that it has begun to address these weaknesses. However, to date, HCFA has made little use of the results of its HMO performance reviews to develop overall national trends and improve the agency’s oversight function. To determine whether plans informed beneficiaries of their appeal rights, HCFA’s monitoring protocol requires agency staff to review a sample of appeal cases. HCFA staff check these case files to determine whether each contains a copy of the required denial notice. However, it seems reasonable to assume that beneficiaries who appeal denials are more likely to have been informed of their rights than beneficiaries who do not appeal. Yet HCFA does not check cases where services or payment for services were denied and not appealed. HCFA might get a better indication of whether beneficiaries were told of their rights if agency staff examined a sample of denial notices from cases that were not appealed. Some health plans delegate the responsibility for deciding whether to expedite initial decisions, issuing denial notices, and other operating tasks to medical provider groups. For example, one plan we visited had delegated the responsibility of issuing service and payment denial notices, including paying claims, to approximately 250 provider groups with which it contracted. A plan official stated that his plan has never reviewed service denials and does not know how many services its provider groups have denied. 
The plan has, however, recently developed a monitoring protocol to review service denials and intends to implement it soon. According to several HMO officials, this practice is common in California and is increasing in other parts of the country. Officials also said that HMOs typically exercise little or no oversight over provider groups’ operations and have difficulty ensuring that groups adequately perform the delegated tasks. For example, according to an official from another HMO, provider groups on the West Coast expect plans to grant them the authority to issue denial notices because they are at financial risk for the services they provide. To contract with these groups, his plan must delegate that authority even though the practice is not desirable from his HMO’s perspective. He said that provider groups often do not send the plans copies of issued denial notices, although the plans request them. The official estimated that his plan receives only about 30 percent of the denial notices issued by their provider groups. He added that his plan does not review the notices it does receive. Moreover, according to a HCFA official, HCFA does not generally monitor HMO provider groups. Because provider groups may not submit requested information to HMOs and HCFA does not normally monitor provider groups directly, it is likely that no one reviews many of the initial decisions—including expedited decisions—made by these groups. A 1998 study done for HCFA noted that the delegation of authority to provider groups is problematic because health plans do not exercise sufficient control over the delegated functions. The report recommended that HCFA pay closer attention to this issue. Although HCFA has provided plans with general guidance, such as model language for denial notices, it has not produced specific guidelines to ensure consistent implementation of the expedited appeals process. 
Further, without clear guidelines on what should be expedited, HCFA has no way of determining whether plans are expediting initial decisions and appeals appropriately. HCFA has not produced criteria or examples for HMOs to follow when deciding whether the standard appeal time frames could seriously jeopardize a beneficiary’s health or life. In the absence of such criteria, Medicare HMOs have wide latitude to determine whether a beneficiary’s request for an initial decision or appeal should be expedited. Receiving no specific guidance from HCFA, several California HMO and provider industry representatives formed a work group and developed clinical criteria for expedited initial decisions and appeals. In January 1998, the HCFA region responsible for Arizona, California, and Nevada provided the work group’s criteria to all Medicare HMOs in those three states. HCFA officials said they are not aware of similar efforts in other regions. We found, however, that at least one Florida HMO had incorporated much of the California work group’s criteria into its own procedures—possibly because the HMO also operated in California. Without better guidance from HCFA, some cases that should be expedited may not be. In our review of cases sent to CHDR, we examined 42 appeals involving denied services that HMOs had not expedited. CHDR reviewers determined that seven (17 percent) of these cases should have been expedited. (CHDR expedited these cases for its own review process.) Staff from HCFA’s central and regional offices told us that the agency has made little use of its monitoring reports as an overall program management tool. Each report documents the results of HCFA’s biennial performance review of a plan and summarizes its compliance with Medicare regulations. Aggregating the findings from the individual monitoring reports could help HCFA monitor the relative performance of plans, identify variations among regions, and study national trends.
However, when we requested all of the 1997 monitoring reports, no one at HCFA’s headquarters had a complete set. We were told that we would have to request them from each region. Shortly after we requested the reports from the regions, the Health Plan Purchasing and Administration Group in HCFA’s central office began collecting from the regional offices all 1996 and 1997 monitoring reports. According to HCFA officials, agency staff are now analyzing the information in the reports. HCFA is planning to develop a health plan management system that will provide information to central and regional office staff and will aid plan and program oversight. The system will include information on appeals. HCFA had expected to complete the data design phase by now but has fallen behind schedule. According to the project manager, the system will not be operational until late 1999 or early 2000. The need for both HCFA and Medicare beneficiaries to have information on HMO appeals is well recognized. In 1996, and again in 1998, HHS’ OIG recommended that HCFA require managed care plans to report data on appeals, such as the number of cases, the number resolved internally and externally, issues involved, and the time needed to resolve cases. Also, in implementing its expedited process, HCFA is requiring plans to report data on expedited appeals. Further, BBA requires plans to disclose information on the number and the disposition of appeals to interested Medicare beneficiaries. On February 10, 1999, HCFA issued an operational policy letter that establishes the guidance for managed care plans to follow in collecting appeals data and making that information available to Medicare beneficiaries. Plans will report the number of appeals per 1,000 Medicare beneficiaries. Each plan’s rate will be based on its contract market. Plans will begin collecting and maintaining appeals data on April 1, 1999. Data collection periods will be based on a rolling 12-month period.
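The rolling collection period can be sketched as successive 6-month segments, each report combining a segment with the one before it. The dates follow the policy letter; the list and helper names are illustrative assumptions:

```python
from datetime import date

# Successive 6-month collection segments, starting April 1, 1999.
SEGMENTS = [
    (date(1999, 4, 1), date(1999, 9, 30)),
    (date(1999, 10, 1), date(2000, 3, 31)),
    (date(2000, 4, 1), date(2000, 9, 30)),
]

def rolling_window(segment_index):
    """12-month window: the given segment plus the prior 6 months of data."""
    if segment_index == 0:
        # The first report (due January 1, 2000) covers only 6 months.
        return SEGMENTS[0]
    start = SEGMENTS[segment_index - 1][0]
    end = SEGMENTS[segment_index][1]
    return (start, end)
```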
(The prior 6 months of data are added to the next 6 months of data in order to come up with a 12-month data collection period.) The first 6-month period will begin April 1, 1999, and end September 30, 1999. Plans will report results from the first 6-month period on January 1, 2000. HCFA, however, has not provided guidance on the type of appeals data plans should collect and report to HCFA. According to officials in HCFA’s central office, the agency has formed a work group—consisting of plan representatives, advocacy representatives, and program officials—to develop appeals data requirements. HCFA expects to finalize these requirements later this year. Meanwhile, some HMOs may be waiting to receive HCFA’s guidelines before they implement systems to track their appeals data. Although all the plans that responded to our survey reported the total number of appeals upheld and overturned, only about two-thirds were able to break down their appeals into more specific service categories, such as nursing home care and emergency room use. Medicare beneficiaries have access to a multilevel appeals process that allows them to challenge HMO decisions to deny services or payment for services. Relatively few beneficiaries—about 9 out of every 1,000 managed care enrollees—appeal each year. Some beneficiaries may not appeal, however, because they are unaware of their appeal rights or confused about the process. Evidence from a variety of sources—HCFA monitoring reports, studies by HHS’ OIG, and our review of cases at plans and at CHDR—indicates that plans do not always inform beneficiaries of their appeal rights as required. In some cases, denial notices cite nonspecific reasons for the denial, making it more difficult for beneficiaries to challenge their plan’s decision. In other cases, beneficiaries may be unnecessarily exposed to substantial health care costs because notices are not issued in a timely fashion.
Furthermore, the agency has not issued specific guidance as to the types of cases plans should expedite. HCFA reviews plans’ implementation of the appeals process, but its monitoring protocol exhibits several weaknesses. For example, HCFA does not know whether provider groups have satisfactorily implemented the required appeals process because it exercises little oversight over provider group operations. The type of cases HCFA samples to determine whether beneficiaries were informed of their appeal rights likely systematically misses beneficiaries who were not informed. Further, the agency has not provided plans guidance on the types of appeals data they should collect and report to HCFA. HCFA agrees that it needs to strengthen its oversight of health plans’ appeals process and noted that the agency has several initiatives under way. To help ensure that the appeals process provides adequate protection to Medicare beneficiaries, the HCFA Administrator should take the following actions: Provide more explicit denial notice instructions to plans. Denial notices should explain the coverage criteria and state the specific reason or reasons why the beneficiary did not meet the criteria. Set specific timeliness standards for certain types of denial notices, such as discontinued SNF care services, to allow beneficiaries reasonable time to obtain an expedited appeal decision. Develop criteria for plans to use in determining when initial decisions and appeals should be expedited. To improve the agency’s monitoring of the appeals process, the HCFA Administrator should take the following actions: Require each plan to collect sufficient information from its provider groups so that HCFA staff can, during the course of a normal biennial performance review, determine whether the plan and its provider groups satisfactorily implemented the required appeals process. 
Require agency staff conducting performance reviews to sample a number of denied cases that were not appealed to determine whether beneficiaries were informed of their appeal rights. Use the data the agency collects during plan performance reviews to assess the relative performance of plans, and develop strategies for better plan monitoring and program management. To ensure that appeals data are available to HCFA and Medicare beneficiaries, the Administrator should develop requirements for the type and format of appeals data plans must collect and make available. HCFA agreed with our finding that its oversight of health plans’ appeals process needs to be strengthened and generally agreed with our recommendations. (See app. II for HCFA’s written comments regarding our recommendations.) The agency outlined several initiatives it has recently undertaken to better protect beneficiary rights. Some of these initiatives may be implemented shortly; others are in the early planning stage. HCFA expressed concern, however, about our recommendation that the agency develop criteria to help plans determine when initial and appeal decisions should be expedited. HCFA said that a further refinement of the current general criteria might inadvertently exclude unspecified standards. HCFA said that it would explore possible options regarding the criteria, but that it would proceed cautiously to avoid unanticipated problems. We disagree with the premise that further refinement of the criteria would inadvertently limit beneficiary access to expedited initial and appeal decisions. As noted in this report, specific clinical criteria have been developed and used by plans in at least one HCFA region. HCFA could develop specific criteria, to be implemented nationwide, that are understood to be an elaboration of the current general criteria and not a replacement for them. In addition, HCFA provided several technical comments, which we incorporated as appropriate. 
As agreed with your office, unless you publicly announce the contents earlier, we plan no further distribution of this report until 1 day from the date of this letter. At that time, we will send copies to the Honorable Donna Shalala, Secretary of HHS; the Honorable Nancy-Ann Min DeParle, Administrator of HCFA; and interested congressional committees and members. We will also make copies available to others on request. Please contact me at (202) 512-7119 or James Cosgrove, Assistant Director, at (202) 512-7029 if you or your staff have any further questions. This report was prepared by Cam Zola, Richard Neuman, and Beverly Ross. To obtain information on plan-level appeals handled by HMOs during 1996, 1997, and the first 5 months of 1998, we surveyed all (307) Medicare HMOs that were active as of May 31, 1998. We obtained responses from 250 plans (81.4 percent). We visited three judgmentally selected HMOs—one in California and two in Florida. We selected these plans based on (1) geographic location, (2) high 1997 disenrollment rates, and (3) high Medicare enrollments. Our visit to one Florida HMO coincided with a monitoring visit by HCFA’s region IV staff. During our visits, we discussed the appeals process with plan officials and reviewed a limited number of cases at three of the locations. The cases included standard appeals and expedited appeals that were upheld and overturned at the plan level within the 6 months prior to our visit. Each case reviewed was discussed with a plan official responsible for the plan’s appeals process. In addition, we made a site visit to an HMO in Maryland during a HCFA monitoring visit. Our visit to the Maryland HMO was limited to overseeing the monitoring team’s review of appeal cases and several discussions with plan officials. We visited the two HCFA regional offices (region IX in San Francisco, California, and region IV in Atlanta, Georgia) responsible for the three plans we visited. 
We discussed the appeals process and the monitoring effort with appropriate officials in each region. We also spoke with regional personnel in HCFA’s region X about the appeals process and HCFA’s monitoring effort and results. In addition, we obtained from HCFA a summary spreadsheet that showed all the monitoring reports completed in 1997 and summarized plan compliance with Medicare requirements. From this list, we selected and reviewed the monitoring reports of plans that indicated deficiencies in the categories related to the appeals process, denial notices, or both. With assistance from CHDR, we randomly selected and reviewed 108 appeal cases that had been adjudicated by CHDR in 1998 and had not been sent to storage as of October 1998. We developed a data collection instrument and specific criteria for evaluating the case file information. A CHDR analyst used this instrument and criteria to review each case and record the review results. We reviewed the results of over half of the 108 cases to ensure the data were recorded accurately and met our evaluation criteria. We discussed HCFA’s appeal policy and practice with HCFA officials and representatives from five advocacy groups representing Medicare beneficiaries in health plans. In addition, we reviewed a number of HHS OIG reports covering several aspects of Medicare’s appeals process in HMOs. Also, we reviewed a report done by the Medicare Rights Center that discussed systemwide problems with Medicare HMOs. Our Office of General Counsel reviewed the results of a class action lawsuit and the resulting appeal by HCFA before the 9th U.S. Circuit Court of Appeals.
Pursuant to a congressional request, GAO reviewed Medicare's managed care beneficiary appeals process, focusing on: (1) the appeals process available to beneficiaries when managed care plans deny care or payment for services; (2) beneficiaries' use of the appeals process and the extent to which they are informed of their appeal rights; and (3) the Health Care Financing Administration's (HCFA) oversight of this process. GAO noted that: (1) Medicare beneficiaries enrolled in managed care plans have the right to appeal if their plans refuse to provide health services or pay for services already obtained; (2) upon receipt of the written denial notice, the beneficiary may appeal and the health plan must reconsider its initial decision; (3) if the plan's reconsidered decision is not fully favorable to the beneficiary, the case is automatically sent to the Center for Health Dispute Resolution (CHDR) to review the decision; (4) CHDR may overturn or uphold the plan's decision; (5) a beneficiary is entitled to an expedited decision from the plan, both on the initial request and on appeal, if the standard time for making the decision could endanger his or her health or life; (6) a beneficiary who is dissatisfied with CHDR's decision may appeal further to an administrative law judge and then to a U.S. 
District Court provided certain requirements are met; (7) health maintenance organizations (HMO) reported an average of approximately 9 appeals per 1,000 Medicare members annually between January 1996 and May 1998; (8) HMOs reversed their original denial in about 75 percent of appeal cases; (9) the number of appeals may understate beneficiaries' dissatisfaction with the initial decisions by HMOs for two reasons: (a) some beneficiaries may disenroll and switch to another plan or fee-for-service Medicare instead of appealing; and (b) some beneficiaries may not appeal because they are unfamiliar with their appeal rights or the appeals process; (10) GAO found that beneficiaries frequently received incomplete notices that failed to explain their appeal rights, and some beneficiaries did not receive any notices; (11) notices often do not state a specific reason for the denial; as a result, beneficiaries may be uncertain as to whether they are entitled to the requested services and thus discouraged from appealing; (12) GAO also found that beneficiaries may receive little advance notice when plans decide to discontinue paying for services, which places these beneficiaries at financial risk should they decide to continue treatment during their appeal; (13) beneficiaries who lose their appeals are responsible for the treatment costs incurred after the date specified in the denial notice; (14) the agency does not determine whether beneficiaries who were denied services but did not appeal were informed of their appeal rights, nor does it monitor provider groups that contract with health plans; and (15) HCFA has not used available information to develop more effective plan oversight strategies.
|
Among other things, title VII programs support the education and training of primary care providers, such as primary care physicians, physician assistants, general dentists, pediatric dentists, and allied health practitioners. HRSA includes in its definition of primary care services health services related to family medicine, internal medicine, preventive medicine, osteopathic general practice, and general pediatrics that are furnished by physicians or other types of health professionals. Also, HRSA recognizes diagnostic services, preventive services (including immunizations and preventive dental care), and emergency medical services as primary care. Thus, in some cases, nonprimary care practitioners provide primary care services to populations that they serve. Title VII programs support a wide variety of activities related to this broad topic. For example, they provide grants to institutions that train health professionals; offer direct assistance to students in the form of scholarships, loans, or repayment of educational loans; and provide funding for health workforce analyses, such as estimates of supply and demand. In recent years, title VII programs have focused on three specific areas of need—improving the distribution of health professionals in underserved areas such as rural and inner-city communities, increasing representation of minorities and individuals from disadvantaged backgrounds in health professions, and increasing the number of primary care providers. For example, the Scholarships for Disadvantaged Students Program awards grants to health professions schools to provide scholarships to full-time, financially needy students from disadvantaged backgrounds, many of whom are minorities. After completing medical school, medical students enter a multiyear training program called residency, during which they complete their formal education as physicians. 
Because medical students must select their area of practice specialty as part of the process of being matched into a residency program, the number of physician residents participating in primary care residency programs is used as an indication of the likely future supply of primary care physicians. Physician residents receive most of their training in teaching hospitals, which are hospitals that operate one or more graduate medical education programs. Completion of a physician residency program can take from 3 to 7 years after graduation from medical school, depending on the specialty or subspecialty chosen by the physician. Most primary care specialties require a 3-year residency program. In some cases, primary care physicians may choose to pursue additional residency training and become subspecialists—such as a pediatrician who specializes in cardiology. In this case, the physician would no longer be considered a primary care physician but rather a cardiologist. According to the AAPA, most physician assistant programs require applicants to have some college education. The average physician assistant program takes about 26 months, with classroom education followed by clinical rotations in internal medicine, family medicine, surgery, pediatrics, obstetrics and gynecology, emergency medicine, and geriatric medicine. Physician assistants practice in primary care medicine, including family medicine, internal medicine, pediatrics, and obstetrics and gynecology, as well as in surgical specialties. After completing a bachelor’s degree in nursing, a nurse may become a nurse practitioner by completing a master’s degree in nursing. According to the AACN, full-time master’s programs are generally 18 to 24 months in duration and include both classroom and clinical work. Nurse practitioner programs generally include areas of specialization such as acute care, adult health, child health, emergency care, geriatric care, neonatal health, occupational health, and oncology. 
Dentists typically complete 3 to 4 years of undergraduate university education, followed by 4 years of professional education in dental school. The 4 years of dental school are organized into 2 years of basic science and pre-clinical instruction followed by 2 years of clinical instruction. Unlike physicians, dentists are not subject to a universal residency training requirement. However, a substantial proportion of dentists—about 65 percent of dental school graduates—enroll in dental specialty or general dentistry residency programs. In recent years, the supply of primary care professionals increased, with the supply of nonphysicians increasing faster than that of physicians. The numbers of primary care professionals in training programs also increased. Little information was available on trends during this period regarding minorities in training or actively practicing in primary care specialties. In recent years, the number of primary care professionals nationwide grew faster than the population, resulting in an increased supply of primary care professionals on a per capita basis (expressed per 100,000 people). Table 1 shows that over roughly the last decade, per capita supply of primary care physicians—internists, pediatricians, general practice physicians, and family practitioners—rose an average of about 1 percent per year, while the per capita supply of nonphysician primary care professionals—physician assistants and nurse practitioners—rose faster, at an average of about 4 percent and 9 percent per year, respectively. Nurse practitioners accounted for most of the increase in nonphysician primary care professionals. The per capita supply of primary care dentists—general dentists and pediatric dentists—remained relatively unchanged. Growth in the per capita supply of primary care physicians outpaced growth in the per capita supply of physician specialists by 7 percentage points in the 1995-2005 period. (See table 2.) 
By definition, aggregate supply figures do not show the distribution of primary care professionals across geographic areas. Compared with metropolitan areas, nonmetropolitan areas, which are more rural and less populated, have substantially fewer primary care physicians per 100,000 people. In 2005, there were 93 primary care physicians per 100,000 people in metropolitan areas, compared with 55 primary care physicians per 100,000 people in nonmetropolitan areas. Data were not available on the distribution of physician assistants, nurse practitioners, or dentists providing primary care in metropolitan and nonmetropolitan areas. For two groups of primary care professionals—physicians and nurse practitioners—the number in primary care training has increased in recent years. Over the same period, the number of primary care training programs for physicians declined, while programs for nurse practitioners increased. Comparable information for physician assistants and dentists was not available. From 1995 to 2006, the number of physician residents in primary care training programs increased 6 percent, as shown in table 3. Over this same period, primary care residency programs declined, from 1,184 programs to 1,145 programs. The composition of primary care physician residents changed from 1995 to 2006. A decline in the number of allopathic U.S. medical school graduates (known as USMD) selecting primary care residencies was more than offset by increases in the numbers of international medical graduates (IMG) and doctor of osteopathy (DO) graduates entering primary care residencies. Specifically, from 1995 to 2006, USMD graduates in primary care residencies dropped by 1,655 physicians, while the number of IMGs and DOs in primary care residencies rose by 2,540 and 1,415 physicians respectively. (See table 4.) From 1994 to 2005, the number of primary care training programs for nurse practitioners and the number of graduates from these programs grew substantially. 
During this period, the number of nurse practitioner training programs increased 61 percent, from 213 to 342 programs. The number of primary care graduates from these programs increased 157 percent, from 1,944 to 5,000. Little information was available regarding participation of minority health professionals in primary care training programs or with active practices in primary care. Physicians were the only type of primary care professional for whom we found minority representation information specific to primary care. For physician assistants, nurse practitioners, and dentists, we found information on minority representation that was not specific to primary care but may be a reasonable substitute for information on the proportions of minorities in primary care. For physicians, we used the proportion of minority primary care residents as a proxy measure for minorities in the active primary care physician workforce. From 1995 to 2006, the proportion of primary care residents who were African-American increased from 5.1 percent to 6.3 percent; the proportion of primary care residents who were Hispanic increased from 5.8 percent to 7.6 percent. Data on American Indian/Alaska Natives were not collected in 1995, so this group could not be compared over time; in 2006, 0.2 percent of primary care residents were identified as American Indian/Alaska Natives. Minority representation among each of the other health professional types—overall, not by specialty—increased slightly. AAPA data show that from 1995 to 2007, minority representation among physician assistants increased from 7.8 percent to 8.4 percent. AANP data show that from 2003 to 2005, minority representation among nurse practitioners increased from 8.8 percent to 10.0 percent. ADEA data show that from 2000 to 2005, the proportion of African-Americans among graduating dental students rose slightly from 4.2 percent to 4.4 percent, while the proportion of Hispanics among graduating dental students increased from 4.9 percent to 5.9 percent. 
The proportion of Native Americans/Alaska Natives among graduating dental students grew from 0.6 percent to 0.9 percent. Other demographic characteristics of the primary care workforce have also changed in recent years. In two professions traditionally dominated by men—medicine and dentistry—the proportion of women has grown or is growing. Between 1995 and 2006, the proportion of primary care residents who were women rose from 41 percent to 51 percent. Growth of women in dentistry is more recent. In 2005, 19 percent of professionally active dentists were women, while almost 45 percent of graduating dental school students were women. Accurately projecting the future supply of primary care health professionals is difficult, particularly over long time horizons, as illustrated by substantial swings in physician workforce projections during the past several decades. Few projections have focused on the likely supply of primary care physicians or nonphysician primary care professionals. Over a 50-year period, government and industry groups’ projections of physician shortfalls gave way to projections of surpluses, and now the pendulum has swung back to projections of shortfalls. From the 1950s through the early 1970s, concerns about physician shortages prompted the federal and state governments to implement measures designed to increase physician supply. By the 1980s and through the 1990s, however, the Graduate Medical Education National Advisory Committee (GMENAC), the Council on Graduate Medical Education (COGME), and HRSA’s Bureau of Health Professions were forecasting a national surplus of physicians. In large part, the projections made in the 1980s and 1990s were based on assumptions that managed care plans—with an emphasis on preventive care and reliance on primary care gatekeepers exercising tight control over access to specialists—would continue to grow as the typical health care delivery model. 
In fact, managed care did not become as dominant as predicted and, in recent years, certain researchers, such as Cooper, have begun to forecast physician shortages. COGME’s most recent report, issued in January 2005, also projects a likely shortage of physicians in the coming years and, in June of 2006, the AAMC called for an expansion of U.S. medical schools and federally supported residency training positions. Other researchers have concluded that there are enough practicing physicians and physicians in the pipeline to meet current and future demand if properly deployed. Despite interest in the future of the health care workforce, few projections directly address the supply of primary care professionals. Recent physician workforce projections focus instead on the supply of physicians from all specialties combined. Specifically, the projections recently released by COGME point to likely shortages in total physician supply but do not include projections specific to primary care physicians. Similarly, ADA’s and AAPA’s projections of the future supply of dentists and physician assistants do not address primary care practitioners separately from providers of specialty care. AANP has not developed projections of future supply of nurse practitioners. We identified two sources—an October 2006 report by HRSA and a September 2006 report by AAFP—that offer projections of primary care supply and demand, but both are limited to physicians. HRSA’s projections indicate that the supply of primary care physicians will be sufficient to meet anticipated demand through about 2018, but may fall short of the number needed in 2020. AAFP projected that the number of family practitioners in 2020 could fall short of the number needed, depending on growth in family medicine residency programs. 
HRSA based its workforce supply projections on the size and demographics of the current physician workforce, expected number of new entrants, and rate of attrition due to retirement, death, and disability. Using these factors, HRSA calculated two estimates of future workforce supply. One projected the expected number of primary care physicians, while the other projected the expected supply of primary care physicians expressed in full-time equivalent (FTE) units. According to HRSA, the latter projection, because it adjusts for physicians who work part-time, is more accurate. The agency projected future need for primary care professionals based largely on expected changes in U.S. demographics, trends in health insurance coverage, and patterns of utilization. HRSA predicted that the supply of primary care physicians will grow at about the same rate as demand until about 2018, at which time demand will grow faster than supply. Specifically, HRSA projected that by 2020, the nationwide supply of primary care physicians expressed in FTEs will be 271,440, compared with a need for 337,400 primary care physicians. HRSA notes that this projection, based on a national model, masks the geographic variation in physician supply. For example, the agency estimates that as many as 7,000 additional primary care physicians are currently needed in rural and inner-city areas and does not expect that physician supply will improve in these underserved areas. In a separate projection, AAFP reviewed the number of family practitioners in the United States. AAFP’s projections of future supply were based on the number of active family practice physicians in the workforce and the number of completed family practice residencies in both allopathic and osteopathic medical schools. AAFP’s projections of need relied on utilization rates adjusted for mortality and socioeconomic factors. 
Specifically, AAFP estimated that 139,531 family physicians would be needed by 2020, representing about 42 family physicians per 100,000 people in the United States. To meet this physician-to-population ratio, AAFP estimated that family practice residency programs in the aggregate would need to expand by 822 residents per year. Both reports noted the difficulties inherent in making predictions about future physician workforce supply and demand. Essentially, they noted that projections based on historical data are not necessarily predictive of future trends. They cite as examples unforeseen innovations in medical technology and the multiple factors influencing physician specialty choice. Additionally, HRSA noted that projection models of supply and demand incorporate any inefficiencies that may be present in the current health care system. Health professional workforce projections that are mostly silent on the future supply of and demand for primary care services are symptomatic of an ongoing decline in the nation’s financial support for primary care medicine. Ample research in recent years concludes that the nation’s overreliance on specialty care services at the expense of primary care leads to a health care system that is less efficient. At the same time, research shows that preventive care, care coordination for the chronically ill, and continuity of care—all hallmarks of primary care medicine—can achieve better health outcomes and cost savings. Despite these findings, the nation’s current financing mechanisms result in an atomized and uncoordinated system of care that rewards expensive procedure-based services while undervaluing primary care services. However, some physician organizations—seeking to reemphasize primary care services—are proposing a new model of delivery. Fee-for-service, the predominant method of paying physicians in the U.S., encourages growth in specialty services. 
Under this structure, in which physicians receive a fee for each service provided, a financial incentive exists to provide as many services as possible, with little accountability for quality or outcomes. Because of technological innovation and improvements over time in performing procedures, specialist physicians are able to increase the volume of services they provide, thereby increasing revenue. In contrast, primary care physicians, whose principal services are patient office visits, are not similarly able to increase the volume of their services without reducing the time spent with patients, thereby compromising quality. The conventional pricing of physician services also disadvantages primary care physicians. Most health care payers, including Medicare—the nation’s largest payer—use a method for reimbursing physician services that is resource-based, resulting in higher fees for procedure-based services than for office-visit “evaluation and management” services. To illustrate, in one metropolitan area, Boston, Massachusetts, Medicare’s fee for a 25- to 30-minute office visit for an established patient with a complex medical condition is $103.42; in contrast, Medicare’s fee for a diagnostic colonoscopy—a procedural service of similar duration—is $449.44. Several findings on the benefits of primary care medicine raise concerns about the prudence of a health care payment system that undervalues primary care services. For example: Patients of primary care physicians are more likely than other patients to receive preventive services, to receive better management of chronic illness, and to be satisfied with their care. Areas with more specialists, or higher specialist-to-population ratios, have no advantages in meeting population health needs and may have ill effects when specialist care is unnecessary. 
States with more primary care physicians per capita have better health outcomes—as measured by total and disease-specific mortality rates and life expectancy—than states with fewer primary care physicians (even after adjusting for other factors such as age and income). States with a higher generalist-to-population ratio have lower per-beneficiary Medicare expenditures and higher scores on 24 common performance measures than states with fewer generalist physicians and more specialists per capita. The hospitalization rates for diagnoses that could be addressed in ambulatory care settings are higher in geographic areas where access to primary care physicians is more limited. In recognition of primary care medicine’s value with respect to health care quality and efficiency, some physician organizations are proposing a new model of health care delivery in which primary care plays a central role. The model establishes a “medical home” for patients—in which a single health professional serves as the coordinator for all of a patient’s needed services, including specialty care—and refines payment systems to ensure that the work involved in coordinating a patient’s care is appropriately rewarded. More specifically, the medical home model allows patients to select a clinical setting—usually their primary care provider’s practice—to serve as the central coordinator of their care. The medical home is not designed to serve a “gatekeeper” function, in which patients are required to get authorization for specialty care, but instead seeks to ensure continuity of care and guide patients and their families through the complex process of making decisions about optimal treatments and providers. AAFP has proposed a medical home model designed to provide patients with a basket of acute, chronic, and preventive medical care services that are, among other things, accessible, comprehensive, patient-centered, safe, and scientifically valid. 
It intends for the medical home to rely on technologies, such as electronic medical records, to help coordinate communication, diagnosis, and treatment. Other organizations, including ACP, the American Academy of Pediatrics (AAP), and AOA, have developed or endorsed similar models and have jointly recommended principles to describe the characteristics of the medical home. Proposals for the medical home model include a key modification to conventional physician payment systems—namely, that physicians receive payment for the time spent coordinating care. These care coordination payments could be added to existing fee schedule payments or they could be included in a comprehensive, per-patient monthly fee. Some physician groups have called for increases to the Medicare resource-based fee schedule to account for time spent coordinating care for patients with multiple chronic illnesses. Proponents of the medical home note that it may be desirable to develop payment models that blend fee-for-service payments with per-patient payments to ensure that the system is appropriately reimbursing physicians for primary, specialty, episodic, and acute care. In our view, payment system reforms that address the undervaluing of primary care should not be strictly about raising fees but rather about recalibrating the value of all services, both specialty and primary care. Resource-based payment systems like those of most payers today do not factor in health outcomes or quality metrics; as a consequence, payments for services and their value to the patient are misaligned. Ideally, new payment models would be designed that consider the relative costs and benefits of a health care service in comparison with all others so that methods of paying for health services are consistent with society’s desired goals for health care system quality and efficiency. Mr. Chairman, this concludes my prepared statement. I will be happy to answer any questions that you or Members of the committee may have. 
For information regarding this testimony, please contact A. Bruce Steinwald at 202-512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Jenny Grover, Assistant Director; Sarah Burton; Jessica Farb; Hannah Fein; Martha W. Kelly; and Sarabeth Zemel made key contributions to this statement. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
|
Most of the funding for programs under title VII of the Public Health Service Act goes toward primary care medicine and dentistry training and increasing medical student diversity. Despite a longstanding objective of title VII to increase the total supply of primary care professionals, health care marketplace signals suggest an undervaluing of primary care medicine, creating a concern about the future supply of primary care professionals--physicians, physician assistants, nurse practitioners, and dentists. This concern comes at a time when there is growing recognition that greater use of primary care services and less reliance on specialty services can lead to better health outcomes at lower cost. GAO was asked to focus on (1) recent supply trends for primary care professionals, including information on training and demographic characteristics; (2) projections of future supply for primary care professionals, including the factors underlying these projections; and (3) the influence of the health care system's financing mechanisms on the valuation of primary care services. GAO obtained data from the Health Resources and Services Administration (HRSA) and organizations representing primary care professionals. GAO also reviewed relevant literature and position statements of these organizations. In recent years, the supply of primary care professionals increased, with the supply of nonphysicians increasing faster than that of physicians. The numbers of primary care professionals in training programs also increased. Little information was available on trends during this period regarding minorities in training or actively practicing in primary care specialties. For the future, health professions workforce projections made by government and industry groups have focused on the likely supply of the physician workforce overall, including all specialties. Few projections have focused on the likely supply of primary care physicians or other primary care professionals. 
Health professional workforce projections that are mostly silent on the future supply of and demand for primary care services are symptomatic of an ongoing decline in the nation's financial support for primary care medicine. Ample research in recent years concludes that the nation's overreliance on specialty care services at the expense of primary care leads to a health care system that is less efficient. At the same time, research shows that preventive care, care coordination for the chronically ill, and continuity of care--all hallmarks of primary care medicine--can achieve improved outcomes and cost savings. Conventional payment systems tend to undervalue primary care services relative to specialty services. Some physician organizations are proposing payment system refinements that place a new emphasis on primary care services.
|
The Financial Institutions Reform, Recovery, and Enforcement Act (FIRREA) established RTC in 1989 to contain, manage, and resolve hundreds of failed thrift institutions. The Federal Deposit Insurance Corporation Improvement Act of 1991 clarified or expanded RTC’s and FDIC’s responsibilities for resolving failed thrifts and banks. As of April 25, 1994, RTC was responsible for resolving 743 thrifts. From fiscal year 1990 to May 19, 1994, FDIC was responsible for resolving 465 failed banks. Under the RTC Completion Act, RTC is to cease operating by December 31, 1995. After that date, FDIC will become responsible for (1) resolving the thrifts that fail after June 30, 1995, and (2) completing the disposition of thrift assets remaining in RTC’s inventory. When RTC assumes control of an institution, its Office of Investigations and its Legal Division’s Professional Liability Section work together. They determine which institution officers and directors, if any, are responsible for, or culpable in, the losses that resulted in the institution’s failure. Once these administrative determinations are made, RTC generally files a professional liability suit or submits a criminal referral naming the culpable individuals. FDIC operates in much the same manner. Between its inception and June 30, 1994, RTC filed 245 professional liability suits against directors and officers, of which 181 are pending, and made 1,134 criminal referrals to the Department of Justice. FDIC filed approximately 400 professional liability suits against directors and officers between January 1990 and July 1994, of which 120 are pending. In addition, FDIC filed 998 criminal referrals in that same period. RTC and FDIC are vulnerable to fraud, abuse, and mismanagement because they do not systematically screen job applicants or current employees to determine if they have been found culpable in the losses that caused federally insured institutions to fail. 
The Corporations have no systems designed to screen prospective employees to determine if the Corporations have found them culpable in the failures. In addition, after the Corporations make culpability determinations, they have no systems to verify whether they or a conservatorship institution currently employ the individuals deemed culpable. The Corporations’ databases concerning professional liability suits and criminal referrals are not designed to be used for employment screening and contain a number of shortcomings for performing this function. First, the databases are incomplete: Names of culpable individuals against whom legal action was not cost-effective are not always included in Corporation databases, and criminal referral listings to the Department of Justice are incomplete. Second, configuration of the Corporation databases constrains their usefulness in locating names of culpable individuals. Third, one RTC database incorrectly lists individuals as culpable when they are not. These shortcomings become critical when verifying whether prospective or current employees in vital positions have been found culpable for institution failures. The Corporations’ databases of culpable individuals are incomplete, continuing to leave them vulnerable to fraud and mismanagement. For example, the Corporations do not always include in their databases the names of those individuals found culpable for institution failures if RTC and FDIC have determined that legal action would not be cost-effective. In addition, FDIC criminal referral listings are incomplete. When we compared all FDIC criminal referrals filed with Justice for one failed bank against an FDIC listing of all criminal referrals filed in the past 5 years, we found over half missing from the listing. FDIC had filed criminal referrals against eight individuals from the failed bank between November 1990 and February 1991, yet only three of the names were on its criminal referral listing. 
As a result, if the Corporations had performed employment screening, they would have been unaware of the culpability of the five not on the listing. FDIC acknowledged that the criminal referral listing, which came from a database not designed for employment screening, is not appropriate for this purpose due to the number of errors it contains. A similar situation exists with the RTC databases. RTC acknowledged that because each RTC region is responsible for entering its own data, some regions may not have entered all of the professional liability suit and criminal referral data necessary to make this database an integral part of an accurate screening system. This inconsistent reporting occurs because each region has discretion about how much information is placed in the system. The Corporations’ databases lack systematic means to identify culpable individuals, and the organization of FDIC’s database of professional liability suits constrains its usefulness. Both situations increase Corporation vulnerability. None of the Corporations’ databases pertaining to professional liability suits or criminal referrals systematically provide social security numbers or other identifiers needed to make positive identifications when names are similar. To perform our verification when social security numbers were not available, we requested individuals’ home addresses and dates of birth from the Corporations. These were sometimes available only on documents maintained by outside legal counsel. Further, the organization of FDIC’s professional liability suit database, containing over 1,500 individuals’ names, is such that it constrains the usefulness of the database for employment screening. It does not have the capability to retrieve defendants by name. Instead, this database, which FDIC officials acknowledged would be difficult to use for employment screening, identifies failed institutions resolved by FDIC, each followed by a list of defendants’ names. 
Therefore, we were unable to determine whether the Corporations employed any of these individuals in vital positions. RTC had never filed suit against two other RTC employees we found on its professional liability suit listing. RTC included these two names on the listing because it routinely enters the names of all directors and officers from a failed institution in the database from which it drew the listing. RTC told us that this database was not designed, nor ever intended to be used, for employment screening. Neither RTC nor FDIC has established a systematic means for communicating determinations of culpability to managers in a timely way. Managers become aware of culpability determinations made against current employees through happenstance. However, early notification would allow managers to evaluate and determine, in a timely manner, whether to restrict the employees’ duties and responsibilities or to evaluate their employment status. Such action would help limit the Corporations’ vulnerability to fraud, abuse, and mismanagement. For example, RTC hired an individual in January 1990 as a credit specialist to manage a conservatorship institution with $1.9 billion in assets. RTC filed a professional liability suit against him in December 1992. Being unaware of the suit, a senior manager at the regional RTC office offered the employee a new position (and a paid move) 4 days after RTC had filed the professional liability suit. In fact, the manager did not learn of the suit for 11 days after making the offer (15 days after RTC had filed suit). After learning of the suit on December 28, 1992, the manager acted to limit RTC’s vulnerability: He restricted the individual from all RTC offices and placed him on a fully paid administrative-leave status for approximately 1 month until his employment was to expire because the local RTC office was closing. 
Previously, the credit specialist’s duties included (1) providing guidance, direction, and control to the institution through the documentation and inventory of assets; (2) directing the sale of owned real estate and loans; and (3) assuming, in the absence of the managing agent in charge of the RTC conservatorship institution, the responsibilities and delegated authorities of the managing agent. In another instance, RTC filed a professional liability suit against the Vice President for Loan Workout (a conservatorship employee) at an RTC conservatorship institution having over $200 million in assets. Following the filing of the suit in December 1992, RTC sent a letter to the individual’s residence requesting that he resign. RTC did not inform the managing agent in charge of the RTC conservatorship institution where the vice president worked of either the suit or the letter. According to the managing agent, he was unaware that RTC had determined the vice president to be responsible for the failure of an institution until the employee showed him a copy of the RTC letter. The managing agent thus had no opportunity to consider revising the vice president’s duties and limiting RTC’s vulnerability. A Virginia newspaper published an article about RTC’s employment of the vice president. The article, entitled “Thrifts: From One Failure to Another,” stated that when RTC filed suit against the individual, it did not have to look far for the defendant as he was working a block away at another failed thrift being managed by RTC. In January 1994, RTC issued a policy covering civil service employees determined to be culpable in the losses of failed institutions. 
The policy states in part that “Conduct which does not clearly fall into one of [FIRREA’s] prohibited categories is less clear-cut and requires further analysis.” The policy also states, “An example that requires careful consideration is when an RTC employee is named, or about to be named, in a suit filed by the RTC Professional Liability Section.” While this policy is critical for managers in making employment decisions, it does not describe what “further analysis” is required. Thus, the policy provides neither the direction nor the clear guidance that managers need when deciding (1) to hire an individual previously found culpable for the failure of an institution or (2) what personnel action, if any, should be taken against current employees whom the Corporations have found culpable. Similarly, FDIC policy does not provide clear guidance for managers to make necessary employment decisions regarding culpability determinations. FDIC standards of ethical conduct, which adhere to those of the executive branch, state only that FDIC employees cannot be indebted to a failed institution through any extension of credit. Several federal agencies—including RTC and FDIC—coordinate their efforts to pursue claims, prosecutions, and enforcement actions to maximize recoveries at the lowest possible cost. Unfortunately, this effort does not extend to the systematic sharing of information between RTC and FDIC regarding the directors and officers each had found culpable in the failures of federally insured institutions. Responsible officials at RTC and FDIC acknowledged that the Corporations do not customarily share such information. Therefore, they are aware only of those employees against whom their own organization has brought action. 
After our request, RTC and FDIC identified 12 individuals, one of whom was a conservatorship institution employee, whom the Corporations had previously determined to be culpable and who were holding vital positions in FDIC, RTC, or conservatorship institutions. We identified two additional employees occupying vital positions whom the Corporations had found culpable. These discoveries illustrate the Corporations’ vulnerability. In response to our request for a list of such employees, RTC’s Office of Investigations identified three individuals who held vital positions although they had been determined to be culpable. Those individuals had been found responsible for losses and had been made subjects of professional liability suits or criminal referrals to the Department of Justice since 1989. FDIC’s investigative office identified nine FDIC employees holding vital positions who had been subjects of FDIC professional liability suits or criminal referrals during the 5 preceding years. The 12 RTC/FDIC-identified culpable employees held the vital positions of RTC managing agent, credit specialist, and operations specialist; loan workout officer at RTC conservatorship institutions; or FDIC credit specialist. We did not include in our investigation two other persons whom FDIC identified because they did not have asset disposition responsibilities. These two, however, held positions of trust, as FDIC employed them as investigators to ascertain individuals’ liability for institution failures. Concentrating on employees with asset disposition responsibilities, we obtained a list of 1,132 Corporation employees from RTC and FDIC. We compared this list with RTC’s database of subjects of professional liability suits and RTC’s and FDIC’s databases of subjects of criminal referrals to the Department of Justice. We found two additional employees deemed culpable in vital positions—an FDIC credit specialist and an RTC supervisory operations specialist. 
FDIC hired the credit specialist in February 1993, although RTC had previously filed a criminal referral that named the individual in 1990. While FDIC was aware that this person had resigned from RTC in May 1992, FDIC’s hiring and supervisory managers were unaware of the criminal referral until we asked that they check with RTC. The FDIC employee’s responsibilities included the analysis of proposed workouts; settlements; and budgets of large, complex assets, worth up to several million dollars. RTC hired the supervisory operations specialist in February 1990. In April 1991, RTC made a criminal referral to the Department of Justice, naming this individual. RTC overlooked this individual in the list of culpable employees it provided to us although the criminal referral clearly stated that he was an RTC employee. This oversight further emphasizes the need for both an effective, systematic screening process and adequate databases to support it. The Corporations’ vulnerability may not be limited to the 14 RTC, FDIC, or conservatorship employees whom we identified during our investigation. For example, we were limited in our ability to use FDIC’s database of professional liability suits because the database could not retrieve individuals by name. Thus, we were unable to determine whether the Corporations employed in vital positions any of the individuals whose names were contained in the database. Additionally, RTC maintained no database of employees of institutions that were in conservatorship and could not identify those having previous culpability determinations. Thus, RTC’s vulnerability to fraud, abuse, or mismanagement is increased. In March 1990, the U.S. Secret Service offered to do a vulnerability assessment targeting RTC employees and contractors. RTC did not accept the offer and performed its own assessment, which was published in November 1990.
The Secret Service proposal included a review of the application process for new RTC employees and its contractors, as well as a review of RTC databases and criminal referral processes. RTC’s vulnerability assessment included a review of adherence to RTC’s ethical standards under FIRREA. RTC found that its employment efforts are particularly vulnerable “in view of the need to rapidly employ staff . . . because RTC must ensure that prospective employees meet its own and other ethical requirements.” However, RTC’s ethical standards for its employees do not specifically address the employment of individuals against whom an administrative determination of culpability has been made, and as a result RTC did not identify its vulnerability to such individuals. We cannot be certain that a Secret Service assessment would have identified the lack of RTC controls for determining which employees and applicants were culpable; however, because RTC’s own focus was on existing ethical standards, its assessment did not identify this lack of controls as a vulnerability. RTC does not consider conservatorship employees to be either RTC or contract employees and therefore does not apply the FIRREA employment standards regarding competence, expertise, and integrity to them. We believe that conservatorship employees should be subject to the FIRREA standards because the standards apply to individuals who perform the functions and activities of RTC. “Any individual who, pursuant to a contract or any other arrangement, performs functions or activities of the [RTC], under the direct supervision of an officer or employee of the [RTC], shall be deemed to be an employee of the [RTC] for the purposes of title 18, United States Code and [FIRREA].” (12 U.S.C.A. § 1441a(n)(1) (West Supp. 1993)). We found that many conservatorship employees perform critical functions of RTC, such as loan workout. They also report directly to an RTC employee, such as the managing agent or credit specialist.
Nonetheless, RTC maintains that conservatorship employees are not subject to FIRREA’s employment restrictions. RTC and FDIC do not have the systematic means to always know when they are about to employ, or are already employing, someone whom either Corporation has found to be culpable in the losses that caused the failure of a federally insured financial institution. Their inability to make informed decisions concerning the hiring or duties of such individuals increases the Corporations’ vulnerability to fraud, abuse, or mismanagement. Further, while RTC will transfer its assets and operations to FDIC when RTC closes on December 31, 1995, we believe it is important for both Corporations to address the findings of this report now as they prepare for the transition period. Despite the dwindling number of institutions presently in conservatorship, the vulnerability of any failed thrift to culpable individuals will remain a concern as long as conservatorship is an available means of resolution. Addressing the findings of this report will not only help protect the assets of the institutions under the Corporations’ purview but will also help provide FDIC with assurance that it is aware of any RTC and conservatorship/receivership institution employees who have been found culpable for the losses of failed institutions.
We recommend that the Acting Chairman of FDIC and the Deputy and Acting Chief Executive Officer of RTC direct their agencies to perform employment screening before hiring individuals and routinely do so for their current employees, using reliable databases of individuals found responsible for institution failures; develop reliable databases that will effectively identify individuals found culpable in institution failures; share information systematically, enabling each to be aware of those individuals the other has found culpable in the failure of federally insured institutions; and ensure that personnel guidance is clear and appropriate regarding employees and prospective employees for whom the Corporations have made culpability determinations. We also recommend that RTC’s Deputy and Acting Chief Executive Officer ensure that conservatorship employees who occupy positions with responsibilities for asset disposition—such as those performing loan workout functions—be included in the employment screening process. We sent a draft of this report to FDIC and RTC for comment. In their written comments dated September 14 and 16, 1994, respectively, FDIC and RTC agreed with our report and acknowledged that the issues raised are significant. According to FDIC’s Acting Chief Operating Officer and Deputy to the Chairman, FDIC will continue to review the draft report’s conclusions, providing us with the preliminary results of that review, and, in coordination with RTC, develop steps to correct the weaknesses identified. RTC’s Chief Financial Officer indicated that RTC will pursue our recommendations to the fullest extent possible and proposed specific initiatives to address each recommendation. RTC’s initiatives are responsive to our findings and recommendations. If fully and effectively implemented, these initiatives could resolve the issues identified. (See app. II and III for complete agency comments.) 
As agreed with your office, we plan no further distribution of this report until 30 days after the date of the letter, unless you publicly announce its contents earlier. At that time, we will send copies to the Secretary of the Treasury, the Acting Chairman of FDIC, the Deputy and Acting Chief Executive Officer of RTC, and other interested parties. We will make copies of this report available to others upon request. If you have questions concerning our investigative findings, please contact Robert Hast, Assistant Director for Investigations, of GAO’s New York Regional Office at (212) 264-0730. A list of major contributors is included in appendix IV. We performed our investigation between October 1992 and December 1993. We reviewed and considered relevant laws, regulations, and policies and interviewed responsible management officials at RTC and FDIC headquarters. From RTC, we requested the names of any RTC employee who had been the subject of a professional liability suit or criminal referral for responsibility in a failure of any federally insured institution since the inception of RTC. We requested the same information from FDIC regarding any FDIC employee in the past 5 years. To verify whether the Corporations had provided us the names of all such individuals, we requested personnel information as well as professional liability suit and criminal referral information. We matched RTC and FDIC employees with responsibilities concerning assets of failed institutions against both organizations’ criminal referral listings and against RTC’s professional liability suit listing. From RTC, we obtained listings from databases of (1) federal employees with asset disposition responsibilities and (2) individuals against whom RTC had filed professional liability suits or criminal referral actions for responsibility in the failures of federally insured institutions.
From FDIC, we obtained listings from databases of (1) employees at FDIC consolidated offices who have responsibilities over assets of FDIC-controlled failed institutions and (2) individuals against whom FDIC had in the past 5 years filed professional liability suit or criminal referral actions for responsibility in the failures of federally insured institutions. James M. Lager, Assistant General Counsel Glenn G. Wolcott, Assistant General Counsel
Pursuant to a congressional request, GAO reviewed whether the Resolution Trust Corporation (RTC) and the Federal Deposit Insurance Corporation (FDIC) have sufficient systems to assist hiring and management officials in culpability determinations. GAO found that: (1) RTC and FDIC are vulnerable to fraud, abuse, or mismanagement because they do not systematically screen employees or applicants found culpable for bank failures; (2) RTC and FDIC databases do not include the names of all culpable directors and officers of failed institutions; (3) RTC and FDIC databases do not include personal identifiers of culpable individuals; (4) the RTC database includes the names of individuals against whom no suits have been filed; (5) RTC and FDIC have no systematic means for promptly notifying managers and supervisors of employees against whom a culpability determination has been made; (6) RTC and FDIC do not share information regarding individuals found culpable for institution failures; (7) certain employees of RTC, FDIC, and conservatorship institutions hold vital positions although they have previously been determined to be culpable; (8) RTC and FDIC cannot ensure the identification of all culpable employees; (9) RTC is vulnerable to conservatorship employees who have been found culpable, since it does not maintain a database of non-federal conservatorship employees; and (10) RTC and FDIC should address these vulnerability issues to ensure the proper disposition of failed institutions' assets and to protect insurance funds' and taxpayers' interests.
Since the terrorist attacks of September 11, 2001, the United States has undertaken military operations worldwide to fight terrorism as part of GWOT. To pay for the incremental costs of GWOT, the Congress has provided over $165 billion in appropriations for military operations through fiscal year 2004. This amount includes funds for operations in Afghanistan and more recently Iraq, homeland security, and other global counterterrorism military and intelligence operations. Figure 1 shows the location of DOD’s major operations in support of GWOT during fiscal year 2004. Most of the costs associated with GWOT fall into two accounts—operation and maintenance and military personnel. Operation and maintenance account funds obligated in support of GWOT are used for a variety of purposes, including transportation of personnel, goods, and equipment; unit operating support costs; and intelligence, communications, and logistics support. Military personnel funds obligated in support of GWOT cover the pay and allowances of mobilized reservists as well as special payments or allowances for all qualifying military personnel, both active and reserve, such as Imminent Danger Pay and Family Separation Allowance. Our analysis of the military services’ reported obligations for the first seven months of fiscal year 2004 and the services’ forecasts as of June 2004 of full fiscal year costs suggests the services’ combined operation and maintenance costs could exceed supplemental GWOT funding by about $13 billion. At the same time, the services’ forecasts suggest the Army and the Air Force will have some surplus military personnel funds while the Navy and the Marine Corps will have a small shortfall. Using the surplus military personnel funds to offset some of the operation and maintenance shortfalls could result in a net shortfall of about $12.3 billion. Table 1 shows the services’ forecasts. 
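The offset described above can be checked with simple arithmetic, using the services' June 2004 forecast figures reported in this section (in billions of dollars; variable names are illustrative only):

```python
# Combined operation and maintenance shortfall forecast (about $13 billion)
om_shortfall = 13.0

# Military personnel forecasts by service (positive = surplus),
# as reported in this section
personnel = {
    "Army": 0.800,
    "Air Force": 0.112,
    "Navy": -0.061,
    "Marine Corps": -0.107,
}

# Net military personnel surplus of about $0.744 billion
personnel_surplus = sum(personnel.values())
net_shortfall = om_shortfall - personnel_surplus

print(f"Net military personnel surplus: ${personnel_surplus:.3f} billion")
print(f"Net shortfall after offset: ${net_shortfall:.1f} billion")
```

Applying the roughly $0.7 billion net personnel surplus against the $13 billion operation and maintenance gap yields the "about $12.3 billion" net shortfall cited above.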
Our analysis suggests that the services will require additional funding to satisfy operation and maintenance expenses and in some cases military personnel expenses. The services, in concert with the Office of the Under Secretary of Defense (Comptroller), plan to take a variety of actions to cover forecasted shortfalls. To support GWOT in fiscal year 2004, the Congress appropriated $65 billion to DOD in an emergency supplemental appropriation. Of this $65 billion, about $63 billion was appropriated directly to the services’ and other defense agencies’ appropriations accounts and about $2 billion to a transfer fund called the Iraqi Freedom Fund. Of the $63 billion, $5.3 billion was designated for classified programs, which we did not review, $3.07 billion was for procurement, $500 million was for military construction, $624 million was for the working capital funds, and $672 million was for other appropriations. Table 2 shows the operation and maintenance and the military personnel appropriations provided to the services and DOD-wide agencies for GWOT, exclusive of the amounts designated for classified programs. We recognize that estimating the costs of ongoing military operations is difficult because operational requirements can differ substantially from what was assumed in developing budget estimates. For example, according to Office of the Under Secretary of Defense (Comptroller) representatives, in developing the President’s fiscal year 2004 GWOT budget request, DOD assumed, among other things, that the number of military personnel would decline from a wartime high of 130,000 to 99,000 by the end of fiscal year 2004, that it would be able to make greater use of sealift as opposed to more expensive airlift, and that military units replacing those involved in the invasion of Iraq would have fewer armored vehicles than the units they replaced. Conditions in Iraq have prevented much of this from happening, and costs have not decreased as anticipated. 
This includes having higher numbers of troops in Iraq, with DOD stating that it plans to keep troop levels at 138,000 for the foreseeable future. Our analysis of reported obligations for the first seven months of fiscal year 2004 and the military services’ forecasts as of June 2004 of their likely costs for GWOT through the end of fiscal year 2004 suggest that anticipated costs will exceed the supplemental funding provided for GWOT. Our comparison of the percentage of fiscal year 2004 GWOT operation and maintenance funds obligated by the military services for the first seven months of the fiscal year (i.e., October 1, 2003, through April 2004, the latest month for which obligation data are available for all the services) showed that all of the military services had obligated more than 60 percent of their available appropriations, including any funds transferred from the Iraqi Freedom Fund. As shown in figure 2, the percentage of available operation and maintenance funds that were obligated as of April 30, 2004, ranged from a low of nearly 61 percent for the Air Force to a high of over 77 percent for the Marine Corps. Therefore, we believe that if funds continue to be obligated at the current rate or higher, fiscal year 2004 operation and maintenance obligations will exceed the funds available for obligation. Each of the military services completed a midyear budget review in May-June 2004 for the Office of the Secretary of Defense, including a forecast of its GWOT requirements, and some services have since updated those forecasts. Each service concluded it did not have sufficient funding for GWOT operation and maintenance, while two of the services—the Navy and the Marine Corps—forecasted a shortfall in GWOT military personnel funds (see next section). A summary of each service’s review follows. The Army forecasts a funding shortfall of about $10.2 billion.
The shortfall includes $5.3 billion for support to deployed Army forces; $3.4 billion for a variety of activities, including $2 billion for refurbishing equipment used in Operation Iraqi Freedom and $753 million in contractor logistics support; $800 million for equipment maintenance; and $650 million for contract guards and garrison support units in the United States. The two largest components of the shortfall in support for deployed forces are the Logistics Civil Augmentation Program (LOGCAP) contract that provides a wide array of support services such as feeding and housing soldiers and the costs, such as those for spare parts, associated with the higher operating tempo of U.S. forces. LOGCAP costs have grown significantly as contractors replaced soldiers providing complex support functions. Higher than initially planned troop levels in Iraq as of spring 2004 (130,000 instead of 99,000) and an increase in troop levels in Afghanistan (from about 14,000 to about 21,000) have increased all aspects of troop support. Additional factors driving costs are (1) the decision to change the force mix of units serving in Iraq from one-third armored and two-thirds wheeled vehicles to one-half of each, which will lead to higher operational tempo and maintenance costs because armored vehicles are more expensive to operate, and (2) an increased use of airlift to move critical equipment to Iraq this past spring. The Air Force forecasts a shortfall of about $1.5 billion, which includes the costs for increased operating tempo, such as more flying hours than anticipated, higher transportation costs to move Air Force units and equipment, body armor for airmen in combat areas, night vision gear, and operation of surveillance equipment. In addition, the Office of the Under Secretary of Defense (Comptroller) directed the Air Force to fund $116 million for its own support provided under the LOGCAP contract, which would otherwise have been paid by the Army.
The Navy forecasts a shortfall of $931 million, which includes the costs for higher steaming and flying hours. For example, Navy representatives told us that they had planned for 11 additional steaming days per quarter but are actually at 18 additional days per quarter, which accounts for $231 million of the shortfall. In addition, 4,000 Navy personnel who were not planned to deploy are in Iraq and Kuwait, which increases operational costs, as do the transportation costs of moving additional Navy and Marine Corps personnel (who were also not expected to deploy) to GWOT operations. The Marine Corps forecasts a shortfall of $446 million. This shortfall reflects the cost of having 26,500 Marines in Iraq and the additional deployment of two Marine Expeditionary Units in support of GWOT operations when initially Marine Corps forces were expected to decrease their presence in fiscal year 2004. It also includes the cost of refurbishing equipment that had been in Iraq in fiscal year 2003. Furthermore, the equipment and maintenance costs associated with adding extra armor on vehicles are much higher than anticipated because of the extra wear and tear the vehicles are experiencing due to the extra weight. The shortfall does not include an additional $140 million needed for aircraft force protection that is being funded by the Navy. In addition to the military services, the Congress also provided GWOT operation and maintenance appropriations for the defense agencies, including the Special Operations Command and the Defense Logistics Agency. These agencies are also reflected in the midyear budget review. Our analysis of their collective reported obligations through April 2004, which represents almost 58 percent of the fiscal year, indicates that the defense agencies are also obligating their funds rapidly, with about 66 percent of their appropriated operation and maintenance funds obligated through April 2004.
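The obligation rates above imply a simple straight-line projection: if obligations continue at the pace observed through April 2004 (about 58 percent of the fiscal year elapsed), the projected full-year rate is the obligated share divided by the elapsed share. The percentages come from this section, but the projection method itself is an illustrative assumption, not a model stated in the report.

```python
# October 2003 through April 2004 is 7 of 12 months, about 58 percent
elapsed = 7 / 12

# Operation and maintenance obligation rates cited in this section
obligated = {
    "Air Force O&M": 0.61,        # low end among the services
    "Marine Corps O&M": 0.77,     # high end among the services
    "Defense agencies O&M": 0.66,
}

# A projected rate above 100 percent indicates obligations would
# exceed the funds available under a constant-rate assumption
for account, share in obligated.items():
    projected = share / elapsed
    print(f"{account}: {share:.0%} obligated -> projected {projected:.0%} of appropriation")
```

Under this constant-rate assumption, every account shown projects above 100 percent of its appropriation, consistent with the shortfalls the services forecast.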
The military services have been obligating their funds for military personnel at a rate that nearly mirrors the percentage of the fiscal year that has passed. As figure 3 shows, with seven months of the fiscal year gone, the Army, the Navy, and the Air Force have obligated over one-half of their appropriations and the Marine Corps has obligated almost half, including any funds transferred from the Iraqi Freedom Fund. For example, the Army and the Navy have obligated almost 60 and 59 percent, respectively, of their GWOT appropriation. While three of the four services’ reported obligations for military personnel are about at the expected level for this point in the fiscal year, the military services’ forecasts as of June 2004 predict a surplus in the Army and Air Force accounts and a shortage in the Navy and Marine Corps accounts. Details are as follows. The Army forecasts a surplus of $800 million due to less than expected use of reservists in support of Operation Noble Eagle and savings on costs to move soldiers from one home station to another. The Air Force forecasts a surplus of $112 million due to the deactivation of reservists. The Navy forecasts a shortfall of $61 million resulting from factors including the increase in Family Separation Allowances for personnel in Iraq and Kuwait. The Navy has also activated 1,300 reservists. The Marine Corps forecasts a shortfall of $107 million. There are 26,500 Marines in Iraq, including 4,000-5,000 reservists, plus the deployment of two Marine Expeditionary Units in support of GWOT that were not anticipated when the Marines’ budget estimate was developed. To fund forecasted GWOT shortfalls, the Office of the Under Secretary of Defense (Comptroller) and the military services are planning to take a number of actions. 
These actions include taking steps to reduce costs, transferring funds from the Iraqi Freedom Fund, transferring funds between appropriations accounts, and deferring planned peacetime activities to use those funds to support GWOT. These potential shortfalls could require DOD to move funds between or within appropriation accounts. DOD uses “transfer authority” to shift funds between appropriation accounts, for example, between military personnel and operation and maintenance. Transfer authority is granted by the Congress to DOD usually pursuant to specific provisions in authorization or appropriation acts. In the fiscal year 2004 National Defense Appropriation Act, DOD was given general transfer authority to shift $2.1 billion between appropriations accounts, as well as other transfer authorities that are more specific in nature. DOD was also given transfer authority in the fiscal year 2004 Emergency Supplemental Appropriation Act to shift $3 billion of the funds appropriated in that act. In both cases, the Secretary of Defense must determine that this transfer is necessary in the national interest and that it would fund unforeseen and higher priority items than those originally funded, and he must notify the Congress promptly of the transfer. The ability to shift funds within a specific appropriation account, like operation and maintenance, is referred to as “reprogramming.” In general, DOD does not need statutory authority to reprogram funds within an account as long as the funds to be spent would be used for the same general purpose of the appropriation and the reprogramming does not violate any other specific statutory requirements or limitations. For example, DOD could reprogram operation and maintenance funds originally appropriated for training to cover increased fuel costs because both uses meet the general purpose of the operation and maintenance account, as long as the shift does not violate any other specific congressional prohibition or limitation. 
According to a representative in the Office of the Under Secretary of Defense (Comptroller), DOD has sufficient funds within its overall appropriation to cover forecasted GWOT shortfalls. Therefore, DOD does not plan to ask the Congress for additional funding, but instead will cover the shortfall in its fiscal year 2004 GWOT funding by both transferring and reprogramming normal annual appropriation and GWOT funds. However, as explained earlier, there are statutory dollar limits on the amount of funds DOD can transfer and, according to a DOD representative, as of June 18, 2004, DOD had exhausted most of its transfer authority. According to this representative, DOD plans to ask the Congress for an additional $1.1 billion in transfer authority, which would give the department sufficient authority to move funds from one service to another and get funds to the operation and maintenance accounts that have the greatest shortfalls. Also, representatives of most of the services said that they plan to reprogram funds within their appropriations to the extent allowed by law. Finally, DOD plans to transfer the remaining amount in the Iraqi Freedom Fund, which has its own transfer authority, to the Army and Marine Corps operation and maintenance accounts. To cover its forecasted GWOT shortfall, each of the military services has identified a number of steps it plans to take. Some of these steps involve actions the services can take internally, such as seeking to reduce costs and revising spending priorities, or reprogramming, within the same appropriation account, while others involve transferring funds between accounts. The Army, the service with the largest forecasted shortfall in operation and maintenance, is taking a variety of actions to address its forecasted shortfall. 
Actions include emphasizing the need to control costs, reprogramming funds within and transferring funds across accounts, seeking help from the other military services with bills now being paid by the Army, and deferring a total of about $3.4 billion in activities until fiscal year 2005 or beyond, including refurbishment of equipment used in Operation Iraqi Freedom. In a December 2003 message, the Vice Chief of Staff of the Army asked units to control costs and look for alternatives to the LOGCAP contract with the realization that costs were growing rapidly. Army representatives told us that to control costs they have implemented a number of measures, including higher-level review of LOGCAP tasks over $10 million, as well as of other contract actions and equipment and supply purchases; strengthened management controls on new work performed under the LOGCAP contract; and reviews of supply requisitions to identify and cancel duplicate or inactive requisitions after 30 days, as well as a management review of requisitions that have a high dollar value, involve large quantities, or involve pilferable items. The Army will also seek to transfer the previously discussed anticipated $800 million surplus attributable to GWOT in its military personnel appropriation account and to reprogram funds within its Army and Army National Guard military personnel appropriations to cover the forecasted $650 million shortage for contract guards and garrison support units. In addition, the Army is awaiting the Congress' approval of the Office of the Secretary of Defense's request to transfer funds from other service and defense agency accounts, as well as the previously discussed transfer of remaining funds in the Iraqi Freedom Fund. 
The Army has already received almost $1 billion in transfers from the transportation working capital fund, reflecting surpluses in that account, and anticipates that a reduction in transportation rates and changes in usage would produce $265 million in savings. Finally, the Army is seeking to have the other military services pay some bills it is currently paying. These bills include having the Marine Corps pay almost $313 million and the Air Force pay almost $116 million in LOGCAP costs, amounts that reflect the Army's estimate of the cost of LOGCAP services being provided to those services. In total, the Army has identified $6.8 billion in funding sources for its operation and maintenance shortfall and will defer activities accounting for the remaining $3.4 billion to fiscal year 2005 and beyond. The Air Force is taking a variety of actions to reduce or defer spending in its active component operation and maintenance account in order to absorb its forecasted GWOT shortfall. Actions include decreasing peacetime flying hours in the fourth quarter of this fiscal year, reducing depot maintenance, deferring facility sustainment, restoration, and modernization projects, eliminating training events, decreasing contractor logistics support, slowing civilian hiring, and curtailing lower-priority requirements such as travel, supplies, and equipment. The Navy is also taking a variety of actions to cover its forecasted GWOT shortfall. To cover its forecasted operation and maintenance shortfall of $931 million, the Navy, like the Air Force and the Army, plans to reduce or defer spending in its operation and maintenance account, reducing facility sustainment, restoration, and modernization projects by $300 million and non-GWOT flying and steaming hours by $226 million. According to Navy representatives, if the fleet does not want to reduce flying and steaming hours, it can defer depot maintenance instead. 
The Navy received $121 million in transfers from the Office of the Secretary of Defense. The Navy will cover the remaining $284 million shortfall in operation and maintenance and the $61 million shortfall in military personnel through the transfer and reprogramming of funds from investment accounts. The Marine Corps plans to fund both $334 million of its forecasted $446 million operation and maintenance shortfall and its $107 million military personnel shortfall with funds transferred from the Iraqi Freedom Fund and Department of the Navy investment accounts. According to Marine Corps representatives, they also plan to reduce or defer spending in noncritical areas such as facilities improvements or sustainment projects. As discussed earlier, each of the military services expects to take steps to make funds available for GWOT by reducing and deferring planned activities. Actions such as reducing training can have both short- and long-term impacts. In the short term, units train less if flying and steaming hours are reduced. In the long term, for example, Air Force representatives told us that part of the reduction in peacetime flying hours would affect the Air Education and Training Command’s training of new pilots, which would slow new pilot production. While some actions, such as reduced training or travel, cannot be restored, actions that involve deferring planned activities can be restored in future fiscal years to the extent funding is available. As discussed earlier, the Army both plans to defer $3.4 billion in activities until 2005 and beyond and expects to receive transfers of funds from other services and defense agency accounts, which would affect the other services’ spending plans; the Air Force plans to reduce depot maintenance; the Navy plans to reduce facility repair activities; and the Marine Corps plans to seek the transfer of funds from investment accounts. 
We believe that the deferral of these activities will add to the requirements that will need to be funded in fiscal year 2005 and potentially later years and so could result in a “bow wave” effect in future fiscal years. Activities that are deferred also run the risk of costing more in future years. Recent congressional committee actions have signaled the Congress’ intent to require greater accountability regarding the use of GWOT funds. On May 12, 2004, the President submitted a budget amendment for DOD requesting $25 billion for the Iraqi Freedom Fund Contingent Emergency Reserve in fiscal year 2005. The House Committee on Appropriations included provisions in its bill for accountability related to the use of these funds. The committee bill includes numerous reporting requirements, including a new requirement for a comprehensive biannual report to the Congress that provides a detailed and specific accounting of the expenditure of taxpayer funds in Iraq and Afghanistan. In its committee report on the defense appropriations bill, the Senate Committee on Appropriations expressed its disappointment in the responsiveness of DOD in providing reports already required by various laws. The report does not require new reports be provided, but directs DOD to provide meaningful detail to describe the purposes and specific use of funding in all reports submitted to the committee. We have been reporting on the cost of ongoing military operations for more than a decade. In that reporting, we have analyzed DOD’s monthly cost reports detailing the reported obligations of funds in support of the operations. DOD currently prepares a monthly Consolidated DOD Terrorist Response Cost Report that contains reported obligations by operation and within each operation and by appropriation account for the military services and defense agencies. 
Within these accounts, the report provides obligation data in about 50 categories that are defined in chapter 23 of the DOD Financial Management Regulations. However, we have reported for several years, most recently in May 2004, that large amounts of reported obligations for GWOT fall in miscellaneous categories in both the operation and maintenance and the military personnel accounts. For example, in fiscal year 2003, the $43.7 billion in reported operation and maintenance obligations fell into four major categories: civilian personnel, personnel support, operating support, and transportation. As shown in figure 4, the operating support category, which details obligations for such operation-related activities as facilities support, fuel, and spare parts and totaled about $32.1 billion, included about $15.5 billion in miscellaneous categories. This amount was composed of about $7 billion for other supplies and equipment and about $8.4 billion for other services and miscellaneous contracts; together, these miscellaneous amounts equaled about 35 percent of total reported operation and maintenance obligations. Similarly, we reported that within the military personnel account, of $15.6 billion in reported obligations, $3.8 billion, or 24 percent, was in the miscellaneous category of other military personnel. We reported that, in discussions of our analysis with the Office of the Under Secretary of Defense (Comptroller) and the military services, there was recognition of the large amount of reported obligations captured in miscellaneous categories and that the Comptroller's office was considering how best to provide more specific detail in future cost reports. Chapter 23 of the Financial Management Regulations is DOD's guidance on contingency operations cost definition and reporting. 
In our opinion, the categories defined in the guidance provide a uniform framework for capturing obligations, but the miscellaneous categories do not provide the specificity or transparency needed for the Congress and others to understand clearly how funds appropriated for contingency operations are being used, particularly since these categories involve billions of dollars in reported obligations. In our annual reporting on the cost and funding of ongoing military operations, we have recognized that estimating the costs of ongoing military operations is difficult because operational requirements can differ substantially from what was assumed in developing budget estimates. As a result, the actual funding requirement is often more or less than what was initially estimated, and the military services have sometimes used surpluses to fund activities that were not part of the contingency operation. We have found that in some years funding was insufficient for some services while it was sufficient for others and that within a service it was sufficient for one appropriation account but not for another. For example, in June 1996 we reported that the Army and the Navy reported obligations for operation and maintenance that were in excess of their supplemental funding, while the Air Force and the Marine Corps reported obligations that were less than their supplemental funding. In that year, both the Air Force and the Marine Corps used the excess funding for a variety of otherwise unfunded operational needs. In other years, the Congress rescinded excess funding or reduced subsequent year funding based on an expected carryover of funds. Largely because of the security situation in Iraq, the military services are forecasting costs as of June 2004 in excess of their supplemental GWOT funding. DOD is taking a variety of actions to cover these shortfalls. It has also asked the Congress to provide a $25 billion contingent reserve for GWOT in fiscal year 2005. 
To ensure accountability for the use of those funds, the Congress is contemplating requiring periodic reports on the use of such funds. Our past work has shown that current cost reporting includes large amounts of funds that have been reported as obligated in miscellaneous categories and so provides little insight on how those funds have been spent. This may result in reduced transparency and accountability to the Congress and the American people. Our work has also discussed the difficulty of accurately budgeting for annual funding needs and the resulting existence of both funding shortfalls and surpluses, which at times have been spent on noncontingency-related activities. This in turn helps highlight the importance of providing useful information to the Congress for its oversight role. In light of the fact that we have reported for years on the large amounts of reported obligations in the miscellaneous categories of the Consolidated DOD Terrorist Response Cost Reports, we recommend that the Secretary of Defense take the following three actions: (1) review recent Consolidated DOD Terrorist Response Cost Reports to identify the larger groupings of reported obligations within the “other supplies and equipment,” “other services and miscellaneous contracts,” and “other military personnel” cost categories; (2) revise Chapter 23 of the Financial Management Regulations to include these groupings as reporting categories so that the amounts classified in the “other” categories are minimized; and (3) direct the military services to begin reporting obligations using these new cost categories as soon as they are identified. 
To better assess the adequacy of previously provided funding, the Congress may wish to expand its reporting requirements for DOD on the use of GWOT funds to include reports at the half year and the end of the third quarter of each fiscal year that include an assessment of the adequacy of funding for GWOT in that fiscal year, including (a) if funding appears to be insufficient, the Secretary of Defense's plan for covering any shortfall and (b) if funding appears to exceed forecasted costs, the procedures that will be followed to ensure that any excess funds are not used for non-GWOT purposes. DOD did not provide us comments by the date we requested. However, we discussed our analysis with a representative from DOD's Office of the Under Secretary of Defense (Comptroller) and representatives from each military service's budget office. We also discussed our proposed recommendation and matter for congressional consideration. The Office of the Under Secretary of Defense representative stated that he agreed with our proposed recommendation and that the department had been discussing ways to provide more detail in the cost report's miscellaneous categories. This representative and the service representatives also stated that they had no objections to the matter for congressional consideration and, in fact, provide information to the Congress whenever it is requested. DOD also provided technical comments, and we have incorporated them as appropriate. In particular, the Army clarified that it was not deferring the purchase of ceramic body armor, and we agreed to delete references to that issue in the final report. We have also updated the reported obligations to reflect April data. We are sending copies of this report to other interested congressional committees; the Secretary of Defense; the Under Secretary of Defense (Comptroller); and the Director, Office of Management and Budget. Copies of this report will also be made available to others upon request. 
In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions regarding this report, please call me on (757) 552-8100. Principal contributors to this report were Steve Sternlieb, Ann Borseth, John Buehler, and David Mayfield. To assess the adequacy of funding for the Global War on Terrorism (GWOT), we reviewed (1) the President’s fiscal year 2004 budget request for supplemental appropriations, (2) applicable laws appropriating funds for GWOT, and (3) Department of Defense (DOD) reports on the obligation of GWOT funds. We obtained budget forecasts from the military services based on their midyear budget review and subsequent updates through June 2004, including key cost factors of GWOT operations. We also discussed these forecasts with service representatives. We compared the latest available obligation reports against total available appropriated funds. We focused our work on the obligation of funds appropriated for operation and maintenance and military personnel because they represented the large majority of funds obligated in fiscal year 2004 through April 2004. To assess actions planned to address forecasted GWOT funding shortfalls, we reviewed service documents related to the midyear budget review and subsequent updates through June 2004 and discussed with DOD and the military services the actions the services planned to take and their likely impact on current programs. We also reviewed applicable legislation on DOD’s authority to transfer funds. To provide observations on congressional efforts to improve accountability of GWOT funds, we reviewed available material in DOD appropriations and authorization bills for fiscal year 2005, committee press releases, and statements of key leaders to identify proposed actions to improve accountability. We also reviewed our reports related to the cost and funding of ongoing military operations dating back to fiscal year 1994. 
We visited the following locations during our review:
Office of the Under Secretary of Defense (Comptroller), Washington, D.C.
Department of the Army, Headquarters, Washington, D.C.
U.S. Army Forces Command and Headquarters, Third Army, Fort McPherson, Georgia.
Department of the Air Force, Headquarters, Washington, D.C.
Air Force Central Command, Shaw Air Force Base, South Carolina.
Department of the Navy, Headquarters, Washington, D.C.
United States Marine Corps, Headquarters, Washington, D.C.
First Marine Expeditionary Force, Headquarters, Camp Pendleton, California.
We performed our work from January through June 2004 in accordance with generally accepted government auditing standards. The Government Accountability Office, the audit, evaluation, and investigative arm of Congress, exists to support Congress in meeting its constitutional responsibilities and to help improve the performance and accountability of the federal government for the American people. GAO examines the use of public funds; evaluates federal programs and policies; and provides analyses, recommendations, and other assistance to help Congress make informed oversight, policy, and funding decisions. GAO's commitment to good government is reflected in its core values of accountability, integrity, and reliability. The fastest and easiest way to obtain copies of GAO documents at no cost is through GAO's Web site (www.gao.gov). Each weekday, GAO posts newly released reports, testimony, and correspondence on its Web site. To have GAO e-mail you a list of newly posted products every afternoon, go to www.gao.gov and select "Subscribe to Updates."
To support the Global War on Terrorism in fiscal year 2004, the Congress appropriated $65 billion to the Department of Defense (DOD) in an emergency supplemental appropriations act. To assist the Congress in its oversight role, GAO reviewed (1) the adequacy of current funding for fiscal year 2004 war-related activities and (2) actions DOD is undertaking to cover anticipated shortfalls, if any. Based on the body of work GAO has done on the cost of contingency operations, GAO is also making observations on efforts to require greater accountability to the Congress on the use of funds appropriated to DOD for contingency operations. GAO's analysis of reported obligations for the first seven months of fiscal year 2004 through April 2004 and the military services' forecasts as of June 2004 of their likely costs for the Global War on Terrorism for operation and maintenance and military personnel through the end of fiscal year 2004 suggests that anticipated costs will exceed the supplemental funding provided for the war by about $12.3 billion for the current fiscal year. DOD and the services are taking a variety of actions to cover anticipated shortfalls in their war-related funding. These actions include taking steps to reduce costs, transferring funds among appropriations accounts, and deferring some planned activities to use those funds to support the war. Also, DOD plans to ask the Congress for additional transfer authority, which would give it sufficient authority to move funds from one service to another and get funds to the operation and maintenance accounts that have the greatest shortfalls. The deferral of activities planned for fiscal year 2004 adds to the requirements that will need to be funded in fiscal year 2005 and potentially later years and could result in a "bow wave" effect in future fiscal years. 
GAO's past work has shown that current cost reporting includes large amounts of funds that have been reported as obligated in miscellaneous categories and thus provides little insight on how those funds have been spent. This is likely to result in reduced transparency and accountability to the Congress and the American people. Recent congressional actions have signaled the Congress' intent to require greater accountability regarding the use of GWOT funds. For example, in action on the President's $25 billion request for an Iraqi Freedom Fund Contingent Emergency Reserve in fiscal year 2005, the House Committee on Appropriations included provisions in its bill for cost reporting related to the use of these funds. But additional actions are necessary.
HUD implemented the MTW demonstration program in 1999. As of June 2013, 35 public housing agencies (PHAs) were participating in the program under agreements that extend through the end of their fiscal year 2018. To put in place the innovations intended under the program's authorizing legislation, agencies may request waivers of certain provisions in the United States Housing Act of 1937, as amended. For example, housing agencies may combine the funding they are awarded annually from different programs—such as public housing capital funds, public housing operating funds, and voucher funds—into a single, authoritywide funding source. In addition to addressing the program's three statutory purposes—reduce costs and achieve greater cost-effectiveness in federal housing expenditures, give families with children incentives to obtain employment and become self-sufficient, and increase housing choices for low-income families—MTW agencies must meet five requirements. The agencies must (1) serve substantially the same total number of eligible low-income families that they would have served had funding amounts not been combined; (2) maintain a mix of families (by family size) comparable to those they would have served without the demonstration; (3) ensure that at least 75 percent of households served are very low-income; (4) establish a reasonable rent policy to encourage employment and self-sufficiency; and (5) assure that the housing provided meets HUD's housing quality standards. A standard agreement (between HUD and each MTW agency) governs the conditions of participation in the program. The agreement includes an attachment that sets out reporting requirements, as well as the information that MTW agencies must include in annual reports. For example, these reports must include detailed information on the impact of each activity. 
MTW agencies also must self-certify that they are in compliance with three of the five statutory requirements: assisting substantially the same total number of eligible low-income families that they would have served had funding amounts not been combined; maintaining a mix of families (by family size) comparable to those they would have served had funding amounts not been combined under the demonstration; and ensuring that at least 75 percent of households served are very low-income. In our 2012 report, we identified a number of weaknesses related to MTW data, performance indicators, and identification of lessons learned—all of which resulted in a limited ability to determine program outcomes as they related to statutory purposes. Although MTW agencies reported annually on their activities, which included efforts to reduce administrative costs and encourage residents to work, the usefulness of this information was limited because it was not consistently outcome-oriented. For example, for similar activities designed to promote family self-sufficiency, one MTW agency reported only the number of participants, which is generally considered an output, and another did not provide any performance information. In contrast, a third agency reported on the average income of program graduates, which we consider an outcome. To be consistent with GPRAMA, HUD's guidance on reporting performance information should indicate the importance of outcome-oriented information. Without more specific guidance on the reporting of performance information—for example, guidance to report quantifiable and outcome-oriented information—HUD could not be assured of collecting information that reflected the outcomes of individual activities. As we reported in 2012, HUD had not identified the performance data needed to assess the results of similar MTW activities or of the program as a whole. 
Obtaining performance information from demonstration programs is critical because the purpose of a demonstration is to test which approach obtains positive results. Although HUD started collecting additional data from MTW agencies (including household size, income, and educational attainment) in its MTW database, it had not analyzed the data. Also, since 2009, HUD had required agencies to provide information on the impact of activities, including benchmarks and metrics, in their annual MTW reports. While these reports were informative, they did not lend themselves to quantitative analysis because the reporting requirements did not call for standardized data, such as the number of residents who found employment. Whether these data would be sufficient to assess similar activities and the program as a whole was not clear, and as of April 2012 HUD had not identified the data it would need for such an assessment. HUD also had not established performance indicators for MTW. According to GPRAMA, federal agencies should establish efficiency, output, and outcome indicators for each program activity as appropriate. Federal internal control standards also require the establishment of performance indicators. As we noted in 2012, specific performance indicators for the MTW program could be based on the three statutory purposes of the program. For example, agencies could report on the savings achieved (reducing costs). However, without performance indicators HUD could not demonstrate the results of the program. The lack of standard performance data and performance indicators had hindered comprehensive evaluation efforts, which are key to determining the success of any demonstration program. 
We recommended in 2012 that HUD (1) improve its guidance to MTW agencies on providing performance information in their annual reports by requiring that such information be quantifiable and outcome-oriented, (2) develop and implement a plan for quantitatively assessing the effectiveness of similar activities and for the program, and (3) establish performance indicators for the program. HUD partially agreed with these recommendations. Since our report, HUD has revised the performance reporting requirements for MTW agencies. The Office of Management and Budget (OMB) approved these revisions on May 31, 2013. The new requirements state that MTW agencies are to report standard metrics and report outcome information on the effects of MTW policy changes on residents. HUD also provided a standard format to allow analysis and aggregation across agencies for similar activities. We are currently assessing the extent to which these new requirements address our recommendations. Furthermore, as we indicated in our 2012 report, while HUD had identified some lessons learned on an ad hoc basis, it did not have a systematic process in place for identifying such lessons. As previously noted, obtaining impact information from demonstration programs is critical. Since 2000, HUD had identified some activities that could be replicated by other housing agencies. For example, a HUD-sponsored contractor developed five case studies to describe issues and challenges involved in implementing MTW. However, these and subsequent efforts had shortcomings. In most cases, the choice of lessons learned was based on the opinions of HUD or contracted staff and largely involved anecdotal (or qualitative) data rather than quantitative data. Because HUD had not developed criteria and a systematic process for identifying lessons learned, we reported in 2012 that it was limited in its ability to promote useful practices for broader implementation. 
Thus, we recommended that HUD create a process to systematically identify lessons learned. HUD agreed and, in response, stated that once its revised reporting requirements were implemented, the resulting data would inform an effort to establish lessons learned. Consistent with this, HUD noted that one purpose of the revised reporting requirements that OMB approved in May 2013 was to identify promising practices learned through the MTW demonstration. HUD had policies and procedures in place to monitor MTW agencies but could have done more to ensure that MTW agencies demonstrated compliance with statutory requirements and to identify possible risks relating to each agency's activities. For example, as noted in our 2012 report, HUD had not issued guidance to MTW agencies clarifying key program terms, including definitions of the purposes and statutory requirements of the MTW program. Federal internal control standards require the establishment of clear, consistent goals and objectives. Agencies also must link each of their activities to one of the three program purposes cited in the MTW authorizing legislation. However, at that time HUD had not clearly defined what some of the statutory language meant, such as "increasing housing choices for low-income families." HUD officials acknowledged that the guidance could be strengthened. At the time, they told us that they planned to update the guidance to more completely collect information related to the program's statutory purposes and requirements. As discussed later, HUD has since updated its guidance. Additionally, we reported in 2012 that HUD had only recently assessed agencies' compliance with two (self-certified) requirements—to serve substantially the same total number of eligible low-income families that they would have served had funding amounts not been combined and to ensure that at least 75 percent of households served were very low-income. 
Also, HUD had not assessed compliance with the third (also self-certified) requirement—to maintain a comparable mix of families. Federal internal control standards (GAO/AIMD-00-21.3.1) require control activities to be in place to address program risks. Accordingly, we recommended that HUD formulate an approach for assessing compliance with program requirements. Without a process for systematically assessing compliance with statutory requirements, HUD lacked assurance that agencies were complying with them. In addition, HUD requires its program offices to perform an annual risk assessment of their programs or administrative functions using a HUD risk-assessment worksheet, but the MTW office had not done so. By not performing annual risk assessments or tailoring its monitoring efforts to reflect the perceived risk of each MTW agency, HUD lacked assurance that it had properly identified and addressed risks that may prevent agencies from addressing program purposes and meeting statutory requirements. HUD also lacked assurance that it had been using its limited monitoring resources efficiently. Finally, we reported that HUD did not have policies or procedures in place to verify the accuracy of key information that agencies self-report, such as the number of program participants and the average income of residents “graduating” from MTW programs. Internal control standards and related guidance (GAO/AIMD-00-21.3.1 and GAO-01-1008G) emphasize the need for federal agencies to have control activities in place to help ensure that program participants report information accurately. However, HUD staff did not verify self-reported performance information during their reviews of annual reports or annual site visits. GAO guidance on data reliability recommends tracing a sample of data records to source documents to determine whether the data accurately and completely reflect the source documents. Because HUD did not verify self-reported performance information, it lacked assurance that this information was accurate. To the extent that HUD relied on this information to assess program compliance with statutory purposes and requirements, its analyses were limited. 
Accordingly, we recommended that HUD strengthen its monitoring, including by verifying key information that MTW agencies self-report. HUD partially agreed with our recommendations, citing potential difficulties in verifying MTW performance data. HUD also described steps it was taking to improve its guidance to MTW agencies and implement risk-based monitoring procedures. In May 2013, OMB approved revised reporting guidance to MTW agencies. The guidance requires agencies to report information related to the program’s statutory purposes and requirements. For example, it includes a template for data on compliance with the requirement to maintain a comparable mix of families. Additionally, according to a HUD official, the recently approved reporting requirements will result in more standardized data that HUD can verify either through audits or during site visits. As noted above, we are assessing this guidance. Legislation has been proposed to expand the number of PHAs that can participate in the MTW program, and a 2010 HUD report (HUD, Moving to Work (2010)) recommended expanding the program to up to twice its size. We reported in 2012 that HUD and some stakeholders believed that expansion could provide information on the effect of the MTW program and allow more PHAs to test innovative ideas, but questions remained about the lack of performance information on current MTW activities. Since our report was issued, four additional agencies were admitted into the program. HUD required these agencies to implement and study rent reform activities through partnerships with local universities and a research organization. The 2010 HUD report noted that improvements had occurred in some of the communities affected by the MTW program and indicated that expansion could enable more PHAs to address local needs and therefore benefit additional communities. Similarly, officials from MTW agencies that we contacted stated that expansion of the program would provide a broader testing ground for new approaches and best practices. 
Finally, information from a private research organization, affordable housing advocates, and MTW agencies suggested that allowing additional PHAs to participate in the program could result in additional opportunities to test innovative ideas and tailor housing programs and activities to local conditions. In 2004, the Urban Institute reported that the local flexibility and independence permitted under MTW appeared to allow strong, creative PHAs to experiment with innovative solutions to local challenges. We have reported separately on cost savings that could be realized from allowing additional housing authorities to implement some of the reforms MTW agencies have tested. Some proponents of expansion that we interviewed also noted that expanding the MTW program could provide more PHAs with the ability to use funding from different sources more flexibly than possible without MTW status. As we have seen, MTW agencies may request waivers of certain provisions of the 1937 Housing Act in order to combine annual funding from separate sources into a single authoritywide funding source. HUD field office staff with responsibility for monitoring MTW agencies observed that the single-fund flexibility was beneficial because it enabled participating agencies to develop supportive service programs, such as job training or educational programs, which help move families toward self-sufficiency. Further, officials from the MTW agencies we interviewed agreed that this flexibility was beneficial. For example, officials from one MTW agency stated that it had been able to use the single fund to organize itself as a business organization, develop a strategic plan based on the housing needs of low-income families in the community, leverage public funds and public and private partnerships, and develop mixed-income communities. 
However, a lack of performance information (which created a limited basis for judging what lessons could be taken from the program to date), limited HUD oversight, and concerns about the program’s impact on residents raised questions about expanding the MTW program. In its 2010 report to Congress, HUD acknowledged that the conclusive impacts of many MTW activities, particularly as they relate to residents, were not yet known. For example, the report noted that the rent reforms implemented under MTW varied greatly and were not implemented using a controlled experimental methodology. As a result, it was not clear which aspects of rent reforms should be recommended for all PHAs. The report also noted the limitations relating to evaluating the outcomes of MTW—limitations that stemmed from the weak initial reporting requirements and lack of a research design. The report concluded that, given these limitations, expansion should occur only if newly admitted PHAs structured their programs for high-quality evaluations that permitted lessons learned to be generalized for other PHAs. Similarly, representatives of affordable housing advocates and legal aid organizations that we interviewed stated that because lessons had not been learned from MTW, there was no basis for expanding the program (see also Abravanel and others, An Assessment of HUD’s Moving to Work Demonstration (2004)). Moreover, if additional agencies were added under the current program design, HUD might need additional resources. Researchers and representatives of several affordable housing advocates and legal aid agencies with whom we met also suggested that an expanded program could negatively affect residents. For example, two research organizations had stated that some voucher policies could reduce portability—that is, residents’ ability to use their rental vouchers outside the area that the voucher-issuing PHA served. 
One of these organizations stated that differences in the way voucher programs were implemented across MTW agencies could reduce residents’ ability to use vouchers outside of the area where they received assistance. Officials from the other organization noted that some MTW agencies prohibited vouchers from being used outside of the originating jurisdictions, thereby limiting housing choices. According to HUD officials, MTW agencies with policies that limit portability could make exceptions. For example, these agencies had made exceptions for residents seeking employment opportunities. Until more complete information on the program’s effectiveness and the extent to which agencies adhered to program requirements is available, it will be difficult for Congress to know whether an expanded MTW would benefit additional agencies and the residents they serve. Mr. Chairman, Ranking Member Capuano, and Members of the Subcommittee, this concludes my prepared statement. I would be happy to respond to any questions that you may have at this time. For further information about this testimony, please contact me at (202) 512-8678 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this testimony include Paige Smith, Assistant Director; Emily Chalmers; John McGrail; Lisa Moore; Daniel Newman; Lauren Nunnally; Barbara Roesmann; and Andrew Stavisky. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Implemented in 1999, HUD's MTW demonstration program gives participating PHAs the flexibility to create innovative housing strategies. MTW agencies must create activities linked to three statutory purposes—reducing costs, providing incentives for self-sufficiency, and increasing housing choices—and meet five statutory requirements. Congress has been considering expanding MTW. This testimony discusses (1) the program's progress in addressing the three purposes, (2) HUD's monitoring efforts, and (3) potential benefits of and concerns about expansion. This testimony draws from a prior report on the MTW program (GAO-12-490). For that report, GAO analyzed the most current annual reports for 30 MTW agencies; compared HUD's monitoring efforts with internal control standards; and interviewed agency officials, researchers, and industry officials. For this testimony, GAO also reviewed actions HUD has taken in response to the report's recommendations. Opportunities existed to improve how the Department of Housing and Urban Development (HUD) evaluated the Moving to Work (MTW) program, which is intended to give participating public housing agencies (PHA) flexibility to design and test innovative strategies for providing housing assistance. GAO reported in April 2012 that HUD had not (1) developed guidance specifying that performance information collected from MTW agencies be outcome-oriented, (2) identified the performance data needed to assess results, or (3) established performance indicators for the program. The shortage of such standard performance data and indicators had hindered comprehensive evaluation efforts; such evaluations are key to determining the success of any demonstration program. In addition, HUD had not developed a systematic process for identifying lessons learned from the program, which limited HUD's ability to promote useful practices for broader implementation. Since the GAO report, HUD has revised reporting requirements for MTW agencies. 
These requirements were approved by the Office of Management and Budget in May 2013. GAO is reviewing this new guidance. In 2012, GAO also reported that HUD had not taken key monitoring steps set out in internal control standards, such as issuing guidance that defines program terms or assessing compliance with all the program's statutory requirements. As a result, HUD lacked assurance that MTW agencies were complying with statutory requirements. Additionally, HUD had not done an annual assessment of program risks, although it had a requirement to do so, and had not developed risk-based monitoring procedures. Without taking these steps, HUD lacked assurance that it had identified all risks to the program. Finally, HUD did not have policies or procedures in place to verify the accuracy of key information that MTW agencies self-report. For example, HUD staff did not verify self-reported performance information during their reviews of annual reports or annual site visits. Without verifying at least a sample of information, HUD could not be sure that self-reported information was accurate. According to HUD, the recently approved reporting requirements will result in more standardized data that HUD can verify either through audits or during site visits. Finally, GAO noted in 2012 that expanding the MTW program might offer benefits but also raised questions. According to HUD, affordable housing advocates, and MTW agencies, expanding MTW to additional PHAs would allow agencies to develop more activities tailored to local conditions and produce more lessons learned. However, data limitations and monitoring weaknesses raised questions about expansion. HUD had reported in 2010 that expansion should occur only if newly admitted PHAs structured their programs to permit high-quality evaluations and ensure that lessons learned could be generalized. Since the GAO report was issued, four additional agencies were admitted into the program. 
HUD required these agencies to implement and study rent reform activities through partnerships with local universities and a research organization. Until more complete information on the program's effectiveness and the extent to which agencies adhered to program requirements is available, it will be difficult for Congress to know whether an expanded MTW would benefit additional agencies and the residents they serve. GAO recommended that HUD improve MTW information and monitoring. HUD partially agreed with these recommendations and has since issued new guidance to MTW agencies.
The Results Act is aimed at improving performance of government programs by requiring agencies to clarify their missions, establish goals and strategies for reaching them, measure performance, and report on their accomplishments. Beginning with fiscal year 1999, the head of each agency is to prepare and submit to Congress and the president a report on program performance. The first of these annual reports is to be submitted no later than March 31, 2000. These reports are to contain two main parts: a report on the actual performance achieved as compared with the performance goals expressed in the performance plan and the plans and schedules to achieve those goals that were not met. If a performance goal becomes impractical or infeasible to achieve, the agency is to explain why that is the case and what legislative, regulatory, or other actions are needed to accomplish the goal, or whether the goal ought to be modified or discontinued. Finally, the reports should also relate performance measurement information to program evaluation findings, in order to give a clear picture of an agency’s performance and its efforts at improvement. To address our objectives, we relied on our large body of work on agencies’ performance data problems and related issues. We drew examples from our reviews of agencies’ efforts to implement the Results Act, such as our reviews of agencies’ fiscal years 1999 and 2000 performance plans, and products we have issued on major management challenges and risks, such as our performance and accountability series and high-risk series. Because this report is based primarily on our previously issued reports, we did not obtain agency comments. Our work on this report was done from October 1999 to January 2000, in Washington, D.C., in accordance with generally accepted government auditing standards. 
Agencies need reliable information during their planning efforts to set realistic goals and later, as programs are being implemented, to gauge their progress toward achievement of those goals. In our assessments of annual performance plans, we identified challenges that will affect agencies’ abilities to reliably report on the achievement of program goals and, in cases where goals are not met, either identify opportunities for improvement or determine whether goals need to be adjusted. For example, we concluded in our review of fiscal year 1999 performance plans that future plans would be more useful if they would, among other things, (1) more fully articulate how strategies and resources will lead to improved performance and (2) provide much greater confidence that performance information will be credible and useful for decisionmaking. Although, on the whole, the fiscal year 2000 plans showed moderate improvements over the fiscal year 1999 plans, we identified these two issues as continuing key weaknesses common among agencies’ plans. Most of the fiscal year 2000 plans related strategies and programs to performance goals; however, few plans indicated how the strategies would contribute to accomplishing the expected level of performance. Agencies need to understand and articulate how what they do on a day-to-day basis contributes to mission-related results. Such an understanding is important for agencies to pinpoint opportunities to improve performance and design and implement appropriate initiatives. This information is also helpful to congressional and other decisionmakers in assessing the degree to which strategies are appropriate and reasonable. The outcomes of many federal programs are the result of the interplay of several factors, and only some of these are within a program’s control. Further, on a daily basis, agencies do not produce outcomes, but rather outputs, such as activities, products, or services that are intended to contribute to outcomes. 
Thus, a key analytic challenge for agencies is knowing how their programmatic efforts contribute to their desired outcomes. The inconsistent attention to this critical element undermined the value of agencies’ performance plans and, unless addressed, it also will severely limit the value of their performance reports. In 1997, we reported that the Department of Justice’s Immigration and Naturalization Service (INS) lacked data on the overall effectiveness of its southwest border strategy. For example, data were insufficient to indicate whether illegal aliens were deterred from entering the United States, whether there had been a decrease in attempted reentries by those who had previously been apprehended, and whether the strategy had reduced border violence. We noted that, despite the investment of billions of dollars in the strategy, INS had amassed only a partial picture of the effects of increased border control and did not know whether the investment was producing the intended results. We reported that a comprehensive, systematic evaluation of the agency’s strategy to deter illegal entry along the southwest border would provide INS with information on whether its border enforcement strategy had produced the intended results. After our report, INS contracted with independent research firms for an evaluation. Another weakness that we identified in our review of agencies’ fiscal year 1999 and fiscal year 2000 performance plans was the limited confidence they provided in the credibility of performance information. Credible performance information is essential for accurately assessing agencies’ progress towards the achievement of their goals—the cornerstone of performance reporting. As shown in figure 1, our analysis of agencies’ fiscal year 2000 performance plans noted that most of the plans provided only limited confidence that performance information would be credible. 
Only the plans for the Department of Education, the Department of Justice, the Department of Transportation, and the Social Security Administration provided general confidence that their performance information would be credible. Decisionmakers must have assurance that the program and financial data being used will be sufficiently timely, complete, accurate, useful, and consistent if these data are to inform decisionmaking. However, like the fiscal year 1999 performance plans, most of the fiscal year 2000 plans lacked information on the procedures the agencies would use to verify and validate performance information. Similar to our findings with the fiscal year 1999 plans, we also found that, in general, the fiscal year 2000 plans failed to include discussions of strategies to address known data limitations. We reported that when performance data are unavailable or of low quality, a performance plan would be more useful to decisionmakers if it briefly discussed how the agency plans to deal with such limitations. Without such a discussion, decisionmakers will have difficulty determining the implications for assessing the subsequent achievement of performance goals that agencies include in their performance reports. In order to successfully measure and report progress toward intended results, agencies need to build the capacity to gather and use performance information. However, our work over the past several years has identified limitations in agencies’ abilities to produce credible performance data. Specifically, those limitations relate to program design issues that may make it difficult to collect timely and consistent national data, the relatively limited level of agencies’ program evaluation capabilities, and long-standing weaknesses in agencies’ financial management capabilities. 
In several program areas, devolution of program responsibility from the federal level and consolidation of individual federal programs into more comprehensive, multipurpose grant programs have shifted both program management and accountability responsibilities toward the states. These programs vary greatly in the kind and degree of flexibility afforded to state or local entities, distribution of accountability across levels of government, and availability of direct measures of program performance. In our report on grant program design features, we noted that relatively few flexible programs collected uniform data on the outcomes of state or local service activities. Collecting such data requires conditions—such as uniformity of activities, objectives, and measures—that do not exist under many flexible program designs. For instance, we reported that the block grants enacted as part of the Omnibus Budget Reconciliation Act of 1981 carried no uniform federal information and reporting requirements. States collected a wide range of program information, but the collection efforts were designed to meet the needs of the individual states. Congress had limited information on program activities, services delivered, and clients served. As a result, it was difficult, in many cases, to aggregate state experiences and speak from a national perspective on the block grant activities or their effects. Similarly, without uniform information definitions and collection methodologies, it was difficult to compare state efforts or draw meaningful conclusions about the relative effectiveness of different strategies. Education is one of many agencies where the interest in having enough information for accountability and federal program management continually competes with the aim of providing local agencies with the flexibility needed to implement their programs on the basis of their local needs. 
The Safe and Drug-Free Schools program, for example, allows a wide range of activities, such as drug prevention instruction for students; staff training; general violence-prevention instruction; and special one-time events, such as guest speakers and drug- and alcohol-free social activities. States are also permitted to define the information they collect on program activities and effectiveness. Under the Safe and Drug-Free Schools and Communities Act, Education oversees state programs and state agencies monitor local programs. Under the act, each state may establish its own reporting requirements for local education agencies. Although these requirements have some common elements, state requirements vary widely. With no requirements that states use consistent measures, Education faces, as our work has shown, a difficult challenge in assembling the required state reports to develop a nationwide picture of the program’s effectiveness. Because states, localities, or nongovernmental organizations operate many of its programs, the Department of Health and Human Services (HHS) experiences similar challenges. The Personal Responsibility and Work Opportunity Reconciliation Act dramatically altered the nation’s system for providing assistance to the poor. Among the many changes, the act replaced the existing entitlement program for poor families (Aid to Families With Dependent Children) with fixed block grants to the states to provide Temporary Assistance for Needy Families (TANF). Under the TANF block grant, states have flexibility in designing and implementing their own assistance programs within federal guidelines. Meanwhile, HHS has a broad range of responsibilities for ensuring accountability from the states. The welfare reform law gives HHS administrative and oversight responsibilities, the performance of which will rely on state-provided data. 
HHS needs to ensure that it receives comparable and reliable data from the states to help it fulfill its oversight responsibilities, which include ensuring that states enforce the federal 5-year time limit on receiving welfare benefits, meet minimum work participation rates, and maintain a certain level of state welfare spending. Enforcing the time limit, for example, will be difficult because information on the total amount of time that someone has received TANF is not always available in individual states, let alone across states. In addition, the law gives HHS authority to assess penalties if states fail to comply with certain requirements and provides for states to receive bonuses if they meet certain performance standards. HHS needs to collect state data to determine performance penalties and bonuses. In view of the increased flexibility of states in designing their programs, obtaining comparable and reliable data to assess the effect of welfare reform on children and families could be difficult for HHS. The Environmental Protection Agency (EPA) provides another example of a federal agency that depends on the state and local agencies it is working with to provide the performance information that indicates whether results are being achieved. For example, the state water quality reports required by the Clean Water Act are a key source of information for measuring progress in cleaning up the nation’s lakes, rivers, and streams. However, EPA has found that the wealth of environmental data EPA and states collect are often difficult to compile in a meaningful way. As contained in the Clean Water Act, Congress left the primary monitoring responsibility to the states for measuring progress in cleaning up the nation’s lakes, rivers, and streams. 
However, inconsistencies in water quality assessments and in assessment methodologies from state to state make it difficult to aggregate the data and to use the information to conclusively determine whether the quality of rivers, lakes, and streams is getting better or worse over time. Absent this information, it has been difficult for EPA to set priorities, evaluate the success of its programs and activities, and report on its accomplishments in a credible and informed way. The unavailability of reliable performance information can also be traced to a lack of standards and of common definitions for terms used to evaluate programs. For example, we reported that the agencies involved in wetlands-related activities inconsistently used terms such as protection, restoration, rehabilitation, improvement, and enhancement in describing and reporting on their accomplishments. Even when the same terms are used, the agencies do not define them in the same way. As a result, the consistency and reliability of data on the status of wetlands acreage are questionable. Thus, neither the progress made toward achieving the governmentwide goal of no net loss of the nation’s remaining wetlands nor the contributions made by the agencies in achieving this goal can be accurately measured. Weaknesses in the availability of direct measures of performance can be overcome by drawing on information from other sources, such as program evaluation studies, research on the effectiveness of service delivery, or aggregate data such as vital statistics that describe the general status of a population. In this regard, our report on grant program design features noted that 13 of the 21 flexible grant programs reviewed used such sources along with, or as a substitute for, performance measures collected from program operations. We found that agencies that made use of multiple sources had information that covered more aspects of program performance than those that relied upon a single source. 
Program evaluation studies are important for assessing how well programs are working, determining factors affecting performance, and identifying improvement opportunities. In our report on the analytic challenges facing agencies in measuring performance, we stated that supplementing performance data with impact evaluations might help provide agencies with a more complete picture of program effectiveness. Evaluations can play a critical role in helping to address those measurement and analysis difficulties agencies face that stem from two features common to many federal programs: the interplay of federal, state, and local government activities and objectives, and the aim to influence complex systems or phenomena whose outcomes are largely outside government control. Furthermore, systematic evaluation of how a program was implemented can provide important information about why a program did or did not succeed as well as suggest ways to improve it. However, as we reported in our assessment of agencies’ fiscal year 1999 performance plans, we continue to be concerned about the lack in many federal agencies of the capacity to undertake the program evaluations that will be vital to the success of the Results Act. In our earlier review of agencies’ strategic plans, we found that many agencies had not given sufficient attention to how program evaluations will be used in implementing the Results Act and improving performance. In another report, we noted that agencies’ program evaluation capabilities would be challenged to meet the new demands for information on program results. We found that the resources allocated for conducting program evaluations were small and unevenly distributed across the 13 departments and 10 independent agencies we surveyed for that report. Good evaluation information about program effects is difficult to obtain. 
Each of the tasks involved—measuring outcomes, ensuring the consistency and quality of data collected, establishing the causal connection between outcomes and program activities, and separating out the influence of extraneous factors—raises formidable technical or logistical problems that are not easily resolved. Thus, evaluating program impact generally requires a planned study and, often, considerable time and expense. The experiences of the Head Start program illustrate the importance—and difficulty—of systematic program evaluation. Head Start, administered by HHS’ Administration for Children and Families, is one of the most popular federal early childhood programs and has long enjoyed both congressional and public support. Between fiscal years 1990 and 1998, annual Head Start funding nearly tripled, from $1.5 billion to almost $4.4 billion. Head Start’s purpose is to improve the social competence of children in low- income families, and in the past 33 years, the program has provided a comprehensive set of services to about 16 million low-income children. Educational, medical, nutritional, mental health, dental, social, and other services have been provided to low-income children and their families in all 50 states, the District of Columbia, Puerto Rico, and the U.S. territories, as well as to migrant and Native American populations. Given the size of the Head Start program and the efforts to expand the program’s annual enrollment to one million children by 2002, investing in studies that will assess its impact is important. Specifically, the challenge for HHS is to determine whether the same outcomes would have occurred if children and families were in other kinds of early childhood programs, or none at all. HHS has substantially strengthened its emphasis on determining whether Head Start has achieved its purpose. 
In part in response to the direction of Congress, HHS has new initiatives that will, in the next few years, provide information not previously available on outcomes, such as gains made by children and their families while in the program. In addition, the program is currently designing an impact study to assess whether children and their families would have achieved these gains without participating in Head Start. Congress has required that HHS submit a final report on the impact of the Head Start program by September 30, 2003. The long-standing inability of many federal agencies to accurately record and report financial management data on both a year-end and an ongoing basis for decisionmaking and oversight purposes continues to be a serious weakness. Without reliable data on costs, decisionmakers cannot effectively evaluate programs’ financial performance or control and reduce costs. Under the Chief Financial Officers (CFO) Act, agencies are expected to develop and deploy modern financial management systems and to routinely produce sound cost and operating performance information, among other things. Further, the Federal Financial Management Improvement Act (FFMIA) focuses on ensuring greater attention to making much needed improvements in financial management systems. The primary purpose of FFMIA is to ensure that agency financial management systems routinely provide reliable, useful, and timely financial information. With such information, government leaders will be better positioned to invest scarce resources, reduce costs, oversee programs, and hold agency managers accountable for the way they run government programs. Table 1 shows the financial statement audit results for fiscal year 1998 for the 24 CFO Act agencies. In addition, financial management systems for 21 of the 24 agencies were found by auditors not to comply substantially with FFMIA’s requirements for fiscal year 1998. 
The three agencies in compliance were the Department of Energy, National Aeronautics and Space Administration, and the National Science Foundation. For some agencies, the preparation of financial statements requires considerable reliance on ad hoc programming and analysis of data produced by inadequate financial management systems that are not integrated or reconciled, and that often require significant audit adjustments. The key for agencies is to take steps to continuously improve internal controls and underlying financial and management information systems. These systems must generate timely, accurate, and useful information on an ongoing basis. The overhauling of financial and related management information systems is the overarching challenge for agencies in generating timely, reliable data throughout the year. The following examples illustrate serious financial management weaknesses and systems problems. While the Department of Defense (DOD) is responsible for vast operations—with an estimated $1 trillion in assets, nearly $1 trillion in liabilities, and a net cost of operations of $280 billion in fiscal year 1998— no major part of the department has been able to pass the test of an independent audit because of pervasive financial management weaknesses. Such weaknesses led us in 1995 to put DOD financial management on our list of high-risk areas vulnerable to waste, fraud, and mismanagement—a designation that continued unchanged in our more recent high-risk update. These financial management weaknesses limit the reliability and timeliness of DOD’s currently available financial information. DOD management and/or auditors have repeatedly found DOD systems to be inadequate for measuring the cost of operations and programs. For example: DOD has acknowledged that the lack of a cost accounting system is the single largest impediment to controlling and managing weapon systems costs, including costs of acquiring, managing, and disposing of weapon systems. 
DOD is unable to provide actual data on the cost associated with functions to be considered for A-76 outsourcing competitions, including the capital costs associated with operations. DOD has long-standing problems accumulating and reporting the full costs associated with working capital fund operations, which provide goods and services in support of the military services. As a result of our assessment of DOD's fiscal year 2000 performance plan, we noted that the lack of adequate cost information impairs the development of cost-based performance measures and indicators across virtually the entire spectrum of DOD's program operations. While DOD developed 43 unclassified performance measures and indicators to measure a wide variety of activities—from force levels to asset visibility—these measures and indicators contained few efficiency measures based on cost. In our most recent testimony on DOD financial management, we reported that DOD has started to devote additional resources to correcting its financial management weaknesses. DOD's Financial Management Improvement Plans represent an ambitious undertaking and are an important step toward long-term improvements in the department's accountability. However, eliminating DOD's financial management weaknesses represents a major challenge because they are pervasive and entrenched in an extremely large, decentralized organization. As another example, in January 1999, we designated the Federal Aviation Administration's (FAA) financial management a high-risk area because of serious and long-standing accounting and financial reporting weaknesses. These weaknesses render FAA vulnerable to waste, fraud, and abuse; undermine its ability to manage its operations; and limit the reliability of financial information it provides to Congress.
Beginning with fiscal year 1994, the Department of Transportation's Office of the Inspector General has audited FAA's financial statements and has consistently been unable to determine whether the financial information is reliable. This pattern of negative financial audit results has continued with its most recent report—a disclaimer of opinion—on FAA's fiscal year 1998 financial statements, citing as a primary reason the inability to verify property, plant, and equipment (PP&E) reported at a cost of $11.9 billion. We previously reported that many problems in the PP&E accounts affect FAA's ability to efficiently and effectively manage programs that use these assets. We also reported that many problems in these accounts result from the lack of a reliable system for accumulating project cost accounting information. The inadequacy of FAA's cost accounting system is a weakness that prevents the agency from having reliable and timely information about the full cost of program activities. The lack of cost accounting information also limits FAA's ability to, among other things, meaningfully evaluate performance in terms of efficiency and cost-effectiveness. FAA senior managers have indicated that they recognize the urgency of correcting the agency's financial management deficiencies and have taken steps to address them, including efforts to continue developing a cost accounting system, which FAA expects will be fully operational in 2001. However, as we reported in July 1999, while FAA has taken steps that are likely to lead to or already have resulted in improved accountability for FAA assets, much still remains to be done. Agencies' March 2000 performance reports will provide them with an opportunity to show the progress they have made in addressing data credibility issues.
As far back as our earliest assessment of agencies' efforts to implement the Results Act, and more recently in our reviews of agencies' strategic and performance plans, we identified data credibility issues as a persistent and continuing challenge for agencies. In passing the Results Act, however, Congress emphasized that the usefulness of agencies' performance information depends, to a large degree, on the reliability and validity of their data. During this past year, we issued several reports on practices and approaches that agencies have proposed or adopted to address data credibility issues. For example, we reported that practices such as identifying actions to compensate for unavailable or low-quality data and discussing the implications of data limitations for assessing performance can help agencies describe their capacity to gather and use performance information. To illustrate, the Department of Transportation stated in its fiscal year 1999 performance plan that one of the most significant limitations of both internal and external data is timeliness. One way the department plans to deal with this limitation is to compile preliminary estimates from the portion of data that is available in time to report on the performance measures. According to the plan, fatality data from the first 6 months of the year could be compared with data from the first 6 months of the previous year for an initial performance measurement. In our report on reasonable approaches to verify and validate performance information, we identified a wide range of possible approaches that can be organized into four general strategies: (1) management can seek to improve the quality of performance data by fostering an organizational commitment and capacity for data quality; (2) verification and validation can include assessing the quality of existing performance data; (3) agencies can respond to identified data limitations, since assessments of data quality are of little value unless they do; and (4) agencies can build quality into the development of performance data, which may help prevent future errors and minimize the need to continually fix existing data. These approaches can help agencies improve the quality, usefulness, and credibility of performance information. However, as noted earlier, making stakeholders aware of significant data limitations allows them to judge the data's credibility for their intended use and to use the data in appropriate ways. All data have limitations that may hinder their use for certain purposes but still allow them to be used for others. Stakeholders may not have enough familiarity with the data to recognize the significance of their shortcomings. Therefore, appropriate use of performance data may be fostered by clearly communicating how and to what extent data limitations affect assessments of performance. For example, we noted that when the Department of the Treasury's Customs Service field staff realized management was using performance data to make decisions, the staff began providing explanations for any incorrect data. Customs also said it required each office to establish a data quality function, responsible for verification and validation, which would be inspected annually. A federal environment that focuses on results relies on new types of information that are different from those that have traditionally been collected by federal agencies. Obtaining more credible results-oriented performance information is essential for agencies to plan their efforts and gauge progress toward the achievement of their goals. However, as we previously reported, agencies have encountered some difficult analytic and technical challenges in obtaining timely and reliable results-oriented performance information and in ensuring that program evaluations that allow for the informed use of that information are undertaken.
The Results Act requires agencies to describe in their annual performance plans how they will verify and validate the performance information that will be collected. Including such information in performance reports can be equally important in helping to assure report users of the quality of the performance data. Discussing data credibility and related issues in performance reports can provide important contextual information to Congress and agencies to help them address the weaknesses in this area. For example, this sort of discussion in an agency's performance report can alert Congress to the problems the agency has had in collecting needed results-oriented performance information. Agencies can also alert Congress to the cost and data quality trade-offs associated with various collection strategies, such as relying on sources outside the agency to provide performance data, and the degree to which those data are expected to be reliable. Finally, in order to give a clear picture of the agency's performance and its efforts at improvement, annual reports on performance can also relate performance measurement information to program evaluation findings. We are sending copies of this report to Senator Joseph I. Lieberman, Ranking Minority Member, Senate Governmental Affairs Committee; Representative Richard A. Gephardt, Minority Leader, House of Representatives; Representative Henry A. Waxman, Ranking Minority Member, House Government Reform Committee; Representative John M. Spratt, Jr., Ranking Minority Member, House Budget Committee; and the Honorable Jacob J. Lew, Director, Office of Management and Budget. Copies will be made available to others on request. Please contact me at (202) 512-8676 if you have any questions. Dottie Self was the key contributor to this report.

J. Christopher Mihm
Associate Director, Federal Management and Workforce Issues
Pursuant to a congressional request, GAO identified some of the challenges agencies face in producing credible performance information and how those challenges may affect performance reporting, focusing on: (1) whether the weaknesses identified in agencies' performance plans imply challenges for the performance reports; (2) some of the challenges agencies face in producing credible performance data; and (3) how performance reports can be used to address data credibility issues. GAO noted that: (1) it appears unlikely that agencies consistently will have for their first performance reports the reliable performance information needed to assess whether performance goals are being met or specifically how performance can be improved; (2) over the past several years GAO has identified limitations in agencies' abilities to produce credible data and identify performance improvement opportunities; (3) these limitations are substantial, long-standing, and will not be quickly or easily resolved; (4) they are likely to be reflected in agencies' initial performance reports as they have been in the performance plans to date; (5) in administering programs that are a joint responsibility with state and local governments, Congress and the executive branch continually balance the competing objectives of collecting uniform program information to assess performance with giving states and localities the flexibility needed to effectively implement intergovernmental programs; (6) the relatively limited level of agencies' program evaluation capabilities suggests that many agencies are not well positioned to undertake necessary evaluations; (7) program evaluations are important to providing information on the extent to which an agency's efforts contributed to results and to highlight opportunities to improve those results; (8) long-standing weaknesses in agencies' financial management capabilities make it difficult for decisionmakers to effectively assess and improve many programs' 
financial performance; (9) in order to help agency managers select appropriate techniques for assessing, documenting, and improving the quality of their performance data, some agencies proposed or adopted reasonable approaches to verify and validate performance information; (10) these approaches include senior management actions, agencywide efforts, and specific program manager and technical staff activities, which could be used, where appropriate, to improve the quality, usefulness, and credibility of performance information; (11) performance reports provide agencies with an opportunity to show the progress made in addressing data credibility issues; (12) the Government Performance and Results Act requires agencies to describe in their annual performance plans how they will verify and validate the performance information that will be collected; and (13) including information in performance reports describing the quality of the reported performance data and the implications of missing data can be equally important and can provide key contextual information to Congress and other users of the performance reports.
Drinking water infrastructure includes treatment and storage facilities and distribution systems (pipes and conduits), while wastewater infrastructure includes sewage collection systems and treatment works. According to EPA officials, there are some 2 million miles of pipe in drinking water systems alone. While estimates vary, the amount needed for future capital investments in water and wastewater infrastructure appears large. According to EPA’s most recent survey of drinking water systems, conducted in 1999, the needs are $150.9 billion over 20 years. A 1996 report on a similar EPA survey of wastewater systems identified needs of $128 billion over 20 years, and a subsequent analysis by EPA estimated an additional $56 billion to $87 billion to correct existing sanitary sewer overflow problems. The Water Infrastructure Network—a consortium of industry, municipal, and nonprofit associations—recently estimated needs of up to $1 trillion over the next 20 years for drinking water and wastewater systems combined, when both the capital investment needs and the cost of financing are considered. The actual future needs will likely be met by some combination of local, state, and federal funding sources. This report is intended to provide information on the amount of federal financial assistance available for drinking water and wastewater facilities in local communities. We use the term “made available” to encompass several forms of federal funding. Because of differences in the programs and in the ways that federal agencies account for their financial assistance, the information that best reflected the amounts made available for drinking water and wastewater facilities came from appropriations, obligations, or expenditures, depending on the agency and the specific program in question. 
For example, the data for EPA include appropriated amounts for the revolving loan fund capitalization grants to the states for each year; the states may not have loaned the funds (i.e., actually made them available) to local water systems until after the end of the fiscal year in which they were appropriated. In contrast, the data for HUD and Commerce consist of obligated amounts—that is, the amounts of funds allocated by the agencies to drinking water and wastewater infrastructure projects during the fiscal year. For these agencies, we report obligations rather than appropriated amounts because their appropriations are for broader purposes than drinking water and wastewater. In other cases, the data consist of expenditures—that is, funds actually expended during the fiscal year regardless of when they were appropriated or obligated. For the loan programs of the Small Business Administration and USDA’s Rural Utilities Service, the amounts reported are the face value of the loans or loan guarantees that were available to be made for the fiscal year. Because most of these loans are repaid, the ultimate cost to the federal government is significantly less than the face value. Further details on the costs of the loan programs—and the nature and amount of other financial assistance provided by each agency—are included in the following sections of this report. The federal government has been providing financial assistance for wastewater treatment facilities since the enactment of the Federal Water Pollution Control Act Amendments of 1956, which provided grants to local governments for constructing treatment facilities, but limited the federal contribution to 30 percent of eligible construction costs. The Federal Water Pollution Control Act Amendments of 1972, later designated the Clean Water Act, increased the federal share of costs to 75 percent. 
According to the Congressional Budget Office, federal outlays for wastewater treatment grants rose 10-fold during the 1970s, reaching a high of $8.4 billion in 1980. Subsequent amendments in 1981 and 1987, respectively, reduced and then phased out the construction grant program, replacing it with grants to the states to capitalize state revolving funds. As a condition of receiving the federal funds, states are required to contribute to the revolving funds as well. Under the Clean Water State Revolving Fund program, states provide loans to communities to finance wastewater treatment works projects as well as other water quality projects. The 1987 law envisioned that loan repayments, by financing future loans, would allow the state revolving funds to operate without sustained federal support, and authorized appropriations only through 1994. However, the Congress has continued to appropriate funds each year since. For each of the most recent fiscal years—1999, 2000, and 2001—the Congress has appropriated about $1.3 billion for the revolving loan fund program. The first major federal legislation on drinking water, enacted in 1974, was the Safe Drinking Water Act, which required EPA to set standards or treatment techniques for contaminants that could adversely affect human health. The Congress amended the act in 1986 to establish deadlines intended to accelerate EPA's efforts to set standards for more contaminants and added a number of other significant requirements to be met by EPA and public water systems. In part to help local water systems with the costs of meeting federal standards, the Safe Drinking Water Act Amendments of 1996 established a Drinking Water State Revolving Fund, similar to that under the Clean Water Act, which also requires state contributions. The law authorized appropriations of $9.6 billion through 2003; actual appropriations through fiscal year 2001 have totaled $4.4 billion.
From fiscal years 1991 through 2000, nine federal agencies made about $44 billion available for drinking water and wastewater capital improvements. Four agencies—EPA, USDA, HUD, and Commerce—account for about 98 percent of the total. The remaining federal assistance, which totaled about $1.1 billion over the 10 years, was provided by the Appalachian Regional Commission, the Federal Emergency Management Agency, the Department of the Interior's Bureau of Reclamation, the Small Business Administration, and the U.S. Army Corps of Engineers. Of the nine federal agencies that provided financial assistance for drinking water and wastewater infrastructure from fiscal years 1991 through 2000, four of them—EPA, USDA's Rural Utilities Service, HUD, and Commerce's Economic Development Administration—accounted for about 98 percent of the total. EPA and USDA alone accounted for over 85 percent of the assistance. Over 82 percent of the total assistance was provided in the form of grants; the remainder consisted of loans and loan guarantees. Although the programs differed in terms of eligibility criteria, allowable uses, and funding priorities, for the most part, the financial assistance was available to a broad range of entities. EPA's financial assistance came primarily in the form of grants to the states to capitalize the Drinking Water and Clean Water State Revolving Funds, which are used to finance improvements at local drinking water and wastewater treatment facilities, respectively. The states lend money from the funds to local communities or facilities for improvements needed to comply with the Safe Drinking Water Act or Clean Water Act and to protect public health or water quality. According to EPA, money invested in the funds provides about four times the purchasing power over 20 years compared with distributing the money as grants, because revolving fund money is recycled: when loans are repaid, the funds become available for new loans.
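The recycling effect EPA describes can be illustrated with a minimal simulation. This is a sketch under assumed terms (equal-principal repayment, immediate re-lending, no interest earnings or bond leveraging), not EPA's actual model:

```python
# Illustrative sketch (not EPA's model): how recycling loan repayments
# multiplies the purchasing power of a one-time capitalization, compared
# with paying the same money out once as grants. The loan term and the
# 20-year horizon are assumptions chosen for illustration.

def cumulative_lending(capital, term_years, horizon_years):
    """Lend `capital` in year 0; each loan repays equal principal
    installments over `term_years`, and every repayment is immediately
    re-lent as a new loan. Returns total dollars lent over the horizon."""
    loans = [capital]            # principal of loans issued each year
    total_lent = capital
    for year in range(1, horizon_years + 1):
        # Repayments received this year: 1/term of every loan still repaying.
        repaid = sum(p / term_years for p in loans[-term_years:])
        loans.append(repaid)     # recycle repayments into new loans
        total_lent += repaid
    return total_lent

grant_value = 100.0              # distributing as grants buys $100 once
loan_value = cumulative_lending(100.0, term_years=20, horizon_years=20)
print(f"Grants: ${grant_value:.0f}; revolving loans: ${loan_value:.0f} "
      f"({loan_value / grant_value:.1f}x)")
```

Under these assumptions the multiple works out to roughly 2.7 times over 20 years; EPA's four-times figure presumably also reflects interest earnings and bond leveraging, which this principal-only sketch omits.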
(See fig. 2 for a breakdown of EPA’s financial assistance for infrastructure projects during the period that we reviewed.) To obtain the capitalization grants, states are required to match 20 percent of the federal grant money. Under the Drinking Water State Revolving Fund, EPA regularly updates the allotments to the states based on their proportional share of the needs identified in EPA’s periodic surveys. Allotments from the Clean Water State Revolving Fund derive from the percentages specified in the 1987 amendments to the Clean Water Act. The percentages were based on several factors, including the states’ respective wastewater treatment needs, as reflected in EPA’s 1976 and 1980 needs surveys. Drinking water projects designated as priorities for financial assistance are those that (1) address the most serious risk to human health, (2) are necessary to ensure compliance with Safe Drinking Water Act requirements, and (3) will assist systems most in need on a per household basis, according to state “affordability” criteria. Under the Clean Water Act, states have greater flexibility. While the states must similarly establish a priority ranking system for their projects, they are not bound to fund these projects in priority order. Decisions on which projects receive what assistance are based on considerations of the severity of local water pollution problems and other factors. USDA’s Rural Utilities Service provides direct loans, loan guarantees, and grants to construct or improve drinking water, sanitary sewer, solid waste, and storm drainage facilities in rural communities, as part of the Rural Community Advancement Program. Among other things, the program is designed to provide basic human amenities, alleviate health hazards, and promote the orderly growth of rural areas by financing new and improved rural drinking water and waste disposal facilities. From fiscal year 1991 through fiscal year 2000, USDA provided about $12.5 billion under this program. (See fig. 
3 for a breakdown of USDA’s financial assistance during the period that we reviewed.) In general, to be eligible for USDA assistance, a facility must serve a rural area with a population of 10,000 or less and must be unable to finance its needs from its own resources or obtain other credit at reasonable rates and terms. Priority for USDA assistance is given to public entities in areas with populations of less than 5,500 and requests that involve the consolidation of small facilities or utilities serving low-income communities. Grants and loans (or loan guarantees) are made directly to local entities, including municipalities, counties, or other political subdivisions of states, nonprofit corporations, such as cooperatives, or Indian tribes. To obtain grant assistance, the median household income of the applicant’s service area must fall below the statewide nonmetropolitan median household income. Grants are limited to amounts required by applicants to establish reasonable user rates and cannot exceed 75 percent of eligible project costs. Thus, local grantees must provide matching funds. For loans and loan guarantees, the allowable loan term is limited to 40 years, the useful life of the facilities, or the maximum allowable term under state law, whichever is shorter. HUD’s Community Development Block Grant program is intended to aid in the development of viable urban communities. Generally, the program focuses on community development, and drinking water and wastewater facility investment is often an integral part of that effort. From fiscal year 1991 through fiscal year 2000, HUD provided over $4 billion in block grants that were used for drinking water and wastewater projects. In addition to the block grants, HUD provided $39.9 million in assistance for water and wastewater projects specifically designated in the appropriations process. (See fig. 4 for a breakdown of the assistance available from HUD during the period that we reviewed.) 
HUD’s community development block grant program funds are distributed directly to larger communities, called entitlement communities, as well as to states for distribution to smaller communities. To determine the amount of its grants to communities and states, HUD uses formulas that combine several objective measures of community needs, such as population, the extent of poverty, the age of the housing stock, and the extent of overcrowding. To be eligible for funding under the block grant program, all activities or projects must meet at least one of the program’s designated national objectives: benefiting low- and moderate-income persons, preventing or eliminating slums or blight, and addressing particularly urgent community development needs caused by conditions that pose a serious and immediate threat to the community’s health or welfare. Under its Public Works Program, Commerce’s Economic Development Administration provides grants to communities in economic decline to revitalize, expand, and upgrade their physical infrastructure—including water and sewer facilities. Proposed projects must be located within an economically distressed area, as defined by the Administration, and must contribute to the long-term development of the area by creating or retaining jobs and raising income levels. From fiscal year 1991 through fiscal year 2000, Commerce provided $1.1 billion in grants to local communities for drinking water and wastewater projects. (See fig. 5 for a breakdown of the financial assistance that the Economic Development Administration provided during the period we reviewed.) Depending on the recipient, matching funds may be required for Public Works Program grants. A basic grant covers up to 50 percent of a project’s total cost. However, projects in severely depressed areas may receive grants of up to 80 percent of the costs, and projects for recognized Indian tribes may be granted up to 100 percent of the costs. 
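The cost-share ceilings described above lend themselves to a small worked example. The function name and area labels below are our own shorthand for illustration, not Commerce's terminology, and the categorization of an area is assumed as an input:

```python
# Hypothetical helper (names are ours, not Commerce's) applying the
# cost-share ceilings stated above for Public Works Program grants.

def max_federal_grant(total_cost, area):
    """Maximum grant under the stated ceilings: 50 percent basic, up to
    80 percent in severely depressed areas, and up to 100 percent for
    recognized Indian tribes."""
    ceilings = {"basic": 0.50, "severely_depressed": 0.80, "tribal": 1.00}
    return total_cost * ceilings[area]

# A $2 million water project in a severely depressed area:
print(max_federal_grant(2_000_000, "severely_depressed"))  # 1600000.0
```

For instance, a $2 million project in a severely depressed area could receive at most $1.6 million in federal funds, with the remainder supplied locally.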
Priority is given to the projects that assist the nation's most economically distressed areas, such as areas with persistently high rates of poverty, previously unserved distressed areas and applicants, and areas undergoing significant economic downturns and dislocations. Programs or activities within the Appalachian Regional Commission, the Federal Emergency Management Agency, the Department of the Interior's Bureau of Reclamation, the Small Business Administration, and the U.S. Army Corps of Engineers provided about $1.1 billion in financial assistance for drinking water and wastewater infrastructure over the period we reviewed. With the exception of the Federal Emergency Management Agency, these agencies targeted their assistance to specific projects, geographic areas, or some combination of the two. The specifics of each program follow: The Appalachian Regional Commission provided $271.5 million in grants for projects under state Appalachian development plans. The Commission makes grants to states or private nonprofit agencies within the Appalachian Region. In general, grants are limited to 50 percent of project costs, but can be increased to 80 percent in designated distressed counties or limited to 30 percent in designated competitive counties. The Federal Emergency Management Agency provided $45.5 million for drinking water and wastewater infrastructure under its Hazard Mitigation Grants program to implement measures that permanently reduce or eliminate future damages and losses from natural hazards through safer building practices and improving existing structures. Eligible recipients include state agencies, local governments, other public entities, and private nonprofit organizations. Interior's Bureau of Reclamation provided a total of $737.4 million for water infrastructure projects in 17 Western states. Among the recipients of these grants were water supply agencies and wastewater collection and treatment agencies.
Under its Water Reclamation and Reuse Program, the Bureau provides grants to investigate and identify opportunities for reclamation and reuse of municipal, industrial, domestic, and agricultural wastewater and for the design and construction of demonstration and permanent facilities to reclaim and reuse wastewater. Financial assistance for construction projects may not exceed 25 percent of project costs up to a maximum of $20 million per project. Grants are also available to conduct research on related topics, such as desalination of wastewater, naturally impaired groundwater, and surface water reclamation. Funding for studies, demonstration projects, and research is limited to 50 percent of the total cost. The Small Business Administration guaranteed $27.6 million in loans under its Small Business Loans Program. In this program, guaranteed loans are made to small businesses that cannot obtain financing in the private marketplace, but can demonstrate the ability to repay the guaranteed loans. Loans can be used to construct, expand, or convert facilities, including water and wastewater infrastructure. The U.S. Army Corps of Engineers provided $23.7 million for drinking water and wastewater infrastructure during the 10-year period. The Corps does not have a program that provides financial assistance for water infrastructure, but its annual appropriations identified and funded certain project-specific assistance. In general, the Corps requires a 25-percent matching contribution from recipients. Figure 6 shows the financial assistance that these five federal agencies provided over the period that we reviewed. The 46 states that responded to our survey cumulatively made about $25 billion in state funds available to local communities and utilities for drinking water and wastewater improvements during the period that we reviewed. 
Specifically, the states reported that they: contributed about $10.1 billion to match EPA’s capitalization grants for the Drinking Water and Clean Water State Revolving Funds. This amount consisted of about $3.3 billion from state appropriations or other state sources, and about $6.8 billion that the states leveraged— that is, raised through the sale of state-issued bonds backed by the funds. made about $9.1 billion in grants and loan commitments under state- sponsored programs. made another $4.4 billion available for loans by selling general obligation and revenue bonds. In addition, the states reported that they contributed about $1.4 billion from state appropriations, interest earnings, and other state sources for purposes, such as matching non-EPA federal funds and financing state- designated specific drinking water or wastewater projects. Figure 7 depicts the financing made available by the states during the period. As noted previously, states are required to match 20 percent of the revolving loan fund capitalization grants they receive from EPA. States, at their option, may contribute more than the required minimum. According to the responses to our survey, the 46 states collectively provided about $10.1 billion in matching funds for the two revolving loan fund programs. Forty-two of the states reported amounts that exceeded the 20 percent minimum matching amount for one or both of the programs. The states reported that about $3.3 billion of their contributions came from state appropriations or other sources. In addition, 19 states reported that they had generated about $6.8 billion to be used for drinking water and wastewater infrastructure, in bond revenues backed by the state revolving loan funds. In total, states’ contributions to the Drinking Water and Clean Water State Revolving Funds accounted for about $10.1 billion of the total $25 billion that the states reported making available. 
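The 20 percent minimum match described above is simple arithmetic; the sketch below illustrates it. Only the 20 percent rate comes from the report, and the dollar amounts in the example are hypothetical, not figures drawn from the survey data.

```python
# Sketch of the minimum state match for an EPA capitalization grant.
# The 20 percent rate is from the report; the grant amounts are hypothetical.

def minimum_state_match(capitalization_grant, match_rate=0.20):
    """Return the minimum state contribution required for a capitalization grant."""
    return capitalization_grant * match_rate

# A hypothetical $10 million capitalization grant requires a $2 million match;
# as the report notes, states may contribute more than this minimum.
required = minimum_state_match(10_000_000)
```

States that leverage their revolving funds, as 19 states reported doing, raise additional money on top of this minimum by issuing bonds backed by the funds.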
Unlike federal agencies, which provided assistance mostly in the form of grants, states provided more financial assistance for water infrastructure in the form of loans—$5.7 billion versus $3.4 billion for grants. The states reported a total of 56 state-sponsored grant programs, 29 state-sponsored loan programs, and 35 state-sponsored programs that include loans and/or grants. (Detailed information that the states provided on their programs can be found in app. II.) Figure 8 shows the amounts provided under state grant and loan programs over the 10-year period. States provided funds through a variety of grant programs. Drinking water facilities received $767 million, wastewater facilities received $2.0 billion, and grant programs that funded both drinking water and wastewater projects received $585 million. There were 56 state-sponsored grant programs for drinking water and/or wastewater infrastructure. Some examples follow: Alaska offered municipal construction grants that provided engineering assistance for water and sewer projects. Connecticut offered grants of 20 to 50 percent of the costs for nitrogen removal through its Clean Water Fund. Maine offered grants to help financially stressed communities replace and upgrade wastewater treatment facilities. Minnesota offered special infrastructure grants to construct phosphorus removal facilities. Missouri provided special infrastructure grants for water and sewer improvements in its state parks. States responding to our survey indicated that, apart from the state revolving loan program, they made loan commitments of about $5.7 billion under state-sponsored loan programs to help meet drinking water and wastewater facility needs. The states committed $3.9 billion under programs specifically targeted at wastewater or drinking water facilities. In addition, states committed $1.8 billion under other state-sponsored programs in which drinking water or wastewater infrastructure was among the types of projects eligible for funding.
Twenty-nine states reported having state loan programs that covered a variety of specific purposes. Some examples follow: Delaware offered low-interest loans to wastewater and water utilities to expand and upgrade wastewater and drinking water infrastructure. Mississippi offered emergency loans for making necessary repairs to existing water pollution control systems or drinking water systems, and offered loans to finance capital improvements such as water, sewer, and access roads; to fund improvements needed to implement projects by private companies; and to promote economic growth. Ohio offered loans for the emergency remediation of drinking water contamination threats, as well as loans for preliminary engineering plans and other costs associated with wastewater and publicly owned drinking water facilities. The state also provided interest-free loans to pay a portion of sewer or drinking water line extension project costs, which would otherwise have been paid by assessments on agricultural land. Tennessee offered loans to cities and utility districts that must relocate utilities because of road projects. The states reported making available about $4.4 billion for loans from revenues stemming from general obligation and state revenue bonds. Fifteen of the 46 states reported using bond issues, including $3.3 billion in state revenue bonds and $1.1 billion in general obligation bonds. There were nine wastewater bond programs, eight drinking water bond programs, and seven bond programs that covered both wastewater and drinking water projects. Some examples follow: Virginia's Resource Authority program issues bonds to provide financing to local governments for wastewater and drinking water projects. Texas' Water Development Fund funds a variety of regional water supply, wastewater, and flood control projects through general obligation funds. We provided a draft of this report to EPA for its review and comment.
We received comments from several senior officials within EPA's Office of Water, including the Director, Drinking Water Protection Division, Office of Ground Water and Drinking Water; and the Director, Municipal Support Division, Office of Wastewater Management. They generally agreed with the report's findings, but suggested that we present the 10-year summary of federal and state financial assistance in constant dollars, rather than current dollars. We originally presented the data in current dollars because (1) the data include a mixture of appropriations, obligations, and expenditures and (2) our primary purpose is to present aggregate data, not trends. However, for greater comparability among the annual figures, we adjusted the financial assistance data to constant dollars. EPA also provided us with technical corrections and clarifications, which we incorporated in the report as appropriate. To identify the federal departments and agencies that provide financial assistance for water and wastewater infrastructure, we reviewed the literature, documents, program information, and past reports that address infrastructure assistance. We identified nine departments and agencies that provide financial assistance for water and wastewater infrastructure: EPA, USDA, HUD, Commerce, the Appalachian Regional Commission, the Federal Emergency Management Agency, the Department of the Interior, the Small Business Administration, and the U.S. Army Corps of Engineers. We asked officials at these departments and agencies for data on the amount of financial assistance provided for drinking water and wastewater infrastructure for fiscal years 1991 through 2000. We contacted officials to clarify any questions about the data and to obtain additional information as needed. Where possible, we verified the information we obtained with data from other, independent sources.
To identify the amounts and sources of state funds, we mailed a survey to all 50 states asking for information on their financial contributions to drinking water and wastewater capital improvements and on their state-sponsored programs. Prior to designing the survey, we spoke with officials from industry associations, EPA, and the U.S. Bureau of the Census to determine the data available from other sources. In developing the survey, we talked with officials from EPA and pretested the survey in several states. We directed this survey to officials from state offices responsible for the Clean Water State Revolving Fund and asked them to coordinate with other state officials to complete the survey. We received responses from 46 states. To check the reliability of the state data, we called states to clarify any data items that appeared to be questionable or inconsistent. In addition, we compared the annual amounts that the states reported as contributions to their Clean Water and Drinking Water State Revolving Funds with similar data available from EPA's National Information Management System database. While we did not independently verify the accuracy or reliability of EPA's data, we noted that some of the annual amounts differed between the two sources. In total, the states reported to us contributions totaling about 20 percent less to the Clean Water State Revolving Fund and about 20 percent more to the Drinking Water State Revolving Fund than indicated by the data in EPA's information system. Because GAO's survey data stemmed from only 46 states while EPA's data included all 50 states as well as Puerto Rico, some differences in the data were expected. We sorted our data by state fiscal year and report annual figures and cumulative totals in that manner. Finally, we present 10-year summary data on federal and state financial assistance in constant year 2000 dollars. We used the Gross Domestic Product (GDP) chain-type price index to adjust for inflation.
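The constant-dollar conversion described above rescales each year's amount by the ratio of the base-year index value to that year's index value. A minimal sketch follows; the index values shown are illustrative placeholders, not the actual GDP chain-type price index series used for the report.

```python
# Sketch of restating current-dollar amounts in constant year 2000 dollars
# using a chain-type price index. Index values below are illustrative only.

def to_constant_dollars(amount, index_year, index_base):
    """Convert a current-dollar amount to base-year (constant) dollars."""
    return amount * index_base / index_year

# Hypothetical index values, with base year 2000 set to 100.0.
price_index = {1991: 81.6, 1995: 90.9, 2000: 100.0}

# $50 million reported in 1991, restated in year 2000 dollars.
restated = to_constant_dollars(50.0, price_index[1991], price_index[2000])
```

Summing such restated annual amounts, rather than the raw current-dollar figures, is what makes the 10-year totals comparable across years.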
A copy of our survey is included in appendix III of this report. We conducted this review from January 2001 through November 2001 in accordance with generally accepted government auditing standards. As we agreed with your office, unless you announce its contents earlier, we plan no further distribution of this report until 30 days from the date of this letter. We will then send copies to the Administrator, EPA, and make copies available to others who request them. If you or your staff have questions about this report, please call me at (202) 512-3841. Key contributors to this report are listed in appendix IV. Dollars in millions (in constant year 2000 dollars) Legend: SRF = State Revolving Fund Numbers may not add to totals due to rounding. Earmarks are grants for water and wastewater projects specifically designated in the appropriations process. Depending on the level at which we were able to identify the amounts provided (or used) specifically for drinking water and wastewater infrastructure, the amounts presented in this appendix generally represent appropriated, obligated, and expended amounts as follows: USDA's Rural Utilities Service – appropriated amounts except that the amounts for loans and loan guarantees represent the face value. (Because most of the loans are repaid, the actual federal outlay is significantly less than the face value.) Purpose To provide partial grants and engineering assistance to larger communities for water and sewer projects. To improve sanitation conditions in rural and native Alaska villages through planning and construction grants and engineering/technical assistance. Alaska's Village Safe Water Program, in existence since the early 1970s, offers assistance to community systems that provide piped or haul services. To assist regional housing authorities in supplementing housing projects that have been approved for development with HUD Indian Housing Development Funds.
Grants are limited to 20 percent of HUD's total development cost per project and can only be used for (1) the cost of on-site water and sewer facilities; (2) the construction of roads to project sites; (3) electrical distribution facilities; and (4) energy-efficient housing design features. Supplements are needed to offset the higher costs for materials in Alaska. To provide assistance to local agencies in the acquisition and construction of water conservation and groundwater recharge projects, development of new local water supplies, and infrastructure and reliability projects. To correct domestic water system deficiencies to enable systems to meet drinking water standards mandated under the California Safe Drinking Water Act. To provide funding to public water systems facing financial hardship due to the cost of treatment or source replacements of their water sources. To provide construction loans for wastewater construction to assist agencies in financial hardship. To provide grant and loan funding to municipalities for design and construction of projects to recycle water. To provide grants for planning, redesigning, and constructing wastewater facilities in small communities. The financing program for the small water resources projects (SWRP) provides an economical source of capital for expansion and rehabilitation of existing public water systems in Colorado. Under this program the authority provides loans that appreciably lower the costs of borrowing for those municipal governments and special districts having a population greater than 1,000 or a customer base of at least 650 taps. Loan amounts can range from $300,000 to $25,000,000. Individual construction loans up to $25,000,000 do not require approval from the general assembly (as do other authority water supply loans greater than $25,000,000). These loans are fully insured by a private bond insurance company and are issued without using the moral obligation of the state.
The SWRP is funded solely by the authority. To provide loans to construct or rehabilitate raw water infrastructure such as dams, diversion structures, wells, raw water pipelines, ditches, and canals. The purchase of water rights and land can be financed as part of the project. Purpose To assist political subdivisions of the state of Colorado that are socially or economically impacted by the development, processing, and energy conversion of minerals and mineral fuels. To provide assistance for planning, design, and construction of eligible drinking water treatment and distribution system projects serving populations of 5,000 or less. To provide financial assistance to governmental agencies for planning, design, and construction of eligible wastewater projects serving populations of 5,000 or less. To provide loans to small domestic drinking water systems before the federal law was finalized; served as an interim program. To correct water supply problems in wells contaminated by industrial wastes. The program no longer requires matching funds. To provide infrastructure grants. To provide grants for the removal of nitrogen from wastewater. Grants of 20, 30, and 50 percent for all projects, nitrogen removal, and CSO, respectively. To provide low-interest loans and/or grants to wastewater and water utilities for infrastructure expansion and upgrades. To restore or replace contaminated private drinking water wells, often by installing a connection to a public water system. Transmission and distribution lines needed to bring a public water system to areas not previously served are also covered. Approximately half of the annual $4 million budget is used to install transmission and distribution lines. To construct and financially assist water projects, including the rehabilitation, improvement, or extension of existing systems or facilities. To provide financial assistance to qualifying entities for the construction of wastewater treatment facilities.
To assist community development projects ranging from park projects to water and sewer projects. To assist community development projects for water and wastewater. To address legislative directives. Purpose To provide economic development, infrastructure, and environmental facilities to political subdivisions. To provide debt service reimbursement to communities that constructed drinking water treatment plants before the creation of the drinking water SRF program. State provides 50 percent reimbursement of the principal portion of debt service payments for a 10-year period. To help local governments upgrade wastewater treatment plants with BNR-advanced treatment. Funds are provided under a 50/50 cost-share agreement as part of the Chesapeake Bay nutrient reduction strategy. To help local governments upgrade wastewater infrastructure. Funds are provided for priority sewerage projects needed to address public health/water quality problems in financially disadvantaged communities. To provide state grant funds to assist local governments in upgrading their drinking water infrastructure. Funds are provided for priority drinking water projects that address public health problems in financially disadvantaged communities. To help financially stressed communities replace and upgrade wastewater treatment facilities. To help towns replace malfunctioning subsurface disposal systems that pollute water bodies or create a public nuisance. To help unsewered communities construct wastewater collection and treatment systems. To fund municipal wastewater treatment facilities. This program is no longer in effect. To provide grants to metro area municipalities for a sewer separation program. Program is no longer in effect. To provide supplemental grant assistance to hardship communities, augmenting available CWSRF loans and USDA rural development funds. To make funds available to wastewater SRF loan recipients at 15 percent of eligible costs. No description given.
To build phosphorus removal facilities and appurtenances in the Table Rock Lake watershed. No description given. To provide water and sewer improvements at Missouri's state parks. To assist municipalities, counties, public water and sewer districts, political subdivisions or instrumentalities of the state in the planning, design, and construction of wastewater systems. To assist municipalities, counties, public water and sewer districts, political subdivisions or instrumentalities of the state in the planning, design, and construction of wastewater systems. Purpose To enhance the state's water quality. To assist disadvantaged communities with planning, designing, and building wastewater systems. To assist disadvantaged communities with planning, designing, and building wastewater systems. No description given. Drinking Water Systems Emergency Loan Fund (DWSELF): To make emergency loans to eligible applicants: (1) to make necessary repairs to existing drinking water systems to meet the emergency; (2) to complete construction needed to provide a permanent correction to the problems that caused the emergency; (3) to cover reasonable administration costs of the DWSRF program and/or the federal DWSRF program, and conducting activities under the local governments and rural water systems improvements revolving loan program act; and (4) to earn interest on fund accounts. Industrial Development Grant: To help counties or cities finance small infrastructure projects to promote economic growth. To provide low-interest loans to counties and municipalities for capital improvements, such as water, sewer, and access roads. To make loans to eligible applicants for emergency projects for the purpose of making necessary repairs to existing water pollution control systems to meet the emergency. To offer low-interest loans to counties or incorporated cities to finance improvements necessary to implement private company investments.
To provide funding for the planning, design, and construction of renewable resource projects. To help solve serious health and safety problems and to assist communities with the financing of public facilities projects, including the construction or repair of drinking water systems, wastewater treatment facilities, sanitary or storm sewer systems, solid waste disposal and separation systems, and bridges. Loans, though available, have never been used. To help communities with a population less than 50,000 meet their greatest community development needs. All projects must principally benefit low- and moderate-income persons. The basic categories of local community development projects are: (a) economic development, (b) housing, (c) public facilities, (d) planning grants. Montana Department of Commerce (MDOC) administers the program. Public facilities projects can include conventional community facilities such as water, sewer, and solid waste. Projects can also include those designed to principally serve low- and moderate-income persons (e.g., head start centers, mental health centers). The program is a federally funded competitive grant program. Purpose To provide small towns with populations of 800 or less with a grant of up to 50 percent of the project cost in conjunction with a CWSRF loan. To assist units of local government with no wastewater collection or treatment systems. Applicant must demonstrate a critical public health or environmental threat. Additional eligibility criteria apply. Funding covers 90 percent of project cost up to $3 million. To provide low-interest loans and grants to qualifying applicants to address water and wastewater infrastructure needs. To provide up to $400,000 for units of local government for the construction of water or wastewater projects. Applicant must demonstrate a critical public health, environmental, or economic development need. Funding can be used as the local match to other state/federal programs.
To provide up to $40,000 for units of local government for the preliminary planning of water and/or wastewater systems. Projects might include preliminary engineering reports, capital improvement plans, feasibility studies, sewer/water rate studies. Applicant must demonstrate a critical environmental, public health, or economic development need. Water Supply Development: To assist water supply systems in addressing water quantity and quality problems. Wastewater State Aid Grant Program: To provide 20 percent to 30 percent grants in the form of loan subsidies for wastewater infrastructure projects. State Drinking Water Filtration Grant Program: To help public water systems comply with the surface water treatment rules of the federal Safe Drinking Water Act of 1986. CSO Program: To fund planning and design of projects to eliminate combined sewer overflow problems. To accommodate existing and future needs in the 23 designated Pinelands Regional Growth Areas. Safe Drinking Water Program (Water Supply Bond Loan Program): To fund drinking water projects. The SRF program has replaced this program. Loans made through 2000 are the final loans from the state program. Special Appropriations Program: To administer and provide construction oversight for projects funded by legislative special appropriations. Colonias Waste Water Construction Program: To provide funding for projects intended to construct or improve wastewater treatment and disposal in the Colonias communities of New Mexico. To provide low-interest loans and limited grant funds to build water supply and wastewater systems for communities with populations less than 10,000. To assist local entities with construction or rehabilitation of water projects within a particular service area. Projects must receive legislative authorization within 3 years of NMFA board approval.
Public Project Revolving Fund: To provide low-cost, low-interest rate loans to local government entities for infrastructure and capital equipment purchases. AB198 Grant Program: To provide grants to public purveyors of water to pay for the costs of capital improvements to publicly owned community water systems and publicly owned nontransient water systems, as required or made necessary by the state health board, or made necessary by the Safe Drinking Water Act. Funds are available only to communities of less than 6,000 residents. Village Capital Improvement Fund (VCIF): To help villages finance preliminary engineering plans, detailed engineering plans, feasibility studies, and legal costs incurred for planning phases of wastewater and/or public drinking water facilities. VCIF is a partially interest-free loan program. Ohio Water and Sewer Rotary Commission: To provide no-interest loans to pay that portion of the cost of a sewer or drinking water line extension project, which otherwise would have been paid by assessments on agricultural land. To provide loans to local government agencies to build the wastewater systems, drinking water systems, and solid waste disposal facilities needed to comply with water pollution control standards. To help municipal corporations, counties, townships, and regional sewer/water districts maintain operations and adequate wastewater, drinking water, and solid waste disposal facilities. Drinking Water Emergency Loan Fund (DWELF): To provide emergency loans to owners or operators of public drinking water systems for emergency remediation of a "threat of contamination"—anything that prevents a public water system from supplying adequate quantities of safe, potable water to existing users. Emergency Grant Program: To provide financial assistance for a wastewater or drinking water emergency, which is defined as a life-, health-, or property-threatening situation. There is a maximum of $100,000 per project per applicant during any fiscal year. Each applicant is subject to a "grant priority evaluation policy" that considers the following: nature of the emergency, water and sewer rates, monthly debt payment, local contribution, median household income, and benefit to other systems, among others. Contract Financial Assistance: To fund special drinking water and wastewater infrastructure projects that are approved by the legislature. Rural Economic Action Plan "REAP": To assist communities of 7,000 or less with their wastewater or drinking water systems. Maximum amount is $150,000. Applicants are subject to a specific "REAP grant priority evaluation policy." Criteria considered include: population, water and sewer rates, indebtedness per customer, median household income, applicant's ability to finance the project, amount of grant requested, previous grant assistance, and enforcement orders. To assist communities with drinking water and wastewater infrastructure. Funds provided for this program come from bonds. Sewer Assessment Deferral Loan Program (SADLP): To provide financial assistance to property owners who would experience financial hardship as a result of the mandatory connection to the public sewer system. Property owners acquire loans through local communities. To provide financing to construct and improve public drinking water systems and public wastewater collection systems. To provide financial assistance to municipalities for constructing, improving, and repairing those facilities that are essential for supporting economic activity which stimulates and promotes employment opportunities in Oregon. To fund drinking, waste, and storm water system construction. To fund drinking, waste, and storm water system construction. To fund drinking, waste, and storm water system construction. To fund wastewater system construction. To fund drinking, waste, and storm water system construction. To fund drinking, waste, and storm water system construction.
To finance projects not meeting the requirements of federal programs. To construct publicly owned water and sewer facilities. To provide assistance for water and sewer-related purposes. To provide assistance during environmental emergencies that require immediate and comprehensive response from the state to deter pollution and protect public health. To provide state financial and technical assistance to large, costly water-related projects that need significant state cost-share participation and most likely require significant federal financing and federal authorization. Projects are designated for this program through a legislative process. To assist a wide variety of drinking water, wastewater, storm water, groundwater protection, and watershed restoration projects. Most of the assistance is in the form of grants. To assist local governments with their water, wastewater, and solid waste projects. Utility Relocation Loan Program (URLP): To assist cities and utility districts that must relocate utilities due to Tennessee Department of Transportation (TDOT) road projects. To assist local governments and businesses with infrastructure improvements and job-specific training. To develop adequate water supply and wastewater facilities in certain areas without such facilities or where the financial resources of the residents were inadequate to meet those needs. The program can fund the construction, acquisition, or improvements to water supply and wastewater collection and treatment works, including all necessary engineering work. The program will not fund ongoing operation and maintenance expenses and applies only to areas of the state meeting the definition of an "economically distressed area," primarily in counties along the Texas/Mexico border. To encourage the optimum development of major regional water supply, wastewater, or flood control projects where local interests could not afford such projects at the time the assistance is provided.
The state provides the assistance by financing and owning up to 50 percent of the regional facilities until such time as sufficient growth occurs for the state's interest to be bought out by the regional participants. Funds for the program come from the sale of general obligation bonds for which the legislature has pledged to pay all or part through draws on general revenue funds while the state retains partial ownership of the facilities. Recipients of assistance are required to purchase the state's interest when they begin using any portion of the project capacity owned by the state. To help communities develop water supplies. The program has evolved over the years, adding additional authority to fund wastewater and flood control projects. Current development fund loans are made to political subdivisions for water supply, water pollution control, and flood control projects. Recent assistance for these programs has averaged approximately $70 million per year. In addition, approximately $25 million per year of these general obligation bond funds is used to supply the state match for the CWSRF and DWSRF programs. Board of Water Resources: To provide low-cost financing to develop Utah's water resources. Water Quality Project Assistance Program: To help political subdivisions of the state fund water quality projects. To assist state agencies or subdivisions that are or may be socially or economically affected, directly or indirectly, by mineral lease development on federal lands. Purpose To provide low-cost financing for public drinking water system infrastructure projects. To help Virginia localities meet the local share requirement of federal earmark grants. To help local governments fund wastewater and drinking water projects. To provide technical and financial assistance to local governments and individuals to help them control point-source pollution, primarily nitrogen and phosphorus.
To provide financial assistance for needed improvements to publicly owned water systems. 35 percent of the grants were provided for new or improved source, storage, treatment, or transmission main facilities. To abate (1) direct discharges of untreated or improperly treated domestic wastes (35 percent state grant; 65 percent SRF); (2) combined sewer/wet weather overflow (25 percent state grant; 50 percent SRF loan; 25 percent other); (3) phosphorus removal; and (4) sludge and septage (50 percent state grant; 50 percent SRF). To provide low-interest loans to local governments for the repair, replacement, and rehabilitation of infrastructure. To assist public water system utilities in upgrading existing systems and keeping pace with the increasing demands placed upon them. Program is now being phased out. To provide financial assistance for the acquisition, construction, and improvement of public waste facilities. Program is now being phased out. To assist local governments and tribes with the planning, design, acquisition, construction, and improvement of water pollution control facilities and related activities. Program will continue until 2021. To provide for the planning, design, acquisition, construction, and improvement of public waste disposal and management facilities. Program is now being phased out. To correct violations of the drinking water standards. To construct wastewater and water facilities for public utilities. To fund public service facilities owned by the applicant and available for use by the general public, including water and sewer projects, storm drainage projects, street and road projects, solid waste disposal projects, acquisition of emergency vehicles, public administration buildings, health care facilities, senior citizens’ centers, jail and detention facilities, facilities needed to provide services to the disabled, and similar facilities as authorized by the Board. 
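The grant/SRF cost-share splits described above lend themselves to a simple worked example. The sketch below is illustrative only: the project amount is hypothetical, and the split fractions are taken from the combined sewer/wet weather overflow description (25 percent state grant; 50 percent SRF loan; 25 percent other).

```python
# Hypothetical illustration of the state grant / SRF cost-share splits
# described above, applied to an example project budget.
def funding_split(total, shares):
    """Return dollar amounts for each named share of a project budget.

    `shares` maps funding source -> fraction; fractions must sum to 1.
    """
    assert abs(sum(shares.values()) - 1.0) < 1e-9
    return {source: total * frac for source, frac in shares.items()}

# Combined sewer/wet weather overflow abatement, applied to a
# hypothetical $10 million project:
split = funding_split(10_000_000,
                      {"state grant": 0.25, "SRF loan": 0.50, "other": 0.25})
# state grant: $2,500,000; SRF loan: $5,000,000; other: $2,500,000
for source, amount in split.items():
    print(f"{source}: ${amount:,.0f}")
```

The same helper applies to the other splits listed (e.g., 35/65 for direct discharges, 50/50 for sludge and septage) by changing the fractions.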
Key contributors to this report include Terri Dee, Les Mahagan, Diane Raynes, Laura Shumway, and Lisa Turner. Also making significant contributions were Christine Bonham, June Foster, Karen Keegan, and Jonathan McMurray.
|
U.S. drinking water and wastewater systems encompass thousands of treatment and collection facilities and more than a million miles of pipes and conduits. The estimated cost to repair, replace, or upgrade aging facilities; accommodate the nation's growing population; and meet new water quality standards ranges from $300 billion to $1 trillion over the next 20 years. Although user rates are the major source of facilities' financing, federal and state government agencies also offer financial support. From fiscal years 1991 through 2000, nine federal agencies provided $44 billion for drinking water and wastewater capital improvements. Four agencies--the Environmental Protection Agency and the Departments of Agriculture, Housing and Urban Development, and Commerce--accounted for about 98 percent of that amount. State governments made $25 billion available for water infrastructure programs during the past 10 years.
|
One of DOE’s strategic goals is to promote a diverse supply of reliable, affordable, and environmentally sound energy. To that end, DOE is promoting further reliance on nuclear energy under the administration’s National Energy Policy. According to DOE officials, the department has three priorities for promoting nuclear energy. The first priority is deploying new advanced light water reactors under the Nuclear Power 2010 program. The second priority is the Global Nuclear Energy Partnership, launched in February 2006. The partnership’s objectives are to demonstrate and deploy new technologies to recycle nuclear fuel and minimize nuclear waste, and to enable developing nations to acquire and use nuclear energy while minimizing the risk of nuclear proliferation. The third priority is R&D on the Next Generation Nuclear Plant. According to DOE officials, the department remains committed to this project even though the Global Nuclear Energy Partnership has assumed a higher priority. DOE is engaged in R&D on the Next Generation Nuclear Plant as part of a larger international effort to develop advanced nuclear reactors (Generation IV reactors) that are intended to offer safety and other improvements over the current generation of nuclear power plants (Generation III reactors). DOE coordinates its R&D on advanced nuclear reactors through the Generation IV International Forum, chartered in 2001 to establish a framework for international cooperation in R&D on the next generation of nuclear energy systems. In 2002, the Generation IV International Forum (together with DOE’s Nuclear Energy Research Advisory Committee) identified what it considered the six most promising nuclear energy systems for further research and potential deployment by about 2030. 
DOE has selected one of the six advanced nuclear systems—the very-high-temperature reactor—as the design for its Next Generation Nuclear Plant, in part because it is considered to be the nearest-term reactor design that also has the capability to produce hydrogen. According to DOE officials, the very-high-temperature reactor is also the design with the greatest level of participation among the Generation IV International Forum members. Furthermore, the very-high-temperature reactor builds on previous experience with gas-cooled reactors. For example, DOE conducted R&D on gas-cooled reactors throughout the 1980s and early 1990s, and two gas-cooled reactors have previously been built and operated in the United States. The basic technology for the very-high-temperature reactor also builds on previous efforts overseas, in particular high-temperature gas-cooled reactor technology developed in England and Germany in the 1960s, and on technologies being advanced in projects at General Atomics in the United States, the AREVA company in France, and at the Pebble Bed Modular Reactor company in South Africa. In addition, Japan and China have built small gas-cooled reactors. DOE has developed a schedule for the R&D, design, and construction of the Next Generation Nuclear Plant that is intended to meet the requirements of the Energy Policy Act of 2005, which divides the project into two phases. For the first phase, DOE has been conducting R&D on fuels, materials, and hydrogen production. DOE also recently announced its intent to fund several studies on preconceptual, or early, designs for the plant. DOE plans to use the studies, which are expected to be completed by May 2007, to establish initial design parameters for the plant and to further guide R&D efforts. DOE is planning to begin the second phase in fiscal year 2011 by issuing a request for proposal that will set forth the design parameters for the plant. 
If R&D results at that time do not support the decision to proceed, DOE may cancel the project. Assuming a request for proposal is issued, DOE is planning to choose a design by 2013 from among those submitted by reactor vendors. Construction is scheduled to begin in fiscal year 2016, and the plant is expected to be operational by 2021. In addition, DOE is planning for the appropriate licensing applications for the plant to be submitted for NRC review and approval during the second phase of the project. See figure 1 for the overall Next Generation Nuclear Plant project schedule. As scheduled by DOE, the Next Generation Nuclear Plant project is expected to cost approximately $2.4 billion, part of which is to be funded by industry. According to DOE officials, the department budgeted about $120 million for the project from fiscal years 2003 through 2006. This amount includes about $80 million for R&D on the nuclear system of the plant and about $40 million for R&D on the hydrogen production system. Initial research results since DOE initiated R&D on the Next Generation Nuclear Plant project in 2003 have been favorable, but the most important R&D has yet to be done. For example, DOE is planning a series of eight fuel tests in the Advanced Test Reactor at Idaho National Laboratory. Each test is a time-consuming process that requires first fabricating the fuel specimens, then irradiating the fuel for several years, and finally conducting the postirradiation examination and safety tests. DOE is at the beginning of the process. In particular, DOE officials said they have successfully fabricated the fuel for the first test and addressed previous manufacturing problems with U.S. fuel development efforts in which contaminants weakened the coated particle fuel. However, the irradiation testing of the fuel in the Advanced Test Reactor has not yet begun. The first test is scheduled to begin early in fiscal year 2007 and to be completed in fiscal year 2009. 
The eighth and final test is scheduled to begin in fiscal year 2015, and the fuel testing program is scheduled to conclude in fiscal year 2019. As a result, DOE will not have the final results from all of its fuel tests before both design and construction begin. While DOE has carefully planned the fuel tests and expects favorable results, a DOE official acknowledged that they do not know if the fuel tests will ultimately be successful. DOE is also at the beginning stages of R&D on other key project areas such as the hydrogen production system for the plant and materials development and testing. For example, Idaho National Laboratory successfully completed a 1,000-hour laboratory-scale test of one of two potential hydrogen production systems in early 2006. DOE ultimately plans to complete a commercial-scale hydrogen production system for demonstration by fiscal year 2019, which will allow time to test the system before linking it to the very-high-temperature reactor. DOE also has selected and procured samples of graphite—the major structural component of the reactor core that will house the nuclear fuel and channel the flow of helium gas—and designed experiments for testing the safety and performance of the samples. Nevertheless, much of the required R&D for the graphite has not yet begun and is not scheduled to be completed until fiscal year 2015. Regarding licensing of the plant, DOE and NRC are in the process of finalizing a memorandum of understanding that will establish a framework for developing a licensing strategy. As required by the Energy Policy Act of 2005, DOE and NRC are to jointly submit a licensing strategy by August 2008. NRC has drafted a memorandum of understanding and submitted it to DOE, but its approval has been delayed by additional negotiations on details of the agreement. Nevertheless, NRC has already taken certain other actions to support licensing the Next Generation Nuclear Plant. 
In particular, NRC has been developing a licensing process that could be used for advanced nuclear reactor designs and that would provide an alternative to its current licensing framework, which is structured toward light water reactors. In addition to developing a licensing strategy, NRC will need to enhance its technical capability to review a license application for a gas-cooled reactor, such as the Next Generation Nuclear Plant. In 2001, NRC completed an assessment of its readiness to review license applications for advanced reactors. The assessment identified skill gaps in areas such as accident analysis, fuel, and graphite, which apply to gas-cooled reactors. Furthermore, NRC identified a “critical” skill gap in inspecting the construction of a gas-cooled reactor. As a result of its 2001 assessment, NRC issued a detailed plan in 2003 to address the gaps in expertise and analytical tools needed to license advanced reactors, including gas-cooled reactors. However, NRC has since taken limited steps to enhance its technical capabilities related to gas-cooled reactors because, until recently, it had not anticipated receiving a license application for a gas-cooled reactor. DOE is beginning to obtain input from potential industry participants that would help DOE determine its approach to ensuring the commercial viability of the Next Generation Nuclear Plant. In the interim, DOE is pursuing a more technologically advanced approach—with regard to size, fuel type, and the coupling of electricity generation and hydrogen production in one plant—compared with the recommendations of the Independent Technology Review Group and the Nuclear Energy Research Advisory Committee. These technological advances require substantial R&D on virtually every major component of the plant. For example, the advanced uranium fuel composition that DOE is researching is not proven and requires fundamental R&D. 
The Independent Technology Review Group cautioned that attempting to achieve too many significant technological advances in the plant could result in it becoming an exercise in R&D that fails to achieve its overall objectives, including commercial viability. Another key factor likely to affect the plant’s commercial viability is the time frame for its completion. For example, the plant’s commercial attractiveness could be affected by competition with other high-temperature gas-cooled reactors under development and potentially available sooner, such as one in South Africa, although these other reactor designs would also need to be licensed by NRC before being deployed in the United States. DOE acknowledges the risk of designing and building a plant that is not commercially viable and has taken initial steps to address this challenge. For example, DOE has established what it considers to be “aggressive but achievable” goals for the plant, such as producing hydrogen at a cost low enough to be competitive with gasoline. Furthermore, DOE is beginning to obtain industry input to help the department develop an approach for ensuring the commercial viability of the plant. DOE initiated two efforts in July 2006 to obtain input from industry on the design of the plant and the business considerations of deploying the plant. Specifically, DOE announced its intent to fund multiple industry teams to develop designs (and associated cost estimates) for every aspect of the plant, including the reactor and hydrogen production technology, by May 2007. In addition, DOE began participating in meetings with representatives from reactor vendors, utilities, and potential end users in order to obtain their insight into the market conditions under which the plant would be commercially viable. 
Until DOE develops a better understanding of the business requirements for the Next Generation Nuclear Plant, DOE is conducting R&D to support two distinct designs of the very-high-temperature reactor—pebble bed and prismatic block—rather than focusing on one design that may ultimately be found to be less commercially attractive. As recommended by the Independent Technology Review Group, DOE revised its R&D plans to lessen the technological challenges of designing and building the Next Generation Nuclear Plant. Most importantly, it reduced the planned operating temperature of the reactor from 1,000 degrees Celsius to no more than 950 degrees Celsius. According to Idaho National Laboratory officials, this small reduction is significant because it enables DOE to use existing metals rather than develop completely new classes of materials. DOE, however, has not adopted other recommendations—in particular to revise its R&D plans to focus on a uranium dioxide fuel kernel, which has been more widely used and researched than the advanced uranium oxycarbide fuel kernel DOE is currently researching. The Independent Technology Review Group considered DOE’s fuel R&D plan on an advanced uranium fuel composition more ambitious than necessary and concluded that focusing on the more mature fuel technology would reduce the risk of not meeting the schedule for the plant. Nevertheless, DOE has continued to focus on the advanced uranium oxycarbide fuel because it has the potential for better performance. DOE officials also told us that the most significant challenge with regard to the fuel is not its composition but rather the coatings, which are independent of the fuel kernel composition. To respond to the recommendation, DOE decided to test the performance of the two types of fuel kernels side-by-side as part of its fuel R&D plan. 
The Nuclear Energy Research Advisory Committee also recommended that DOE re-evaluate the project’s dual mission of demonstrating both electricity and hydrogen production. Although the advisory committee did not recommend what the project’s focus should be—electricity generation or hydrogen production—it wrote that the dual mission would be much more challenging and require more funding than either mission alone. Instead, DOE’s R&D is currently supporting both missions, and DOE officials said they consider the ability to produce hydrogen (or to use process heat for other applications) key to convincing industry to invest in the Next Generation Nuclear Plant rather than advanced light water reactors similar to the current generation of nuclear power plants operating in the United States. Moreover, a key Nuclear Energy Research Advisory Committee recommendation was to accelerate the project and deploy the plant much earlier than planned by DOE in order to increase the likelihood of participation by industry and international partners. Representatives of the Nuclear Energy Institute, which represents utilities that operate nuclear power plants, also told us that accelerating the project would increase the probability of successfully commercializing the plant. As one possible approach to acceleration, the advisory committee further recommended that DOE design the Next Generation Nuclear Plant to be a smaller reactor that could be upgraded and modified as technology advances. However, DOE officials consider the advisory committee’s schedule high risk and doubt that the degree of acceleration recommended could be achieved. Furthermore, according to DOE officials, a smaller reactor would require the same R&D as a larger reactor but would not support future NRC licensing of a full-scale plant, which is critical to the plant’s commercial viability. 
Idaho National Laboratory officials also consider the schedule proposed by the advisory committee to be high risk, potentially resulting in the need to redo design or construction work. Nevertheless, the laboratory has also proposed accelerating the schedule, though to a lesser extent than recommended by the advisory committee. According to laboratory officials, if DOE does not begin design sooner than currently planned, too much R&D and design work will be compressed into a short time frame after DOE begins design in fiscal year 2011, and the department will not be able to complete the plant by fiscal year 2021. Consequently, the laboratory has proposed beginning design earlier than planned by DOE, which would also reduce the scope of the R&D by focusing on fewer design alternatives. The laboratory’s proposed schedule would result in completing the plant up to 3 years earlier than under DOE’s schedule. While the laboratory’s proposed schedule would slightly reduce the project’s total cost estimate, it would require that DOE provide more funding in the near term. For example, in fiscal year 2007, Idaho National Laboratory estimates that R&D on the very-high-temperature reactor design would need to be increased from $23 million (the amount requested by DOE in its fiscal year 2007 budget submission) to $100 million. DOE officials believe that the laboratory’s current proposed schedule is the best option for the plant and stated that they would consider accelerating it if there were adequate funding and sufficient demand among industry end users to complete the project sooner. In addition, DOE officials said that even if the schedule is not accelerated, increasing the funding for the project would enable additional R&D to be conducted to increase the likelihood that the plant is completed by fiscal year 2021. 
For example, DOE officials stated that the department’s current R&D plans for the very-high-temperature reactor design could support doubling the department’s fiscal year 2007 budget request of $23 million. However, DOE has limited funding for nuclear energy R&D and has given other projects, such as developing the capability to recycle fuel from existing nuclear power plants, priority over the Next Generation Nuclear Plant. While DOE is making progress in implementing its plans for the Next Generation Nuclear Plant, these efforts are at the beginning stages of a long project and it is too soon to determine how successful DOE will be in designing a technically and commercially viable plant. As we note in our report, it is also too soon, in our view, to support a decision to accelerate the project. Accelerating the schedule would require that DOE narrow the scope of its R&D and begin designing the plant before having initial research results on which to base its design decisions. This could result in having to redo work if future research results do not support DOE’s design decisions. In addition, DOE has only recently begun to systematically involve industry in the project. Such input is critical to key decisions, such as whether DOE should design a less technologically advanced plant that is available sooner rather than a larger, more technologically advanced plant that requires more time to develop. Finally, DOE’s history of problems managing large projects on budget and within schedule raises concerns about the department’s ability to complete the Next Generation Nuclear Plant in the time frame set forth in the Energy Policy Act of 2005, and accelerating the schedule would only add to these concerns. Mr. Chairman, this concludes my prepared statement. I would be happy to respond to any questions that you or other Members of the Subcommittee may have. For further information about this testimony, please contact me at (202) 512-3841 or [email protected]. Raymond H. Smith Jr. 
(Assistant Director), Joseph H. Cook, John Delicath, and Bart Fischer made key contributions to this testimony. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
|
Under the administration's National Energy Policy, the Department of Energy (DOE) is promoting nuclear energy to meet increased U.S. energy demand. In 2003, DOE began developing the Next Generation Nuclear Plant, an advanced nuclear reactor that seeks to improve upon the current generation of operating commercial nuclear power plants. DOE intends to demonstrate the plant's commercial application both for generating electricity and for using process heat from the reactor for the production of hydrogen, which then would be used in fuel cells for the transportation sector. The Energy Policy Act of 2005 required plant design and construction to be completed by 2021. This testimony, which summarizes a GAO report being issued today (GAO-06-1056), provides information on DOE's (1) progress in meeting its schedule for the Next Generation Nuclear Plant project and (2) approach to ensuring the project's commercial viability. For the report, GAO reviewed DOE's research and development (R&D) plans for the project and the reports of two independent project reviews, observed R&D activities, and interviewed DOE, Nuclear Regulatory Commission (NRC), and industry representatives. DOE has prepared and begun to implement plans to meet its schedule to design and construct the Next Generation Nuclear Plant by 2021, as required by the Energy Policy Act of 2005. Initial R&D results are favorable, but DOE officials consider the schedule to be challenging, given the amount of R&D work that remains to be conducted. For example, while researchers have successfully demonstrated the manufacturing of coated particle fuel for the reactor, the last of eight planned fuel tests is not scheduled to conclude until 2019. DOE plans to initiate the design and construction phase in fiscal year 2011, if the R&D results support proceeding with the project. The act also requires that DOE and NRC develop a licensing strategy for the plant by August 2008. 
The two agencies are in the process of finalizing a memorandum of understanding to begin work on this requirement. DOE is just beginning to obtain input from potential industry participants that would help determine the approach to ensuring the commercial viability of the Next Generation Nuclear Plant. In the interim, DOE is pursuing a more technologically advanced approach, compared with other options, and DOE has implemented some (but not all) of the recommendations made by two advisory groups. For example, as recommended by one advisory group, DOE lessened the need for R&D by lowering the reactor's planned operating temperature. In contrast, DOE has not accelerated its schedule for completing the plant, as recommended by the Nuclear Energy Research Advisory Committee. The committee was concerned that the time frame for completing the plant is too long to be attractive to industry, given that other advanced reactors may be available sooner. However, DOE believes the approach proposed by the committee would increase the risk of designing a plant that ultimately would not be commercially viable. GAO believes DOE's problems with managing other major projects call into question its ability to accelerate design and completion of the Next Generation Nuclear Plant.
|
In response to an increase in the threat of potential terrorist attacks in the United States involving WMDs, Congress directed the federal government to enhance its capability to deter, prevent, respond to, and recover from terrorist attacks using such weapons. Among the resulting efforts, Congress in fiscal year 1999 approved the development of National Guard WMD CSTs. The CSTs are designed to support civil authorities in the event of a domestic WMD event by identifying WMD agents and substances, assessing current and projected consequences, advising on response measures, and assisting with appropriate requests for additional support. In describing WMD agents, DOD commonly uses the term chemical, biological, radiological, nuclear, and high-yield explosives (CBRNE). Like traditional National Guard units, the CSTs are under the day-to-day control of the governors of their respective states and territories. The CSTs can also be activated for federal service by the President, at which time they would fall under DOD command. Unlike traditional National Guard units, which generally consist of part-time soldiers who conduct regular drills, the CSTs are composed of full-time Army and Air National Guard members. Each 22-person team is divided into six sections: command, operations, communications, administration and logistics, medical/analytical, and survey. The members of the CSTs are trained in their various disciplines and operate sophisticated equipment that helps them accomplish their mission. Table 1 shows examples of some of the tasks associated with each CST section. The CSTs employ military-provided equipment that is common to active duty military units, such as chemical defense equipment and uniforms. They also use a large variety of specialized commercial equipment, such as the protective ensembles worn in the hazard zone and much of the teams’ laboratory equipment. The CSTs employ several vehicles for transporting and supporting the six sections of the team. 
Among these are two specially constructed vehicles: the Unified Command Suite, which contains a wide range of radio, data, and video communications equipment, and the Analytical Laboratory System, which contains such equipment as a gas chromatograph/mass spectrometer for organic material analysis and a gamma spectrometer for radiological material analysis as well as other laboratory support equipment. Figures 1 and 2 show the Unified Command Suite and the Analytical Laboratory System, respectively. The equipment in the Analytical Laboratory System helps the CSTs conduct a “presumptive identification” of a CBRNE sample. If requested by the incident commander, the CST then transfers a sample to a Centers for Disease Control and Prevention-approved laboratory for confirmation and official identification. NGB is responsible for managing the CST program and is the principal channel of communication between DOD and the adjutant general commanding the National Guard unit in each state. NGB also coordinates with other DOD commands and organizations to support various aspects of the CST program. For example, the joint service Chemical and Biological Defense Program conducts the acquisition process for much of the CST equipment, and the Army’s Maneuver Support Center assists in developing CST doctrine and conducting key CST-specific training. The Secretary of Defense must certify each CST as ready to execute its WMD mission. This certification involves a series of staffing, equipping, and training steps that take from 18 to 24 months. To achieve certification, each CST must complete the following steps: 1. Have the required personnel and equipment resources and be trained to undertake the full mission for which it is organized or designed. For example, at least 85 percent of assigned personnel must have completed all of their CST-specific individual training. 2. Undergo an external evaluation by Army experts according to the CST’s approved mission training plan. 3. 
Notify its adjutant general that it has completed the above steps, whereupon the adjutant general submits a request for certification to NGB, which then reviews and forwards the request to the Army Staff and to the Assistant Secretary of Defense for Homeland Defense. The Secretary of Defense makes the final determination of approval for CST certification. Although certification is a onetime event, a CST must undergo a revalidation process if it loses enough key personnel associated with command and control or with medical and assessment capabilities to substantially degrade the team’s ability to conduct its mission. In addition, each CST undergoes an external evaluation every 18 months, during which Army experts assess each team’s ability to meet specific mission standards associated with all related WMD threats. Both civil and military responders, including the CSTs, conduct WMD response operations in a three-tiered approach based on the National Response Plan and the National Incident Management System. The National Response Plan represents a comprehensive all-hazards approach intended to enhance the ability of the United States to manage domestic incidents. Fire and rescue, law enforcement, and emergency medical personnel constitute the first tier. If the extent of the event exceeds the ability of the first tier to manage the consequences of the situation, state-level civil and military forces may be activated and deployed as the second tier. If the governor determines that the forces and resources available in the state require additional support, then the governor may request assistance from the President of the United States, constituting the third tier. The CSTs are generally included in the second tier of the response. In addition to preparing to respond to WMD and catastrophic terrorist events in their respective states, the CSTs also adhere to NGB’s Response Management Plan. 
Under this plan, NGB monitors the readiness status of each certified CST to ensure that at a given time, a designated number of CSTs are always ready to respond to a national need or the need of a state without an available CST. To facilitate planning for such responses, the plan divides the nation into six response sectors, as shown in table 2. Under the Response Management Plan, the CSTs are scheduled on “bronze,” “silver,” or “gold” status on a rotating basis. At any given time, one certified team per response region is in gold status and must be ready to deploy a full CST (personnel and equipment) within 3 hours from its home station to an incident site within its region, should the need arise. At the same time, another certified team per response region is placed in silver status. While this team is in a slightly lower state of readiness, it must be prepared to assume gold status in the event the gold team is deployed. The remaining certified teams are in bronze status and are focused more on training, block leave, equipment preparation, and state-directed missions. Bronze teams must, however, be prepared to respond to incidents within their region within 72 hours and to assume silver or gold status within 48 and 96 hours, respectively. Because the CSTs are state-controlled units, the respective governors are the final deployment authority for CST missions and, unless the CSTs are federalized, they remain under the command authority of the governors and state adjutants general. The CSTs generally conduct three types of missions: response, stand-by, and assist. Response missions are deployments in support of requests from local, state, or federal agencies, such as a CST deployment to help civil authorities identify a potentially toxic chemical left by a suspected terrorist. Stand-by missions involve providing CST expertise at special events, such as the national political conventions. 
Assist missions cover a range of CST involvement, including technical assistance, reconnaissance, or assistance with CBRNE vulnerability assessments. For example, CST commanders and team members may provide technical assistance by phone to a local incident commander at a hazardous materials scene. Table 3 summarizes deployments of the CSTs for missions other than training exercises. As shown in table 3, CSTs deployed on response missions far less often than on stand-by and assist missions. The table does not show total activity by the CSTs, since the majority of their time is devoted to training in order to maintain individual and team readiness. It also may not reflect all CST deployments to assist in states affected by hurricanes in 2005. Each CST costs approximately $7.7 million to establish, or approximately $424 million to establish all 55 CSTs. This cost estimate includes initial equipment, vehicles, personnel, and training support. Sustaining each CST in these categories costs approximately $3.4 million a year, or $189 million a year to sustain all 55 teams. DOD funds the establishment and sustainment of the CST program, and NGB manages most of this funding. These estimates do not include utilities for CST facilities, which are paid by the states via a general calculation of all state facilities requirements and funded through NGB. The estimates also do not include federally funded costs for construction of CST facilities, since these costs vary widely depending on how and where the states decide to station their teams. There are also additional federal costs associated with the CST program that are not tied to specific teams.
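The program-wide totals above follow from the per-team figures, allowing for rounding. A quick check (an illustrative calculation, not drawn from the report itself) makes the arithmetic explicit:

```python
TEAMS = 55
ESTABLISH_PER_TEAM = 7_700_000  # approximate one-time cost per CST
SUSTAIN_PER_TEAM = 3_400_000    # approximate annual sustainment cost per CST

establish_total = TEAMS * ESTABLISH_PER_TEAM
sustain_total = TEAMS * SUSTAIN_PER_TEAM

# 55 x $7.7M = $423.5M and 55 x $3.4M = $187M; the report's totals
# ($424M and $189M/yr) differ slightly because the per-team figures are rounded.
print(f"establish: ${establish_total / 1e6:.1f}M")   # establish: $423.5M
print(f"sustain:   ${sustain_total / 1e6:.1f}M/yr")  # sustain:   $187.0M/yr
```

The small gap between the computed and published totals simply reflects that the per-team costs are themselves approximations.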
For example, approximately $65 million for fiscal year 2006 is associated with the following categories: funding for CST airlift; various CST-unique training courses; equipment replenishment and modifications; maintenance of secure Internet access for CSTs; Unified Command Suite maintenance and support; civilian personnel involved in CST oversight functions; and U.S. Army personnel whose mission is to evaluate, train, and develop doctrine for CSTs. NGB is also in the process of creating additional units meant to follow CSTs in response to WMD events and to be part of larger National Guard response forces. The mission of the 17 currently authorized National Guard CBRNE Enhanced Response Force Packages (CERFP) is to support local, state, and federal agencies managing the consequences of a CBRNE event by providing capabilities to conduct personnel decontamination, emergency medical services, and casualty search and rescue. Each CERFP comprises approximately 186 personnel taken from existing Army and Air National Guard medical, engineer, chemical, and other units. Unlike CST members, CERFP personnel do not serve in their units on a full-time basis but rather must be mobilized for duty. Like CSTs, however, CERFPs are intended to be part of the state response to a WMD incident and can also be federalized and placed under DOD authority. Based on the CSTs’ readiness measures for staffing, training, and equipment; the data we obtained from the CSTs on each of these measures; the process NGB has in place to maintain and monitor CST readiness; and the discussions we had with CSTs and state, local, and federal officials in the 14 states and territories we visited, we found that the certified CSTs have thus far been trained, equipped, and staffed to conduct their mission. Further, NGB, DOD, and the states have guidance in place for operational command and control of the CSTs, specifying how and when teams will operationally respond to a WMD event. 
However, confusion about the types of non-WMD missions the CSTs conduct to help them prepare for WMD missions could impede coordination between state, local, and federal officials about the appropriate use of the CSTs. The certified CSTs have thus far had the staff, equipment, and training they need to conduct the mission that Congress intended for them. Staffing, equipment, and training data we collected from 52 of the 55 CSTs in late 2005 confirmed this state of readiness, as did the discussions we had with CST personnel in the 14 states and territories we visited and state National Guard command staff, CST program managers at NGB, and state and local emergency responders. Additionally, NGB has a clear plan to maintain, monitor, and periodically evaluate the teams’ overall readiness. For example, for the certified CSTs we visited, in addition to fulfilling initial certification criteria that established strict standards for staffing, equipment, and training readiness, these teams have passed the external evaluations they are required to undergo every 18 months and have continued to prepare and execute training and exercise plans to maintain their readiness. Based on our review of the mission and training standards for the CST program and our interviews and observations of CST personnel during our site visits, we found CST members to be motivated soldiers who have mastered complex technical tasks and can perform them under duress. The teams we visited reported that they maintain high morale in spite of the training pressures, the need for around-the-clock availability, and the added burden of training to perform the duties of other positions on the team so that the CST will have added depth and flexibility. Their fitness regimen is designed to keep them in superior physical condition, allowing them to perform in physically challenging response environments for an extended time. 
For example, teams are trained to conduct their work in fully contained protective suits and masks while carrying their own oxygen supply tanks on their backs. This is physically challenging even in moderate climate conditions. CST personnel are prepared for their mission through a regimen of individual training that varies from 376 to 1,148 hours in the first 2 years, depending upon the duty position. The teams complete an initial external evaluation in order to obtain DOD certification, and they undergo a similar evaluation every 18 months thereafter. The teams are required to conduct 12 collective training events each year to help them develop and maintain the skills necessary to complete the WMD response tasks outlined in the CST's Mission Training Plan. NGB further monitors the 55 CSTs through two readiness reporting databases that inform NGB as to how well teams are meeting basic readiness criteria and provide detailed information on their personnel, equipment, and training status. One of these systems is a primary mechanism for NGB's administration of the Response Management Plan. DOD assesses the teams' proficiency in their critical tasks through external evaluations administered by U.S. Army subject matter experts. We observed an external evaluation for a phase one CST that required the team to locate and identify small amounts of chemical, biological, and radiological substances hidden inside a large warehouse, which the team did successfully. Following the event, the Army experts and the CST members held an after-action review during which they discussed and assessed the team's performance in critical mission areas, highlighting processes and procedures that worked well and those that required improvement. Army experts administer external evaluations to each CST every 18 months to assure both DOD and NGB of the team's continued readiness.
In response to our data collection instrument, 94 percent of CST commanders characterized external evaluations as an accurate indicator of their readiness. Some CST commanders who responded to our data collection instrument said the evaluations were good measures of the teams' basic readiness to conduct their mission but did not adequately assess the teams' ability to interact with and support a civilian incident commander while at a site in company with multiple other local, state, and possibly federal authorities. CST members told us that a multiple-agency incident response site will be the normal circumstance for an actual CST WMD mission. In addition to the external evaluations, the CSTs conduct a number of exercises every year that involve other civil responders with which they would work in the event of an actual WMD response. CST members and state, local, and federal officials we met with reported that these exercises are invaluable for helping all stakeholders understand each other's capabilities and how best to work together. Emergency responders and state officials who work with CSTs in the states and territories we visited gave generally positive reviews of the teams. Reflecting mostly on their experience with the CSTs in exercises and other coordinating venues, state and local officials we interviewed reported a high degree of confidence in the readiness of the CSTs to conduct their mission. They also reported that the CSTs' ability to provide on-scene initial identification of CBRNE substances and their communications capability exceed those of most civilian response teams and are vital assets for WMD response in their states. NGB, DOD, and the states have guidance in place for operational command and control of the CSTs, specifying how and when teams will operationally respond to a WMD event. The basis of CST operational deployment guidance is the National Response Plan and the National Incident Management System.
States and territories we visited were in the process of updating their emergency response plans, and these plans identify the state National Guard's role, and sometimes specifically the CST role, in the response. State officials acknowledged that their plans were being revised to conform to the National Response Plan. Officials in states and territories we visited expressed a need to become better organized to address homeland security and WMD threats. CSTs have successfully tested their command and control structures by deploying to response, stand-by, and assist missions under the authority of their respective state governors and adjutants general. To practice operational command and control, the CSTs also participate in various training exercises with federal, state, local, and nongovernmental agencies and organizations. Evaluation data on these missions and exercises are limited and often informal. However, the information available indicates that CSTs met NGB, state, and local expectations about coordination and command and control, and comments by state and local officials we interviewed were overwhelmingly positive. In addition to operations within their states, CSTs have sometimes deployed outside their state based on requests for assistance. In these cases, the CSTs come under the command and control of the governors and adjutants general of the states in which they are operating. The CSTs have also been deployed to other states based on NGB requests that they respond to an event or disaster. For example, NGB managed the deployment of the CSTs to states affected by hurricanes in 2005 using the Response Management Plan to maintain enough teams in a high state of readiness in each response region.
According to after-action reports on these events and comments from officials we interviewed during our site visits, the CSTs were integrated into the operational command and control of state military commands in the Gulf states, reported to incident commanders when responding to specific events, and performed their duties according to the response plan. DOD also has guidance in place for operational command and control of the CSTs in the event the teams are federalized. In such an event, the CSTs would come under the command of DOD’s U.S. Northern Command. To date, no CSTs have been federalized. While the CSTs principally focus on responding to WMD and catastrophic terrorist attacks, some CSTs are preparing for this mission by responding to non-WMD events, causing confusion among civilian as well as National Guard officials about when the CSTs should and should not be employed. This confusion results from a lack of clear guidance interpreting the legislation that establishes the CST mission to “prepare for or to respond to” WMD or terrorist attacks and from DOD’s use of the term chemical, biological, radiological, nuclear, and high-yield explosives (CBRNE) in its characterization of the CSTs’ official mission. In a 2004 memo, the NGB’s Deputy Director for Domestic Operations advised all state National Guard headquarters that approve missions for their respective CSTs to ensure that their teams responded only to intentional uses of WMD, to terrorist attacks, or to threatened terrorist attacks. He cautioned that the military’s formal definition of CBRNE included unintentional events, such as accidental hazardous materials spills, that were outside the scope of the CSTs’ mission. As part of their coordination efforts with state and local emergency management officials, CST members highlight the WMD and catastrophic terrorism mission limitation of the CSTs. 
While CST commanders and team personnel accepted this formal limitation on their mission, they also reported that it is sometimes necessary for mission readiness purposes to respond to events that have no connection to WMD or terrorism. For example, 61 percent of CST commanders who responded to our data collection instrument consider it to be part of their respective CST’s mission to respond to CBRNE incidents that are known to be the result of accidents or acts of nature—that is, to incidents that are not attacks. Additionally, 92 percent of commanders who responded thought that this type of response should be part of their mission, and many of those with whom we met endorsed responding to non-CBRNE events as well. CST commanders value non-WMD and nonterrorism responses for a variety of reasons, and NGB officials agreed. Deployments to actual incidents, regardless of the cause, can function as a valuable means of exercising the CSTs’ core capabilities, such as communication and coordination with state, local, and federal responders and authorities, and help CSTs prepare for responses to incidents that are WMD related. Moreover, CST commanders and other officials explained that it is often difficult to determine the cause of a destructive event until the CST arrives on scene—only then can the possibility of terrorism be conclusively dismissed. The Hurricane Katrina response provides a recent example of CST deployments that were not directly related to WMD or terrorism but provided CSTs with real-life opportunities to exercise their capabilities to respond to WMD events. Following an NGB request, 18 teams sent personnel and vehicles to assist in the response effort. This assistance, often in the form of satellite communications capabilities, enabled local authorities to coordinate with each other as well as with state and federal officials. For example, one southeastern CST sent personnel to establish a communications outpost just outside the Louisiana Superdome. 
According to NGB officials, there were lengthy discussions about whether these types of responses were appropriate CST missions. They ultimately concluded that responses to large-scale disasters like Katrina were within the CSTs' mandate to prepare for or respond to WMD or terrorism events. The use of CSTs for missions that do not involve catastrophic terrorist acts or WMD, as well as deployment criteria that can differ across 54 state and territorial governments, can lead to confusion at the local level and the potential for unmet expectations. Local responders we met in the 14 states and territories we visited reported that they value the CSTs' expertise and capabilities and think that they can be put to wider use within their communities, although they recognized the need to protect the CSTs from overuse. But there remains no guidance that would assist CSTs or state and local officials in understanding what types of non-WMD missions are appropriate for the CSTs to conduct in preparing for their WMD terrorism mission. As a result, the parameters of allowable CST missions vary across states and among state civilian authorities, state National Guard headquarters staff, CST commanders, and others involved in approving CST missions. For example, some states did not acknowledge NGB's requests for use of their CSTs for hurricane response operations, and at least one state refused to allow its team to participate. Following the destruction of the space shuttle Columbia in February 2003, multiple CSTs were involved in collecting debris across five states, but some state authorities and CST commanders declined to assist because they did not consider it to be a legitimate deployment. Further, in their responses to our data collection instrument, 59 percent of CST commanders recognized a need for their CSTs to provide operational support to local hazardous materials teams prior to those teams' deployment to an incident scene, while 41 percent did not.
Seventy-eight percent of commanders who responded identified a need to support hazardous materials teams during the response itself, while 22 percent did not. NGB officials acknowledged that while the conduct of non-WMD-specific operations by the CSTs is a valuable way for the teams to satisfy their mission to prepare for or respond to WMD or catastrophic terrorist attacks, some confusion results among the CSTs and state and local officials. They also acknowledge that NGB needs to work with DOD to clarify the types of missions that are appropriate for CSTs to perform as part of the preparation to respond to a WMD or catastrophic terrorist attack. A February 2006 report by the White House on lessons learned from the Hurricane Katrina operations recommended that the option of expanding the role of CSTs to an all-hazards approach be explored. Further, DOD has requested that Congress expand the CSTs' mission to include man-made and natural disasters. If the types of non-WMD missions in which the CSTs may participate are not made clear, confusion at the state and local levels about the mission of the CSTs could worsen. The CSTs are currently limited to conducting operations within the borders of the United States and its territories. However, DOD has requested that Congress allow CSTs to operate in conjunction with officials in Mexico and Canada in order to help accomplish their mission in states bordering these countries. CST members and NGB and DOD officials also told us that there have been informal discussions within DOD regarding a range of potential overseas operations for CSTs, including training, cooperative programs with foreign countries, prestaged support missions, and possibly direct support to the warfighter.
However, DOD officials could not identify for us whether there is a validated requirement for CSTs to operate overseas, and they told us they have no plans to request a further expansion of the CST's mission to encompass overseas operations. Legislation governing the CST program specifically prohibits the CSTs from conducting operations outside the borders of the United States or its territories. The law underscores this restriction by requiring that any request by the Secretary of Defense for a legal change be submitted with a justification for the request and a written plan to sustain the CSTs' capabilities. Regulations detailing the composition, management, training, and doctrine of the CSTs explicitly define the CST mission as supporting civil authorities at a domestic CBRNE incident site, whether the CSTs are operating in a state or federal status. DOD has requested that Congress allow CSTs to coordinate and operate with Mexican and Canadian officials in the event of a cross-border WMD incident. The CSTs in border states are currently not permitted to conduct exercises and coordination that involve cross-border movement, which may limit their effectiveness in planning for WMD events in their regions. Therefore, the legislative change DOD proposed could improve the effectiveness of state WMD emergency planning. DOD officials said that the CSTs would be federalized in order to conduct operations across the border. Some CST members we spoke with during our site visits said they would like to engage in training outside the United States in order to exploit unique or superior training opportunities. For example, several CSTs expressed a desire to train at facilities such as the Defense Research and Development Center in Alberta, Canada, in order to undergo live-agent training, which several CST members told us would significantly enhance their training and exercise efforts. They also pointed out that the U.S.
Marine Corps’ Chemical and Biological Incident Response Force has trained at the Canadian facility and greatly benefited as a result. CST members with whom we spoke said that permitting the CSTs to train at superior or unique facilities in other countries could increase their knowledge, skills, and experience, better preparing them to execute their mission. DOD and NGB are also informally considering such limited overseas missions as assisting foreign nations in developing CBRNE response teams similar to the CSTs and prepositioning CSTs at international events, such as the Olympics, to help provide critical monitoring and response support. CST commanders with whom we spoke told us that limited overseas roles for CSTs, such as foreign assistance and prestaged support missions, may provide them valuable experience and therefore have a positive effect on CSTs’ readiness to perform their stated mission. During the course of our work, we heard from NGB and DOD officials and some CST commanders that NGB and DOD have also informally considered even more demanding overseas missions for the CSTs, including assisting warfighting forces in such places as Iraq and Afghanistan. DOD officials could not identify for us whether a validated requirement exists for any of these more expansive overseas missions, and they told us that they have no plans to request that Congress expand the CSTs’ mission to encompass them. Should such overseas missions be contemplated in the future, however, our review of CST capabilities, along with our discussions with CST members, indicates that support to the warfighter in places like Iraq and Afghanistan is not practicable because of inappropriateness of the CSTs’ commercial-grade equipment for use in austere conditions. 
Further, such operations would likely have a negative effect on CST readiness and availability, drawing much more heavily on existing CST equipment and personnel and reducing states’ access to CSTs, a critical component of the domestic WMD response infrastructure. NGB has made progress in establishing an institutional management approach to sustain the CST program once all 55 teams are certified. However, NGB faces several challenges to the program in such areas as staffing, coordination planning, equipment maintenance and acquisition, training and exercise oversight, readiness reporting, facilities, and varying state oversight and support of their CSTs. Although these challenges have not yet affected the overall readiness of the CSTs, if the current efforts to address them are unsuccessful, they could impede the progress of the newer teams and increase the risk to the long-term sustainment of the program. NGB recognizes that the CST program—with 19 teams not yet certified—is still in the development process. In seeking to fully establish and sustain the CST program, NGB has made progress in developing institutional mechanisms that should facilitate standardization and continuous improvement within individual CSTs and across the program as a whole. For example, NGB’s CST standardization program is an attempt to establish a baseline level of interoperability among all CSTs in critical areas, such as training, logistics, personnel administration, and budgeting. One of the CST program managers responsible for developing the standardization program explained that it was initiated to ensure total program oversight and accountability for the CSTs and to assist the states in their CST oversight responsibility. Under the standardization program, NGB will evaluate each CST every 18 months. This evaluation will be coordinated with state-level command inspections that the Army requires. 
Program personnel have completed a series of test visits to uncertified CSTs, and NGB expects to begin formal evaluative visits in May 2006. NGB has also issued a CST regulation that details the processes and procedures for CST management. One of the CST program managers described the regulation as a desk reference guide for state officials as well as for the CSTs themselves. It clarifies CST operations in many areas, including mission requests and validation, command and control, personnel and administration, reporting requirements, funding, and certification. Other general management efforts NGB has led or helped coordinate include the following:

- The recent consolidation of Army-directed training and external evaluation responsibilities for the CSTs. This should facilitate progress and consistency across the program in terms of collective training and external evaluations.
- Establishment of working groups at both the CST and program management levels to facilitate improvements in doctrine, organization, training, materiel, leadership and education, personnel, and facilities for the CST program.
- Development and oversight of doctrine and other guidance to assist the establishment of the 55 CSTs. In creating this doctrine and guidance, NGB and the Army organization responsible for writing the doctrine have sought to incorporate lessons learned by the teams from the first few phases of the program as they established themselves in their respective states and territories.

Further information on DOD management efforts related to the CSTs can be found in appendix II. NGB faces several challenges to the CST program that could impede the progress of the newer teams as well as hinder the long-term sustainment of the CST program. One challenge is that CSTs struggle to maintain their official allotment of 22 fully trained, mission-capable personnel because of turnover, team structure, and retention challenges.
NGB officials reported that CST positions exhibit an annual turnover rate of 25 to 35 percent. This is due to departures of team members after their tours are complete, dismissals of team members for a variety of reasons, and reassignments within teams to replace departed personnel. After vacant positions are filled, new CST members are away from their teams for the first year, satisfying training requirements. Once they return, they must be integrated into the team's collective exercises and other existing operations. As a result, CSTs sometimes conduct their missions with less than full unit strength, and 75 percent of CST commanders responding to our data collection instrument reported that the ability of CSTs to perform their mission is adversely affected by the lack of available personnel because of training, leave, and other manpower issues. However, the commanders also said that their teams remain ready to conduct their mission, reporting, for example, that a CST can perform its mission with fewer than 22 people as long as other members of the team can fill the gaps. The CST structure also creates a staffing challenge because few of the advanced military occupational specialties on the team are represented by more than one person. For example, the nuclear medical science officer, who is responsible for operating the CST's mobile laboratory and is critical to the CST's ability to identify CBRNE substances, is the only member of the team with that special skill. Likewise, there is a single physician's assistant and a single modeler assigned to each team. If these or other highly technical positions remain vacant for an extended period, the team must rely upon cross-trained personnel within the team or borrow key personnel from other teams. Seventy-nine percent of CST commanders responding to our data collection instrument reported that this lack of depth among key personnel adversely affects the team's ability to perform its mission.
Additionally, 88 percent of commanders who responded reported that there are too few duty positions in the team's eight-member survey section. CST commanders reported to us that the survey teams should have more people and that responding with too few personnel restricts a CST's ability to make multiple entries into an incident scene in search of suspected CBRNE substances, degrades its ability to remain on scene for long periods without relief, and increases the time required for resolution of an incident. CSTs reported that their teams have still been able to conduct their missions and that cross-training other team members to add depth to various team sections may actually increase their overall capabilities. CST staffing challenges are further exacerbated by recruiting and retention difficulties. When key personnel such as the nuclear medical science officer or physician's assistant depart, the resulting open spots are especially hard to fill because qualified candidates are difficult to attract from the civilian world and are scarce within the military. CST commanders and NGB officials explained that the lack of promotion opportunity within the teams was another major factor affecting a soldier's decision to become or remain a CST member, and that career progression is particularly limited for the team's Air Guard contingent. They also listed other factors that frustrate a team's ability to recruit and retain CST members, including the team's substantial training requirements and its full-time alert status for possible deployment. NGB has pursued a number of efforts aimed at addressing these staffing challenges. For example, during live responses, NGB augments the lead CST with additional individuals and sometimes with entire teams. NGB has also been working to fund and conduct a limited operational experiment to validate the CSTs' personnel and equipment list.
Recommendations for adjustments to the number of authorized personnel may result from this experiment. In a further attempt to address staffing challenges, NGB is currently compiling the latest turnover data and other relevant personnel information to send to the service secretaries to encourage them to authorize $150 per month incentive pay for CST personnel in accordance with Title 37 United States Code, Section 305(b). Although these efforts may ease some of the staffing challenges discussed above, it is too early to know whether they will fully address them. Another challenge is that NGB provides little guidance to the CSTs on how they should coordinate with state and local emergency responders and officials, potentially lengthening the amount of time it takes new teams to become incorporated into their home state emergency response infrastructure. CST coordination and outreach efforts vary in nature and scope from state to state, and they include practices such as briefing state and local officials and responders on the mission and capabilities of the CST, developing protocols for working with emergency responders and state officials, participating in training with other responders, conducting exercises with other responders, and offering technical advice to other responders. Established CSTs, state and local officials, and state and local responders have identified CST coordination and outreach efforts as being critical to the success of CST operations. Such efforts increase the CSTs’ visibility at the local level, improve responders’ understanding of the CST mission (for example, when they can be legitimately deployed), solidify working relationships and open communication between the CSTs and state and local responders, and increase the CSTs’ familiarity with the vulnerabilities and strategic targets in all areas of their states. Some CSTs reported a learning curve with respect to conducting successful coordination and outreach. 
For example, a few CSTs initially did not have good relationships with other emergency responders until outreach efforts clarified the role of the CST as working to support local and state emergency responders. One CST we visited coordinated closely with its state and local partners to prepare a clear set of written protocols and coordination mechanisms that it found to be highly successful. Some state officials reported that their CSTs have not yet developed written coordination protocols for state and local emergency responders, even though responders expressed confusion regarding CST capabilities and mission. NGB has not issued any guidance or requirements regarding the development, implementation, or assessment of CST coordination plans and outreach efforts. NGB has not included such outreach efforts in CST regulations as a mission-essential task, there is no formal system in place for sharing coordination best practices across teams, and there are no requirements to develop written protocols with local and state officials and responders. NGB officials told us that they recognize the importance of coordination and outreach to ensure the success of CSTs in their home states. However, they have not yet considered formal guidance for the teams on the subject. CSTs experience other challenges that NGB recognizes as important, and it has efforts under way to address them. Many of these efforts are new or ongoing, and it is therefore not clear how effective they will be in addressing the specific challenges. While these challenges have not yet affected the CSTs' overall readiness, if the current efforts to address them are unsuccessful, the challenges could threaten the long-term success and sustainability of the program. One of the challenges the CSTs face is maintaining and replacing military and commercial equipment at the pace required to sustain CST readiness.
CST members told us that they experience varying or poor maintenance support for their military equipment, which is the responsibility of the National Guard in each state. They, as well as state National Guard and NGB officials, told us that the varying degree of state National Guard support stems mostly from the state National Guards' lack of understanding of the unique nature of the CST as a unit as opposed to a more traditional National Guard military unit. CST members reported that maintenance support for their commercial equipment, which is done through the NGB-managed Consequence Management Support Center in Lexington, Kentucky, tends to be better. They also expressed concern that the pace of equipment replacement and development is too slow to ensure that the CSTs have the most relevant equipment available to accomplish their mission and that their existing equipment is updated before it wears out. NGB officials report that NGB and DOD have heard these concerns from the CSTs and are taking the following steps to address these equipment-related challenges: NGB is working with the Joint Program Executive Office for Chemical and Biological Defense and the Army Maneuver Support Center to plan for future generations of CST equipment. NGB Logistics is assessing the cost of each piece of CST equipment and developing new items where appropriate. NGB Resource Management is requesting an increase in funds in future years to maintain the CST equipment sets. These efforts may help address some equipment challenges, such as adequate equipment update and recapitalization plans, but it is not yet clear whether they will be successful in the near or long term. NGB's standardization program may help the state National Guard organizations provide better maintenance support for the CSTs' military equipment, but it will take time and cooperation between NGB, the CSTs, and their respective state National Guard commands to accomplish this.
Another challenge the CSTs face is a lack of oversight and evaluation of exercises required of CSTs each year. Unlike the external evaluations the CSTs undergo before certification and every 18 months thereafter, the 12 or more exercises the CSTs plan for and conduct each year do not follow the same specific set of objectives and criteria and are not evaluated to determine the extent to which those objectives were met. NGB officials told us that they recognize the need for more oversight of these exercises at the NGB and state levels. NGB and DOD have the following efforts under way to help address the lack of exercise oversight and evaluation: NGB and the Army Maneuver Support Center revised the CST Commanders Pre-Command Course to include instruction on training management. NGB is incorporating into its ongoing standardization initiative training management components to ensure teams are adhering to Army training regulations. NGB is bringing a member of the standardization initiative to NGB to assist in training oversight. DOD is consolidating Army-directed external evaluations and related training responsibilities under U.S. Army North to facilitate progress and consistency across the CST program. These efforts should help NGB and the states better oversee and evaluate the effectiveness of the CST program. However, since many of the initiatives are new, particularly the standardization program, it is not clear how effective they will be. The CSTs use two separate systems to report their readiness measures. CST members we interviewed said that one system, the standard Army readiness system (DOD's Status of Resources and Training System), is ill suited to the unique nature of the CSTs. They also said that while the other system—maintained by NGB—is better suited to the CSTs as a unit, the system requires constant effort by team members to update and involves using secure Internet connections the teams do not always have readily available at their home stations.
Many of the CST members we interviewed said that because the two systems overlapped, they should be merged or one should be eliminated. NGB officials explained that the system they maintain is critical for administering the Response Management Plan and is not meant to supplant the standard Army system. They also said that DOD is transitioning to the new Defense Readiness Reporting System. NGB expects the CSTs to replace the two existing systems with the new one in October 2006. This should solve the problem of having two separate readiness reporting systems. However, until the new system is in place and NGB and CST members can evaluate the extent to which it suits the unique nature of the CSTs and helps NGB administer the Response Management Plan, it remains unclear how fully the new system will address this challenge. Finally, some CSTs have reported that their facilities are inadequate in terms of vehicle, storage, and training space. NGB recognizes that some CST facilities are not adequate and has issued revised planning templates for CST facilities to the states. However, as we discuss further below, the varying degree to which states understand how to use these templates and fully meet the needs of their CSTs indicates that the challenge remains to be fully addressed. NGB has made progress in issuing guidance that explains state National Guard roles and responsibilities for overseeing and supporting their CSTs, but this has been insufficient to fully inform the states about the unique nature and requirements of the CSTs and how to integrate such a unit into the state National Guard command structure. The result has been varied oversight of the CSTs at the state level in important administrative areas and varied support to their CSTs in areas such as staffing and equipment augmentation and designing and building the facilities to house the teams. 
According to NGB officials and the certified teams we visited, DOD established the first CSTs without the benefit of a great deal of direction and guidance that would help create a unique unit from scratch and incorporate that unit into a state National Guard structure that is unaccustomed to such units. Subsequently, NGB issued its CST management regulation, which listed responsibilities for state National Guard headquarters to exercise fiscal and administrative management and oversight of the CSTs in their states or territories. This guidance includes state National Guard responsibility for such CST oversight as issuing training guidance, approving mid- and long-range training plans and objectives, maintaining property accountability, and conducting readiness and compliance inspections. While this guidance represents progress in clarifying the roles of NGB and the states in overseeing and supporting the CSTs, it is not as detailed as NGB's guidance on operational command and control and mission-related topics in explaining roles and responsibilities. Although the CST program has been under way for over 7 years, CST members and state National Guard officials with whom we met said the guidance on how the states should integrate the CSTs into their National Guard structures and how CST oversight and support should be conducted is still lacking. NGB officials told us that they recognized that the states have varied widely in how they have integrated the CSTs into their state National Guard structures. They also said they are planning to issue further guidance to clarify how states should integrate their CSTs into the new state Joint Force Headquarters organizations but that they are waiting for these organizations to be fully in place. Because of the lack of clear guidance from NGB on how state National Guard organizations should oversee and support their CSTs, the level and quality of oversight and support for CSTs varies by state.
Some states and territories we visited did not have formal plans in place at their National Guard headquarters or at the CST level for evaluating the effective use of resources, and very few of those states conducted periodic internal reviews of the CSTs. The states set up budget and accounting records to ensure funds for the CSTs were available when and where needed, but they conducted no regular program reviews for the CSTs. Many of the states and territories we visited did not have specific objectives for collective training, and they did not measure accomplishments against previously determined specific mission objectives. Therefore, those states could not identify deficiencies or make command management decisions based on such analyses. As a result, NGB and the states were not in a position to know if they were making the most effective use of CST resources. Again, because NGB has no clear guidance to the states, state National Guard support of the CSTs also varies widely in terms of staffing, equipment, and facilities. One state we visited provides additional administrative support to its CST through the use of three or four regular part-time National Guard members. This arrangement also allows those part-time members access to some CST training and, in the event those individuals apply for vacant permanent CST positions, can cut down on hiring and training delays. Another state hired an additional full-time duty member to support the team’s logistics. Some states provided limited amounts of additional equipment to support their CSTs, such as laptop computers. Other states do not augment their CSTs. Among the reasons some state National Guard officials reported for why their state’s National Guard headquarters did not augment their CSTs were a lack of money and lack of interest by the headquarters in the CSTs because they are small units. 
NGB officials acknowledged that they need to help the states understand that the CSTs are unique units and should therefore be considered high priority. During our site visits, we found inconsistencies in how states interpret and apply procurement guidance to CST equipment requests. As a result, some states approved equipment for a CST while other states did not. For example, NGB guidance permits the purchase of nonstandard uniforms with state funds only, and only if the uniforms are necessary for CSTs to accomplish their mission by blending in with police and other first responder personnel. However, some states we visited refused to purchase uniforms for their teams, even though the teams indicated a need. Other states did support the purchase of the nonstandard uniforms. While NGB, state National Guard, and CST officials stated that they believed it was important to have the flexibility to make purchases that best support the CSTs' mission, some CST commanders thought this subjectivity sometimes negatively affected the CSTs' ability to obtain material support. States have also had difficulties designing and renovating or building facilities that meet the needs of the CSTs. State National Guard officials said the unique nature of the CST mission made it more difficult for states to understand the support requirements and expectations placed upon their CSTs. For example, in addition to the need for climate-controlled spaces for sensitive equipment, most CST members we interviewed said that there is a need for enclosed bays for all vehicle storage because it facilitates ready-to-roll deployment, improves vehicle security, and provides an all-weather maintenance and training area. However, 78 percent of the CST commanders who responded to our data collection instrument reported that their facilities are not large enough to hold all vehicles and other CST equipment.
Approximately half reported that their facilities are not large enough for all personnel to have an adequate workspace. National Guard officials in the states and territories we visited also identified inadequacies with their facilities. They said they followed Army procurement and budgeting guidance, which sometimes affected whether identified changes could be made to the design or construction. Sometimes the state National Guard did not recognize that the unique mission of the CST makes its facility requirements different from those of a traditional armory, and sometimes the design was set before the CST commander or other members had a chance to review the plans. Because of varying interpretations, some states have constructed new or remodeled facilities that are in need of further remodeling. Other CSTs we visited were satisfied with their facilities, despite believing that such things as vehicle bay space were not completely adequate. These CST members reported that their state National Guard headquarters worked well with the CST to design the most effective facility they could to meet the unique needs of the team. In addition to the CST management regulation, NGB instituted the CST standardization program partially in response to its concerns that states were not adequately monitoring the CSTs' implementation of key Army management controls in training, logistics, budgeting, and other areas. According to preliminary standardization program reviews, state National Guard headquarters have done few periodic reviews and inspections. NGB officials told us they intend to use these reviews to increase state participation in oversight of the CSTs and will also spell out in greater detail for the states the type of interaction NGB believes is necessary and required by regulation.
If pursued consistently, the standardization program should help NGB better coordinate with the states on how to oversee and support the CSTs, though a significant NGB-state National Guard cooperative effort will be needed to facilitate success. In managing the CST program, DOD and NGB have made significant progress toward establishing 55 highly specialized teams in every state and U.S. territory. The focus has thus far been on reaching the goal of certifying all 55 teams. As the CST program seeks to institutionalize its key processes and sustain itself in the long term, we see four areas that could increase the risk to that effort. First, confusion about what types of non-WMD deployments the CSTs can and should use to help them accomplish their mission of preparing for or responding to WMD events could make it more difficult to effectively coordinate efforts at the state and local levels and possibly inhibit regional and national coordination between the states and the federal government. Expanding the CSTs’ mission to encompass natural and man-made disasters may not sufficiently clarify what types of such missions are appropriate for the CSTs to conduct, possibly exacerbating confusion among state and local officials about the mission of the CSTs. Second, some limited overseas missions, such as coordinating with officials from Canada and Mexico or training at live agent facilities, may be beneficial to CST training and operational effectiveness. Though DOD indicates that it is not planning to request that Congress expand the CSTs’ role to encompass more demanding overseas missions, to the extent missions such as regular CST support to overseas combatant commands are considered in the future, they would likely have a detrimental impact on the readiness and availability of the teams to perform their original mission to support domestic WMD response. 
Third, despite the progress NGB has made in fully establishing the CST program and formalizing institutional sustainment plans for the teams, many areas of the program face significant challenges that require specific guidance and action from NGB. NGB understands these challenges, particularly in the areas of team staffing, coordination guidance, equipment maintenance and acquisition, training and exercise oversight, readiness reporting, and facility adequacy. While individual team readiness has not yet suffered, if current and planned NGB efforts to address these challenges are not successful, the challenges could eventually cause harm to overall CST readiness. Fourth, despite NGB's progress in establishing such unique and specialized units as the CSTs, there remains a need for additional guidance on the administrative oversight structure for the CSTs at the state level. Small differences between the way each state manages its CST may be expected, given that there are 54 different state and territorial military commands. While NGB's plans for additional guidance on the oversight and support of the CSTs and its standardization program should help states better integrate the CSTs, further guidance and coordination efforts between NGB, the CSTs, and the state National Guard commands are warranted. To help address management challenges and further efforts to sustain the CST program, we recommend that the Secretary of Defense, in concert with the Chief of the National Guard Bureau and the Secretaries of the Army and of the Air Force, take the following three actions: Clarify the types of non-WMD responses that are appropriate for CSTs as part of their mission to prepare for domestic WMD and catastrophic terrorist attacks.
Fully incorporate into ongoing management efforts to sustain the CST program a plan with goals, objectives, and evaluation mechanisms to address challenges such as team staffing issues, coordination guidance, equipment maintenance and acquisition, training and exercise oversight, readiness reporting, and facilities requirements. Develop clear guidance for the states on how CSTs should be integrated into state National Guard commands in order to facilitate an effective administrative oversight and support structure for the CSTs in each state that reflects familiarization with the role, mission, and requirements of these specialized units, and work with state adjutants general and federal financial officers at the state level to find appropriate ways to exchange ideas and best practices for ensuring effective NGB-state National Guard partnership in overseeing the CST program. One such method could be to create or modify an existing working group or team to allow state National Guard membership. In comments on a draft of this report, DOD generally agreed with the intent of our recommendations. DOD discussed steps it is currently taking as well as actions it plans to take to address these recommendations. DOD also provided technical comments, which we have incorporated into the report where appropriate. In response to our recommendation that DOD clarify the types of non- WMD responses that are appropriate for CSTs, DOD reported that it has requested that Congress authorize the CSTs to respond to catastrophic events of intentional or unintentional origin and that if this is enacted, DOD will direct the Chief of the National Guard Bureau to develop implementing instructions. DOD reiterated its view that the CSTs have been participating in non-WMD responses as training. Expanding the CSTs’ mission to include both WMD and non-WMD events should help clarify the role of the latter in the CSTs’ overall mission. 
We continue to believe that as NGB develops implementing instructions, it should provide clear guidance on the types of non-WMD responses that are appropriate for the CSTs. This should help alleviate confusion about the CSTs’ mission and prevent their being overemployed to the detriment of their WMD-related training and mission requirements. In its comments on our recommendation regarding incorporation into ongoing CST management efforts of a plan to address critical challenges to the CST program, DOD highlighted some of the CST management efforts we discussed in our report, such as the CST Working Group and the CST standardization program. DOD further stated that additional management efforts should be deferred until the effectiveness of the standardization program can be assessed. We agree that the program offers the potential of a good evaluation tool for NGB, the CSTs, and the states’ National Guard headquarters and that further information on many of the challenges we highlight in our report may be gleaned from the results of the standardization program. To the extent the program further highlights these and other challenges for which no immediate corrective measures are in place, we would expect the Chief of the National Guard Bureau to take the appropriate management action. In response to our recommendation that NGB develop clear guidance for the states on how CSTs should be integrated into state National Guard commands to facilitate effective administrative oversight and support, DOD indicated that in addition to guidance on state oversight of the CSTs in the recently published CST management regulation, the CST standardization program and NGB-conducted formal training for state National Guard leadership provide additional measures to review and reinforce state National Guard administrative oversight of their CSTs. DOD further recognized the value of currently available venues for coordination between NGB, the CSTs, and the states’ National Guard commands. 
As we state in our report, we believe that if pursued consistently, the standardization program should help NGB better coordinate with the states on how to oversee and support the CSTs. This should help NGB and the states provide an effective long-term partnership to sustain the CST program. To the extent necessary based on the result of standardization program evaluations, we would expect NGB to expand its efforts to assist state National Guard commands to provide effective oversight and support of their CSTs. DOD’s written comments are reprinted in appendix IV. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its date. At that time, we will send copies to the appropriate congressional committees, the Secretary of Defense, and the Secretary of the Army. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-5431 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key staff members who contributed to this report are listed in appendix V. To assess the extent to which the Civil Support Teams (CST) are prepared to conduct their mission, we gathered information on the categories and process of the two CST readiness measurement mechanisms; reviewed readiness-related documents for the 14 teams we visited; included similar readiness information in a data collection instrument sent to all 55 CSTs; and discussed CST readiness with local, state, and federal officials who have worked with CSTs. We observed the external evaluation of 1 CST by U.S. Army officials and attended the after action review following the evaluation. 
We also observed an exercise that included one CST and a number of local and state responders. During our site visits, we discussed operational command and control mechanisms with CST members and their National Guard headquarters officials. We compared the published mission of the CSTs to the types of missions the CSTs are performing and discussed the extent to which their mission is well understood with CST members and local, state, and federal officials. Further, we discussed the potential effect of overseas missions on CST readiness with CST members and civilian emergency management officials in the states and territories we visited. To assess the extent to which effective administrative mechanisms are in place for the CSTs, we compared National Guard Bureau (NGB) regulations and guidance on management of the CSTs with the practices in place at the 14 CSTs we visited. We also discussed operational and administrative issues with CST members in those states and their National Guard headquarters officials. We collected similar information in the data collection instrument sent to all 55 CSTs. During our site visits, we discussed with CST members those challenges they believed could inhibit CST readiness in the future. We categorized these challenges, discussed them with NGB officials, and compared the challenges to information on NGB efforts related to those areas. We also collected cost data related to the establishment and sustainment of the CSTs from NGB, state National Guard personnel, and the CSTs themselves. We did not independently verify cost data, but we interviewed NGB officials who manage the data about data quality control procedures. We determined the data were sufficiently reliable for the purposes of this report. To address our objectives, we visited and interviewed officials from the Department of Defense (DOD), including the Office of the Assistant Secretary of Defense for Homeland Defense, NGB, U.S.
Army Forces Command, First Army, Fifth Army, and United States Northern Command. During each state site visit we met with members of the CST and officials from the state National Guard headquarters, state emergency management and homeland security officials, representatives from local community emergency response agencies (such as fire and police departments), and representatives of federal agencies and organizations (such as the Federal Bureau of Investigation and Department of Energy). Our site visits to 14 of the 55 CSTs were conducted from August through December 2005. We selected the 14 teams in order to obtain a reasonable sample of CSTs based on a number of criteria, including geographic distribution, age of team, certification status, state size, state population, state government emergency management and homeland security organization, and DOD-related command structure. We visited the following locations: Alabama, Alaska, Colorado, Iowa, Massachusetts, Montana, New Mexico, New York, North Carolina, Puerto Rico, Rhode Island, Tennessee, Texas, and Washington. To supplement the interviews we conducted during the site visits, we collected supporting documents from the CSTs and individuals we interviewed and made physical observations of CST facilities in every state we visited. To further address our objectives, we designed a broad data collection instrument for all 55 CSTs that would collect information regarding CST personnel, equipment, training, certification, costs, coordination, and mission scope. Within these major topic areas, we developed and tested relevant questions based upon previous GAO work, current research, and interviews at both the NGB and CST level. After two formal pretests with the command staff of 2 separate CSTs, we deployed the data collection instrument simultaneously to the National Guard's state supervisory auditors for all 55 teams and asked that they be forwarded to the CST commanders in each of their respective states or territories.
The data collection instrument was administered via e-mail using an ActiveX-enabled Microsoft Word attachment. Although every team received an identical version of the data collection instrument, we advised the team commanders that because of differing experiences, locations, certification statuses, and lengths of service, we recognized that not all teams would be able to respond to every question. Each section of the instrument contained questions that could be answered by both certified and uncertified teams, as well as questions that were applicable to certified teams only. The data collection instrument was addressed to the 55 unit commanders, and while these individuals were explicitly responsible for the overall content of the completed data collection instruments, we permitted them to delegate specific questions or sections to other appropriate members within the CST. To ensure a full and candid response, we noted that individual responses would be attributed neither to individual CSTs nor to their individual members. Further, we requested that the teams transmit their responses over a secure e-mail channel to safeguard any sensitive information. We distributed the data collection instrument via e-mail on September 26, 2005, and it was deployed through December 27, 2005. Out of the 55 deployed, we received 52 completed data collection instrument responses during our 3-month response window. To analyze the results of the completed responses, we noted responses for all questions and highlighted those we deemed significant, such as responses where there was overwhelming agreement among CST commanders. These responses and others were compared with preliminary results from our site visits and used to verify that the GAO site visit teams had not overlooked significant widespread CST issues. Percentage results from the data collection instrument are discussed in the letter. In some cases, there are fewer than 52 respondents for a given question.
Because some respondents did not answer all questions, the percentages we report are calculated using the base of respondents who answered the question. In no cases did fewer than 48 of the 52 respondents answer a question whose percentage results appear in the report. Because this was not a sample survey, there are no sampling errors. However, the practical difficulties of conducting any data collection effort may introduce errors, commonly referred to as nonsampling errors. For example, difficulties in how a particular question is interpreted, in the sources of information that are available to respondents, or in how the data are entered into a database or are analyzed, can introduce unwanted variability into the survey results. We took steps in the development of the data collection instrument, the data collection, and the data analysis to minimize these nonsampling errors. For example, GAO staff with subject matter expertise designed the data collection instrument in collaboration with social science survey specialists. Then, the draft questionnaire was pre-tested with the command staff of two CSTs to ensure (1) the questions were relevant, clearly stated, and easy to comprehend; (2) terminology was used correctly; (3) the questionnaire did not place an undue burden on the respondents; (4) the information was feasible to obtain; and (5) the survey was comprehensive and unbiased. Finally, when the data were analyzed, a second, independent analyst checked all computer programs. The entire data collection instrument appears in appendix III. We performed our work from April 2005 through March 2006 in accordance with generally accepted government auditing standards. NGB has focused much of its management on establishing and certifying all 55 of the authorized CSTs. 
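The percentage convention described above can be illustrated with a minimal sketch. The function and data below are hypothetical, not drawn from the report; they simply show percentages computed over the base of respondents who answered a question, with skipped questions excluded.

```python
def percent_agree(responses):
    """Percentage of respondents answering True, over those who answered at all.

    responses: list of True / False / None, where None means the
    respondent skipped the question. Returns None if no one answered.
    """
    answered = [r for r in responses if r is not None]
    if not answered:
        return None
    # Base is the respondents who answered, not all instruments returned.
    return round(100 * sum(answered) / len(answered))

# Illustrative data: 52 instruments returned, 4 skipped the question,
# 42 of the 48 who answered agreed.
responses = [True] * 42 + [False] * 6 + [None] * 4
print(percent_agree(responses))  # prints 88, i.e., 42 of 48 respondents
```

Under this convention the denominator varies by question, which is why the report notes the minimum base (48 of 52) for any percentage it cites.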
But NGB also recognizes that sustaining the CST program over the long term will require significant effort, including continued improvement of the process for establishing teams and modification of doctrine, training, equipment, and operational considerations as necessary. NGB's institutional efforts include a CST standardization program, coordination of Army-directed CST training and evaluations, and working groups established to evaluate and recommend improvements to the CST program. The standardization program is scheduled to evaluate each CST every 18 months and is intended to be coordinated with the state-level command inspections that the Army requires. The process begins with a precoordination meeting 6 months prior to the scheduled standardization visit that explains the purpose, evaluation method, and desired outcome of the upcoming visit. Ninety days prior to the scheduled visit, a second coordination meeting is held to resolve any remaining administrative details and to allow the standardization team personnel responsible for conducting the evaluation to become familiar with the CST’s location. During the visit itself, these personnel conduct compliance-oriented evaluations using a series of checklists covering various subtasks within the evaluated areas. For example, the training checklist assesses 55 items, including whether the CST has an approved Mission Essential Task List, whether the team publishes quarterly training guidance, and whether the team conducts after-action reports for all training. Each checklist item is evaluated as “go,” “no-go,” or “not applicable.” Items that are initially characterized as needing improvement (no-go) may be upgraded to satisfactory (go) as a result of on-the-spot corrections. At the evaluation’s conclusion, standardization team personnel present the results of their evaluation to the state adjutant general.
They must issue a formal report to the adjutant general within 6 duty days after the end of their visit. Among the standardization program’s objectives are integrating the program with state and intermediate command inspections, which could reduce the total amount of time committed to the inspection process, and imposing CST-specific management controls to help prevent fraud, waste, and abuse of Army resources. Program personnel have completed a series of test visits to CSTs, and they expect to begin formal evaluation visits in May 2006. As of October 1, 2005, Fifth Army assumed sole responsibility for all CST external evaluations and related training, with the exception of CSTs in Hawaii, Alaska, and Guam, which remain under U.S. Army Pacific. Under Fifth Army, the organization and protocols of all CST training and evaluation teams should be standardized. Army, NGB, and CST officials report that the training, education, and experience requirements for trainer/evaluators will also be standardized. They indicate that this standardization should increase the consistency of external evaluations and related collective training across all teams. Consolidation of Army training, readiness, and oversight responsibilities could also promote better information sharing and guidance development, both across the Fifth Army training and evaluation teams and for the program as a whole. Responsibility for all CST external evaluations and related training was previously divided geographically between First Army and Fifth Army under the U.S. Army Forces Command, with the exception of Hawaii, Alaska, and Guam. Although both First and Fifth Armies were required to train and evaluate teams to the standards set forth in the CSTs’ Mission Training Plan, Army field manuals, and other regulations, each Army organized its CST training and evaluation teams differently and followed different protocols for executing training and external evaluations.
In April 2005 NGB formally established the Civil Support Team Working Group to (1) increase the operational effectiveness of CSTs by providing operationally relevant advice on gaps, shortfalls, and improvements to CST doctrine, organization, training, materiel, leadership and education, personnel, and facilities (DOTMLPF); (2) assist in implementing any resulting plans; and (3) promote standardization and interoperability among CSTs. A working group process had already been operating informally since the establishment of the first 10 CSTs. In addition to NGB, working group membership includes the CST commanders and representatives from the Army Maneuver Support Center and the joint service Chemical and Biological Defense Program. The working group is organized to include several technical working groups and subgroups that focus on specific aspects (e.g., equipment, personnel, and training) or components (e.g., operations, survey, medical and science assessment, communications, computer and information systems, and logistics/sustainment) of the CST program. In June 2004 the Army Maneuver Support Center and NGB initiated the Integrated Concept Team to determine the tasks, schedules, milestones, and products required to develop operational concepts and provide DOTMLPF solutions to support the CST program. In addition to directing the efforts of the other CST working groups with regard to DOTMLPF responsibilities, the Integrated Concept Team is also tasked with more broadly addressing CST issues within the larger scope of DOD force management and operational capabilities plans. The United States Government Accountability Office (GAO) is an independent, nonpartisan agency that assists Congress in evaluating federal programs.
We have been asked to report to Congress on the following aspects of the Civil Support Team (CST) program: readiness and capability to respond to WMD incidents; coordination with other local, regional, state, and federal emergency responders; and costs associated with establishment of teams and continuing operations for both certified and uncertified teams. In order to obtain similar information across all CSTs, we are sending this data collection instrument (DCI) to all 55 unit commanders. The questions in the DCI are grouped into six sections:
1) Threats and responders
2) Coordination and communication
3) Mission readiness and certification
4) Equipment, transport, and medical
5) Training
6) Personnel
Each section contains questions that can be answered by both certified and uncertified teams, as well as questions that may be applicable to certified teams only. While it is necessary for methodological reasons for every team to receive the same version of the DCI, it is understood that due to differing experiences, locations, certification statuses, and lengths of service, not all teams will be able to respond to every question. The DCI is addressed to the 55 unit commanders, and while these individuals are responsible for the overall content of the completed DCIs, the unit commander may delegate specific questions or sections to other appropriate members within the CST. There is a blank field at the end of each of the six sections that asks for a name, phone number, and email address for follow-up questions regarding the responses to that section. This field should be used to identify any team member other than the commander who should be contacted about that section’s responses; if the unit commander is the contact point for that section, the field may be left blank.
Although there is a possibility that sources both within and outside the CST may be contacted to validate responses, it is important to note that responses will be attributed neither to individual CSTs nor to their individual members. Data from the DCI will be presented in larger groupings for summary purposes only and will not identify the responses from any one CST.

Section 1: THREATS & RESPONDERS

1. For each of the potential threats in your state that are listed across the top of the table below, please identify the potential responders in the left-hand column that you would expect to encounter at the corresponding incident scene. (Please check all that apply.)
LOCAL & REGIONAL RESPONSE:
Local/Regional police depts.
Local/Regional fire depts.
Other local/regional response (please identify)
Other local/regional response (please identify)
STATE RESPONSE:
State Office of Homeland Security or similar
State Office of Environmental Management or similar
State Health Dept
Other state response (please identify)
Other state response (please identify)
FEDERAL RESPONSE:
Joint Task Force-Civil Support (JTF-CS)
Federal Bureau of Investigation (FBI) Hazardous Materials Response Team (HMRT)
Other FBI agent(s) or team(s)
U.S. Coast Guard’s National Strike Force (NSF) teams
U.S. Secret Service
Federal Incident Response Support Team (FIRST)
Nuclear Incident Response Team (NIRT)
Other federal response (please identify)
Other federal response (please identify)
2. Does the team consider itself to be a CBRNE responder, a WMD responder, both, or neither?
Follow-up to #2: Is there a practical difference between being a CBRNE responder and being a WMD responder?
3. Other than those possessed by CSTs, what response capabilities exist for CBRNE incidents at the local/regional, state, and federal levels? Who possesses these capabilities at each level?
At the local level? If YES, who possesses this capability?
At the state level? If YES, who possesses this capability?
At the federal level? If YES, who possesses this capability?
4.
Approximately how many times has the CST been formally deployed for the following types of missions other than training exercises? [response table not reproduced; mission categories included contingency ops, capabilities briefs, and exercises; deployments in response to a request; and pre-positioning for special events, VIPs, etc.]
5. [beginning of question not captured] ...excluding training exercises, for your CST in the past two fiscal years (FY 2004-2005), and then provide the information requested in each column. (Dates must be in yyyy-mm-dd format, e.g. 2005-07-25 or 2004-12-15.) [deployment table not reproduced]
6. Do any of the following potential issues adversely affect the ability of CSTs to perform their mission? (Please check all that apply.)
CSTs are not considered a “first responder” like police, fire, etc.
CSTs limited to specific role and capabilities: Identify, Assess, Advise, Assist.
CSTs have mobility constraints.
Other federal, state, regional, and/or local organizations have capabilities that are similar to the CST’s capabilities.
Other federal, state, regional, and/or local organizations have unrealistic expectations for the CST.
Geographic location of CST facilities within the state makes wide-area response difficult.
There are not enough personnel available (because of training, leave, etc.) within the CST.
Other / Please identify:
7. What changes, if any, would you suggest to make the CSTs better able to respond to incidents?
8. If you have further comments in response to any of the questions in this section, you may use this space to provide them. Please identify your comments by preceding them with the number of the earlier question to which they refer (e.g., 1, 3, 7, etc.).
9. Please provide a name, phone number, and email address for follow-up questions regarding the responses to this section. (Field may be left blank if the unit commander is the contact point.)

Section 2: COORDINATION & COMMUNICATION

1. How familiar is the unit’s leadership with each of the following emergency planning documents?
(Please check only one box per row.)
National Response Plan (NRP)
National Incident Management System (NIMS)
State Emergency Response Plan(s)
State Terrorism Response Plan(s)
2. To what extent is your CST integrated into your state’s primary emergency response plan? (Please select the best response.)
Fully integrated – CST’s roles/responsibilities are specifically outlined and CST participates in emergency response training exercises.
Partially integrated – CST is not directly mentioned, but National Guard’s responsibilities are outlined.
Not integrated – Emergency response plan has been updated since CST establishment, but neither CST nor National Guard is mentioned.
Not applicable – State’s emergency response plan has not been updated since CST establishment.
State has no emergency response plan.
3. To what extent is your CST integrated into your state’s primary terrorism response plan? (Please select the best response.)
Fully integrated – CST’s roles/responsibilities are specifically outlined and CST participates in terrorism response training exercises.
Partially integrated – CST is not directly mentioned, but National Guard’s responsibilities are outlined.
Not integrated – Terrorism emergency response plan has been updated since CST establishment, but neither CST nor National Guard is mentioned.
Not applicable – State’s terrorism response plan has not been updated since CST establishment.
State has no terrorism response plan.
4. Are you aware of your CST’s inclusion in any local emergency response plans or local terrorism response plans within your state?
5. Which of the following mutual aid agreements or compacts, if any, are in place in your state? (Please check all that apply.)
6. [question text not captured]
Follow-up to #6: If YES, please identify the name of the consortium/task force (up to five), contact information for someone within the organization, and your frequency of contact.
7. [question text not captured]
LOCAL & REGIONAL RESPONSE:
Local/Regional police depts.
Local/Regional fire depts.
Local/Regional HAZMAT teams
Local/Regional bomb squads
Local/Regional EMTs
Local/Regional utilities
Other local/regional response (please identify)
[response columns not reproduced: participate on the same task forces / participate in the same conference]
8. Are there teams, agencies, or governments that have been problematic in coordinating with your CST?
Follow-up to #8: If YES, please identify up to five of those entities as well as the nature and frequency of the coordination problems that you have faced in the past or are currently facing. (Reminder: survey answers will not be attributed to individual CSTs or to their personnel.)
9. In general, how would you rate your local, regional, and state authorities’ understanding of your CST’s overall capabilities and duties?
Local/Regional HAZMAT teams
Local/Regional police depts.
Follow-up to #9: If you reported a lack of understanding, how have you attempted to address this?
10. In general, how would you rate the capabilities of the local HAZMAT teams in the metropolitan areas of your state? (Please check only one.)
Robust Presence – Fully capable, staffed, and equipped
Significant Presence – Generally capable, staffed, and equipped
Presence – Staffed and equipped but with some weaknesses
Weak Presence – Scattered capabilities and staff, out-of-date equipment, and/or other serious deficiencies
Not applicable – There are no local HAZMAT teams in our state
11. Outside of the metropolitan areas, how would you rate the capabilities of the local HAZMAT teams across your state, in general? (Please check only one.)
Weak Presence – Scattered capabilities and staff, out-of-date equipment, and/or other serious deficiencies
Not applicable – There are no local HAZMAT teams in our state
12. Do you see a need for your CST to provide support to the local HAZMAT teams across your state? (Please check only one.)
YES, but mostly outside the metropolitan areas
YES, but mostly inside the metropolitan areas
YES, both inside and outside the metropolitan areas equally
Cannot generalize; it varies too greatly by local team
Not applicable – There are no local HAZMAT teams in our state
Follow-up to #12: If you do see a need for your CST to provide support to local HAZMAT teams, in which of the following areas do these teams need your support? (Please check all that apply.)
Operational support prior to deployment
Operational support during a response
Not applicable – I do not see a need to support local HAZMAT teams in my state, or these teams do not exist in my state
13. If there are state-supported teams with HAZMAT capability (other than the CST) in your state, how would you rate their capabilities, in general? (Please check all that apply.)
Cannot generalize; it varies too greatly by local team
Not applicable – There are no other state-supported teams with HAZMAT capability in my state
14. [question text not captured]
Follow-up to #14: If you do see a need for your CST to provide support to other state-supported HAZMAT teams, in which of the following areas do these teams need your support? (Please check all that apply.)
Operational support prior to deployment
Operational support during a response
Not applicable – I do not see a need to support other state-supported teams with HAZMAT capability in my state, or these teams do not exist in my state
15. Do you feel that your CST has HAZMAT capabilities that overlap with other state, regional, or local emergency responders in your state?
Follow-up to #15: If YES, to what extent do they overlap?
16. What capabilities does the CST possess that are not shared by other state, regional, or local emergency responders in your state?
17. [beginning of question not captured] ...CERFP bring to that incident scene that would not already be provided by the CST?
18. Which states, if any, have formal mutual aid agreements with your state with regard to sharing CST resources or responding to incidents in other states?
Follow-up to #18: What is your assessment of these formal arrangements’ effectiveness?
19. Which states, if any, have informal mutual aid agreements with your state with regard to sharing CST resources or responding to incidents in other states? (Informal agreements include TAG-to-TAG and CST-to-CST.)
Follow-up to #19: What is your assessment of these informal arrangements’ effectiveness?
20. If you have further comments in response to any of the questions in this section, you may use this space to provide them. Please identify your comments by preceding them with the number of the earlier question to which they refer (e.g., 1, 3, 7, etc.).
21. Please provide a name, phone number, and email address for follow-up questions regarding the responses to this section. (Field may be left blank if the unit commander is the contact point.)

Section 3: MISSION READINESS & CERTIFICATION

1. What is the CST’s mission, and which documents or guidance do you use to define it?
2. Considering the National Guard Bureau’s expectations of your CST, how does this differ from your CST’s mission, if at all?
3. Considering your state’s expectations of your CST, how does this differ from your CST’s mission, if at all?
4. What documents or guidance do you use to define the mission and readiness of the CST?
5. Do you consider it a part of your CST’s mission to respond to CBRNE incidents that are known to be the result of accidents or acts of nature (i.e., that are NOT attacks)?
Follow-up to #5: Whether it is part of your CST’s mission or not, do you think that this type of response should be part of your mission? Please explain your answers to #5.
6. [question text not captured]
Unit Status Report (USR)
Operational Reporting System (ORS)
Other / Please identify:
Additional comments for #6, if any:
7. Were there (or have there been) any significant obstacles for your CST in achieving initial certification? If YES, what was the obstacle?
8. [question text not captured]
Equipment on hand (S-level)
9.
After the EXEVAL that supported your CST’s certification, what extra training (if any) was recommended by the Adjutant General and implemented by the CST commander before the certification package was sent to the Department of the Army?
10. Please identify which readiness component is the most challenging to sustain with regard to each of the following CST capabilities. (Please check only one capability per row.)
11. [beginning of question not captured] ...requirements for the CST beyond what is required by the National Guard?
Follow-up to #11: If YES, what are these additional requirements, and what guidance is provided by the state for achieving these requirements?
12. Does the state government (i.e., non-military) provide any of the following additional resources for the CST beyond what is provided by the National Guard?
13. Please describe the process by which a request for CST deployment is approved or denied: How many decisionmakers must approve the request, and what positions (or titles) do these decisionmakers hold?
14. Who receives the following mission information from your CST? (Please check all that apply.) [response table not reproduced; columns referenced the TAG, state officials, NGB, and Army commands]
15. If you have further comments in response to any of the questions in this section, you may use this space to provide them. Please identify your comments by preceding them with the number of the earlier question to which they refer (e.g., 1, 3, 7, etc.).
16. Please provide a name, phone number, and email address for follow-up questions regarding the responses to this section. (Field may be left blank if the unit commander is the contact point.)

Section 4: EQUIPMENT, TRANSPORT & MEDICAL

1. Which documents or guidance do you use to develop your equipment list?
2. Has your team experienced any problems in acquiring CST equipment?
Follow-up to #2: If YES, please describe these problems and discuss their impact.
3. Please provide the following financial information for the requested years.
If your state has a CERFP, did it receive any of this funding? (NGPA) (OMNG) (YES, NO, or N/A)
4. [beginning of question not captured] ...provided to your CST (by either the state or the National Guard Bureau) that you would like to see provided, if any. (Please list in order of importance, with ‘1’ being the most important item or capability.) [response columns: EQUIPMENT ITEM or EQUIPMENT CAPABILITY; Intended Use?]
5. In your opinion, what are the primary advantages and disadvantages of having non-military (i.e., commercial off-the-shelf) equipment?
6. Is the facility that houses the CST large enough to hold all vehicles and other CST equipment?
Follow-up to #6: What improvements to your facility could you suggest, if any?
7. [question text not captured]
Follow-up to #7: If YES, please describe the most frequent equipment issues.
8. Please respond to the following questions about your CST’s formulary:
Has the state augmented the standardized CST formulary?
Has the State Surgeon approved the CST formulary?
Has the NGB Surgeon approved the CST formulary?
Does the CST carry any medications that are not listed on the CST formulary?
9. Does your CST have a Delegation of Services Agreement (DSA) from the State Surgeon?
Follow-up to #9: If YES, what was the date of this DSA? (Date must be in yyyy-mm-dd format, e.g. 2005-07-25 or 2004-12-15)
Follow-up to #9: What problems, if any, did you encounter in securing the DSA?
10. [beginning of question not captured] ...CST rely on land transport? On air transport? On water transport?
Geography of incident setting (urban, rural, coastal, inland, etc.)
Additional comments about transport strategy, if any:
11. For each of the following types of transport (land/air/water), please identify what the CST has access to and whether or not this access is a dedicated asset (i.e., owned by the CST). If it is borrowed from other agencies, please indicate the process by which the CST would gain access to a specific means of transport. [response columns not reproduced; columns asked whether the asset is borrowed from other forces and, if borrowed, how the CST would gain access to it]
12.
If you have further comments in response to any of the questions in this section, you may use this space to provide them. Please identify your comments by preceding them with the number of the earlier question to which they refer (e.g., 1, 3, 7, etc.).
13. Please provide a name, phone number, and email address for follow-up questions regarding the responses to this section. (Field may be left blank if the unit commander is the contact point.)

Section 5: TRAINING

1. Which documents or guidance do you use to develop your training plan?
2. Please provide the requested information about training and certification for the following duty positions. [response table not reproduced; for each position the columns asked whether the member is duty qualified (YES, NO, or N/A), when individual training was completed, when collective training was completed, and current status (e.g., fully trained, currently certified, experienced, etc.)]
3. In addition to the EXEVALs and annual lanes training, what other tests, certifications, or proofs of competence do CST members complete, if any?
4. Should anything more be done to ensure technical and duty-specific expertise among CST members?
5. Do members of the CST receive any training regarding chain-of-evidence and other evidence collection?
Follow-up to #5: If YES, who provides this training?
6. What supplementary training, if any, does the state require that is in addition to what is already required by the National Guard Bureau?
7. At the present time, what are the training strengths of your CST?
8. At the present time, what are the training weaknesses of your CST? If weaknesses exist, how could training be improved to address them?
9. What is the impact of personnel turnover on training?
10. If you have further comments in response to any of the questions in this section, you may use this space to provide them. Please identify your comments by preceding them with the number of the earlier question to which they refer (e.g., 1, 3, 7, etc.).
11.
Please provide a name, phone number, and email address for follow-up questions regarding the responses to this section. (Field may be left blank if the unit commander is the contact point.)

Section 6: PERSONNEL

1. For each of the following sections of the CST, is the number of duty positions adequate to perform all of the CST’s missions? If not, what would be the ideal number?
Administration/Logistics (2)
Medical/Analytical (4)
Communications (2)
Follow-up to #1: Which additional specialties, if any, would you add if your authorized strength were increased?
2.-3. [questions and response tables not reproduced; for each position the columns asked whether the position will likely be vacant in the next 6 months or the next 7-12 months, and the average fill time for the position (i.e., how many days, weeks, or months)]
4. Do you ever perform CST missions (training or live response) with fewer personnel than you need?
Follow-up to #4: If YES, what is the impact of responding with fewer personnel than you need?
5. [question text not captured; fragments referenced turnover (number of positions, including promotions within the team)]
6. What are the primary factors that affect these personnel trends?
7. What have been the effects of your turnover rate on team operations?
8. Have any CST members left to become members of state, regional, or local fire departments, HAZMAT teams, or other emergency response agencies?
Follow-up to #8: If YES, approximately how many?
If you have further comments in response to any of the questions in this section, you may use this space to provide them. Please identify your comments by preceding them with the number of the earlier question to which they refer (e.g., 1, 3, 7, etc.).
16. Please provide a name, phone number, and email address for follow-up questions regarding the responses to this section. (Field may be left blank if the unit commander is the contact point.)
In addition to those named above, Ann Borseth, Assistant Director; Bari L. Bendell; Jaclyn A. Bowland; David A. Brown; Carole F. Coffey; Lee Cooper; Joseph W. Kirschbaum; David A. Mayfield; Walter K. Vance; and Tamika S. Weerasingha made key contributions to this report.
To prepare for potential attacks in the United States involving weapons of mass destruction (WMD), Congress approved the development of National Guard Civil Support Teams (CST) tasked to identify chemical, biological, radiological, nuclear, or high-yield explosive weapons; assess consequences; advise civil authorities on response measures; and assist with requests for additional support. Thus far, 36 of the 55 approved teams have been fully certified to conduct their mission. The National Guard Bureau (NGB) is in the process of establishing, certifying, and planning for the long-term sustainment of the CSTs. GAO was asked to address the extent to which (1) the CSTs are ready to conduct their mission and (2) effective administrative mechanisms are in place for the CSTs. The established CSTs have thus far been trained, equipped, and staffed and have command and control mechanisms in place to conduct their domestic mission. However, confusion resulting from a lack of guidance on the types of non-WMD missions the CSTs can conduct to prepare for their WMD terrorism mission could impede coordination between state authorities and local emergency management officials on the appropriate use of the CSTs. CSTs were created to focus on assisting civil authorities in domestic WMD events. Based on its review of the CSTs' training, equipment, and staffing criteria; analysis of CST readiness data; site visits to 14 CSTs; and discussions with state, local, and federal responders, GAO found the certified teams visited to be ready to conduct their mission. NGB and the states have a clear structure for operational command and control of the CSTs. Though current NGB guidance and the CSTs' message to state and local officials emphasize the CST mission as being focused on WMD events, some CSTs have responded to non-WMD events, such as providing emergency assistance to the Gulf Coast states after the 2005 hurricanes. 
While NGB views such missions as useful preparations for WMD events, guidance has not been clarified to reflect the types of non-WMD missions that would be appropriate. This lack of clarity has caused confusion among state, local, and NGB officials, potentially slowing coordination efforts. Also, DOD is proposing a limited role for the CSTs to coordinate and operate with Mexican and Canadian officials in the event of a cross-border WMD incident. DOD and NGB are informally considering limited overseas missions for the teams, though they have no plans to request a further expansion of the CSTs' mission to encompass overseas operations. According to NGB and the CST commanders, some overseas missions could provide valuable experience and have a positive effect on CST readiness, while other, more demanding missions, such as supporting the warfighter, could be detrimental to the readiness and availability of the CSTs. Although NGB continues to develop a long-term sustainment plan for the CST program, going forward it faces challenges to the administration and management of the CSTs that could impede both the progress of newer teams and the long-term sustainment of the program. NGB has made progress in establishing an administrative management structure for the CSTs, including issuing a broad CST management regulation and initiating a standardization and evaluation program. However, the CSTs face challenges in personnel, coordination plans, equipment acquisition and planning, training objectives, readiness reporting, and facilities. Further, insufficient NGB guidance on state National Guard roles and responsibilities for overseeing and supporting their CSTs has resulted in varied support at the state National Guard level. NGB is aware of the challenges and has efforts under way to address them.
While these challenges have not yet undermined CST readiness, if NGB efforts are unsuccessful, the progress of newer teams could be impeded and the long-term sustainment of the CST program put at greater risk.
Recent research on private sector companies indicates that many companies are using ASD, generally referred to in the private sector as “outsourcing,” as an integral and permanent part of their human capital strategies. Along with extensive use of technology and consolidation of service delivery units, outsourcing reflects the desire of many human capital offices to shift their focus from transaction-based activities toward a more strategic partnership role. A 2002 Conference Board study, based on responses from 125 surveyed companies, found that two-thirds currently outsource a major human capital activity and that most of these companies are seeking to expand their outsourcing activities. The study reported that pressure to cut costs, improve the quality of human capital services, gain access to specialist expertise and technology, and free staff to concentrate on core business activities drove the companies’ outsourcing decisions. Slightly more than 50 percent of survey respondents reported that they had fully achieved their outsourcing objectives, 42 percent had partially achieved them, and less than 1 percent of outsourced human capital functions had been brought back in-house. A December 2003 study from the Corporate Leadership Council found, from a survey of 162 of its member organizations, that most human capital activities continue to be largely performed in-house, although aspects of almost every activity are outsourced. The research on the private sector’s use of outsourcing also indicates that the range of human capital activities outsourced is increasing. According to a 2003 report from the University of Southern California, the large corporations surveyed were most likely to outsource employee assistance and benefits administration.
This report noted that compensation, benefits, employee training, human resource information systems, recruitment, performance appraisal, affirmative action, and legal affairs all showed statistically significant increases in the use of outsourcing from 1995 to 2001. No activity was less likely to be outsourced in 2001 than it was in 1995. In addition, some organizations are following a path where they transfer the majority of their human capital activities to a single contractor. Research on the federal government’s use of human capital ASD includes a 1997 NAPA report that was intended to provide federal managers and human capital staff with a practical guide to the issues that must be addressed in approaching ASD for human capital functions. The report recommended that because of the risks involved with ASD, including a potentially negative effect on the general workforce, agencies must recognize that its use requires careful planning. It maintained, however, that as in the private sector, federal government executives were in a position of managing a decrease in resources along with increased performance expectations and that ASD was a viable approach to help meet this challenge. Federal agencies have a number of ASD options available to them. Examples include human capital services offered by other federal agencies, contracts with private sector and nonprofit organization providers, and partnerships with other organizations. USDA’s National Finance Center is an example of an interagency service provider, supporting a number of other federal agencies, including GAO, with automated information systems services for personnel and payroll. Private sector providers of human capital services have increased in both their number and the range of their services geared toward the federal human capital community. 
For instance, in 2000 the General Services Administration (GSA) introduced a new schedule of contracts from more than 50 different contractors for activities such as recruitment and position classification. Another ASD option includes the use of partnerships with other organizations, which may not necessarily involve exchanges of funds. The Bureau of the Census, for example, partnered with national, state, and local organizations to help the agency recruit census takers for the 2000 Census. Appendix II has more detail on ASD options available to federal agencies. Human capital officials from the selected agencies reported using ASD for a variety of specific human capital activities that we grouped into “tiers,” a construct we created to discuss how agencies use ASD for similar types of human capital activities; they are not discrete categories, but rather groups of activities that overlap. All of the agencies used ASD for at least some tier I activities, such as payroll and employee assistance programs, and tier II human capital activities involving the implementation of human capital policy and strategy. Most of the agencies had contracted for assistance, generally with the private sector, for tier III activities, such as special projects involving workforce planning and organizational assessments. Of the eight agencies, the Mint was the only agency currently engaged in a competitive sourcing initiative involving most of its human capital functions. Similar to private sector experience, agency officials regarded the use of ASD for some of the tier I activities involving transactional human capital functions, the acquisition and maintenance of technology, and specialized services as a way to reduce or avoid costs. Agencies have been using ASD for these activities for a number of years, and updated cost savings estimates were not available. 
In general, however, using ASD for more standardized, transactional activities allows human capital offices to make use of high-volume providers’ investments and capabilities that realize economies of scale. For instance, OPM is leading the effort to collapse the operations of 22 executive branch agencies that currently run payroll systems into what will eventually be only two systems at a projected savings of $1.1 billion through fiscal year 2012. We reported that it is evident that cost savings can be found by reducing the number of payroll systems operated and maintained by the federal government and avoiding the costs of updating or modernizing those systems, but have noted the significant challenges in realistically estimating the financial savings from this initiative. Likewise, although cost savings estimates were not available, agency officials regarded consolidating the purchase of human resource information systems and specialized services that would be expensive to duplicate internally, such as purchasing commercial-off-the-shelf software or using a specialized provider of employee assistance programs, as a way to reduce individual costs to the agency. In prior work on how companies were taking strategic approaches to acquiring services, we noted one tactic involved using a companywide approach to procuring services. When the companies analyzed their spending on services, they realized that individual units of the company were buying similar services from numerous providers, often at greatly varying prices. In some cases, after this analysis, thousands of suppliers were reduced to a few, enabling the companies to negotiate lower rates. Common examples of the types of tier I activities for which the eight agencies used ASD are components of human resource information technology, health screening and wellness services, employee fitness programs, and drug and alcohol testing.
As previously noted, federal agencies have used ASD for tier I human capital activities for a number of years. NAPA reported in 1997 that human capital outsourcing by federal agencies was already substantial in these areas. All of the agencies used ASD for some of their tier I activities, and most of the agencies reported using ASD for their payroll administration and at least some component of their information technology. NGA, for example, partnered with another agency to share contracts for human capital information technology development and maintenance. NGA said that the arrangement allowed it to access expertise not resident in-house and promoted knowledge transfers between the two agencies. Using ASD for traditional employee services was also common among the selected agencies. Many of them used ASD for their employee assistance programs, wellness and fitness centers, health units, or drug and alcohol testing, often using interagency services to provide these functions. By going to outside providers for these specialized services, agency officials believed that they were able to focus more on core activities in addition to gaining efficiencies by joining other agencies’ efforts. A DOE official, for example, said that the department used ASD for its fitness centers to avoid liability issues so that, for example, if an employee were injured using the center it would not be the responsibility of the department. DOE also reported joining another department’s large contract for drug and alcohol testing to reduce its workload by not having to commit resources to contracting for the service itself. Officials also said they gained the benefit of having a neutral third-party provider, which was believed to be important because employees may be less likely to use services such as employee assistance programs when internally provided due to confidentiality issues.
All of the selected agencies used ASD for at least one of their tier II activities, which involve the implementation of human capital policy and strategy, including advisory services. Common examples of the agencies’ ASD tier II activities are training development and delivery, classification and staffing support, classification appeals and reviews, equal employment opportunity (EEO) and administrative investigations, and mediation. Many of these activities entail services dealing with recruiting, developing, and retaining employees, and they occupy the middle ground between the primarily technical work in tier I and the increased strategic focus needed for tier III activities. Drivers for this tier of activities included freeing staff to focus on core activities and supplementing a lack of staff to perform the activity. Tier II activities often involve partial outsourcing, using ASD for only a component of the human capital function, whereas a tier I activity such as drug testing may be completely outsourced. NGA’s Training and Doctrine Directorate, for example, used OPM’s TMA program to select and evaluate providers for its Leadership Program. The agency used a combination of in-house expertise and contractors to design and deliver the leadership training. Within tier II activities, components of training development and delivery were the most frequently cited human capital activities for which the agencies used ASD. NAPA’s 1997 report also noted outsourcing of training by federal agencies as substantial. As one example, USDA turned to a private sector contractor to help develop the design for a corporate leadership development program to prepare upper-level managers for future leadership roles at USDA. One of the rationales for relying on a contractor was that the contractor had the research edge on best practices gleaned from completing needs assessments with other organizations.
In addition to using the private sector, several agencies used the training services of providers such as the USDA Graduate School and the Federal Executive Institute. OPM is also working on another training tool for federal agencies to use. E-Training, one of OPM’s e-government initiatives, is designed to create a governmentwide e-Training environment to support the development of the federal workforce and provide a single source for on-line training and strategic human capital development for all federal employees. OPM expects that its initiative will allow agencies to focus their own training efforts on unique needs, thus maximizing the effectiveness of their expenditures on workforce performance. Agencies also used ASD for tier II activities such as investigations, mediation, classification and staffing, and recruiting. FWS, for example, contracted for classification appeals and studies, EEO and administrative investigations, and mediations. The agency maintained that ASD was useful in this case because, given the sporadic nature of some of these activities, it could contract for services only when it needed them. MMS contracted with a retired employee to perform staffing, classification, and employee relations functions. Two of the agencies used ASD for some component of their recruitment function. For example, although the contract is new and NGA has not yet directly tracked changes due to this initiative, the agency anticipates that contracting for some of its recruitment activities will provide better customer service and help confront reduced human capital staffing. Tier III activities, which involve the formulation of human capital strategy and policy support, represent a more recent application of ASD for human capital activities. Examples of the agencies’ tier III ASD activities are strategic human capital management planning and benchmarking. Drivers for these activities included expanding the agencies’ base of expertise and gaining access to new ideas and methodologies.
All but one of the agencies reported using ASD for some activities within tier III, often using private sector providers. Several agencies noted that the use of ASD for tier III activities enabled their human capital offices to obtain access to the right mix of skills quickly in order to meet critical deadlines, thereby providing the agency with new tools and capabilities. USITC, for example, through OPM’s TMA program, contracted for initiatives in strategic workforce planning. The agency used contractors to help define its human capital vision and models and to develop occupation guides and a human capital plan. USDA teamed with a contractor to conduct a skills gap analysis to identify critical workforce skills and analyze skills gaps. USDA reported that the contractor provided third-party objectivity in retrieving and assessing information, used its own technology to analyze data, and produced a model based on its own scientific expertise that assisted USDA managers in determining workforce skills needs for closing the gaps in the next 5 years. USCG contracted for the use of OPM’s Organizational Assessment Survey after sporadic, unsatisfactory in-house attempts to manage the survey development, administration, data collection, analysis, and required reporting. Instead of investing in three full-time employees supplemented by six part-time employees that USCG believed would be needed to manage an annual survey, it reduced the resources needed to manage the survey effort to one full-time employee supplemented by two part-time employees. According to an agency official, the estimated annual cost for the project was reduced by approximately $300,000. All of the above examples of ASD for the three tiers of activities concerned specific activities that were outsourced to a variety of different providers. Within private sector human capital offices, however, there is an emerging trend toward aggregating multiple human capital activities into one ASD contract.
The 2002 Conference Board report on human resources outsourcing trends found that although most of the companies they surveyed used more than one source provider, 12 percent of the companies surveyed outsourced the bulk of their human capital functions to a single provider and 9 percent were in the process of doing so or planned to do so within the next 3 years. Aggregating activities into one contract can result in better contracting leverage. This is riskier, however, in terms of the complexity of the arrangement and the assumption that one vendor can deliver and maintain the same level of service previously provided in-house or by a variety of different providers. The Mint was the only one of the eight selected agencies currently considering using one ASD provider for the majority of its human capital activities through a competitive sourcing initiative governed by the Office of Management and Budget’s (OMB) Circular No. A-76. The initiative involves all of the Mint’s human capital functions except employee and labor relations and policy, and the agency expects to complete the competitive sourcing study no later than February 2005. Although a Mint official reported challenges maintaining morale and staff during the formal cost comparison, the agency expects that the study will eventually result in reduced costs. Our work looking at the progress selected agencies were making in establishing competitive sourcing programs also found that ensuring and maintaining morale was a challenge for those agencies. NAPA reported that trust between agency leaders and employees can be shaken by the consideration of nontraditional staffing. In addition, employees may suffer stress-induced illness, increased absenteeism, hostility, and depression, among other symptoms of changed organizations. The report noted that providing an authoritative source for employees to get accurate information minimizes the unknown and helps control rumors and miscommunication.
We examined the agencies’ management of ASD by looking at their approaches to three phases of contract management. The phases included (1) making the sourcing decision, (2) developing the contract and selecting the provider, and (3) monitoring the provider’s performance. Our review also identified some of the lessons the agencies learned and the role that OPM plays in assisting agencies with their management of ASD. To make a sourcing decision, organizations need to determine whether internal capability or external expertise can more effectively meet their needs. The Commercial Activities Panel, chaired by the Comptroller General of the United States, noted that determining whether the public or the private sector would be the most appropriate provider of the services the government needs is an important, and often highly charged, question. The report also stated that determining whether internal or external sources should be used has proved difficult for agencies because of systems and budgeting practices that (1) do not adequately account for total costs and (2) inhibit the government’s ability to manage its activities in the most effective manner possible. In prior work examining the competitive sourcing initiatives of selected agencies, we reported that several agencies had developed strategic and transparent sourcing approaches. The approaches included the comprehensive analysis of factors such as mission impact, potential savings, risks, current level of efficiency, market conditions, and current and projected workforce profiles. To make good human capital sourcing decisions, NAPA’s ASD report also suggested identifying constraints on the process, such as the lack of capacity within the organization to manage the ASD contract and the legal, regulatory, and ethical issues related to the governmental nature of the work. The selected agencies reported similarities on a conceptual level in how they made their sourcing decisions. 
Officials generally agreed about which human capital activities were suitable candidates for ASD. Their considerations were consistent with the Commercial Activities Panel sourcing principles. For example, agency officials recognized that some activities are inherently governmental or are functions that should be performed by federal workers and that both quality and cost factors should be considered. The general consensus was that virtually any activity could be an ASD candidate as long as it did not require an intimate knowledge of the agency or involve oversight or decision-making authority that should belong with the agency. There was also general consensus that ASD should be considered in situations where it could improve quality without increasing costs or keep the same quality at a lower cost and in situations where activities cannot be accomplished with the agency’s current skills and resources. Some of the agencies excluded from ASD any activity directly related to policy, while one official maintained that policy development, as opposed to policy decision making, was appropriate for ASD. Notwithstanding the broad conceptual agreement among the agencies, they showed differences in their choices of human capital ASD activities. This may be partially due to differences in the activities they deemed to be essential to the agency or to the human capital office. The USITC Human Resources Director, for example, noted that USITC staffing was a function that required intimate knowledge of the agency and one that it would not consider for ASD. Private sector research also indicates that some companies are reluctant to outsource activities such as employee communications, assessment, and recruiting because they are critical to the company’s corporate culture and provide a “personal touch.” The differences may also be due to variations in existing capacity and in how ASD was used in the agencies’ overall human capital strategy. 
FWS, for example, noted that as the agency continues to identify areas for consolidation and efficiency, it sees its use of ASD increasing as a means to provide better customer service and supplement human capital skills not present in the current workforce. Several of the agencies, NGA and USITC in particular, remarked that ASD was integral to their overall human capital strategy. In fact, an NGA official said that the agency was established in 1996 with a design that encouraged the use of ASD. On the other hand, USDA stated that it used ASD primarily to meet critical deadlines. Lesson learned: Understand the complexity and requirements of the activity prior to making an ASD decision. In order to strategically and objectively make a sourcing decision, several agency officials emphasized the importance of laying out ASD requirements and goals and letting these expectations guide the process. In order to do this and to manage for results, they underscored the importance of knowing as much as possible about the complexity and requirements of the activity before making an ASD decision. As a USCG human capital official expressed it, throwing a “problem” over the transom to a provider and waiting for a “solution” to be thrown back is not a viable model. Similarly, a human capital official from MMS said that in cases where ASD did not work well, there was a lack of a clear vision about the work to be done, and a NAPA panel report examining human capital outsourcing experiences noted that from the contractor’s viewpoint, poorly defined requirements are a major flaw in government management of outsourcing. To help solve this problem, one of the leading commercial practices for outsourcing of information technology (IT) services includes incorporating lessons learned from peers who have engaged in similar sourcing decisions. 
The ASD contract defines the legal terms of the relationship between the agency and the provider and sets the expectations for service levels and delivery of essential services. These critical requirements are captured in the contract as fundamental expectations. The development of the contract is the foundation on which the relationship with the provider is built, and once the agency understands the essential contractual requirements, it can begin to identify providers that can meet its needs. According to the NAPA human capital ASD report, the scope of the activity being converted to ASD and its relative criticality to the agency mission should determine the level of effort needed to develop the contracts and select the providers. Human capital officials from the agencies reported using similar methods to develop their ASD contracts and select their providers. Officials said that they followed the guidance provided by the contract and procurement office representative who solicited the bids and awarded the contract. NGA stated that its general strategy was to rely on agency subject matter experts who created detailed statements of work. For example, the agency expert in the interpreting field provided the expertise needed for cost comparison, evaluation, and program management for NGA’s interpreting services. Officials listed reputation and experience of the provider as important factors in the selection process. Some agencies noted using the panel award approach to select providers. To select its employee assistance program provider, for example, NGA assembled a panel comprised of agency officials who conducted interviews with each of the candidates and required the finalists to make presentations. Some officials stressed the importance of using established contract vehicles, such as GSA’s contract schedule or OPM’s TMA program, because it made the procurement process easier. 
Agencies also noted that joining other agencies’ contracts reduced the administrative effort needed on their part in terms of contract development. Lesson learned: Articulating ASD contract terms that are flexible but include identified outcomes and measurable performance standards is an essential requirement for meeting ASD objectives. After determining what the use of ASD should accomplish, several agencies shared the importance of translating these expectations in the ASD contract into flexible terms with measurable outcomes. Accordingly, an essential part of the contract is to define the level and quality of service required of the ASD provider as well as specific evaluation criteria. A Mint official said that performance-based contracts with metrics and quality assurance plans helped the agency ensure that expectations were met. Congress and OMB have also encouraged greater use of performance-based contracting, which emphasizes spelling out the desired end result, while leaving the manner in which the work is to be performed up to the contractor. Other attributes of performance-based contracting include measurable performance standards; quality assurance plans that describe how the contractor’s performance will be evaluated; and positive and negative incentives, when appropriate. In developing contracts and selecting providers, leading commercial practices for acquiring IT services also suggest that the contract must be flexible enough to adapt to changes. The practices note that the contract should include clauses for issues such as resolving disputes promptly, conducting regularly scheduled meetings, and declaring a significant event that can lead to a change in the contract. A Mint contract, for example, specified how the contract would be changed if access to desired data was not an option. The monitoring phase of ASD management involves ensuring that the ASD provider is meeting performance requirements.
The previous phases addressed the extensive preparation that must precede the ASD provider’s assuming responsibility for an activity. Monitoring includes examining performance data for specific activities and making sure that the overall objectives for using ASD are being met. According to commercial practices, organizations need to examine internal service levels as well as maintain an external view of the performance of other ASD providers to make certain that their current relationship is still advantageous to the organization. The agencies reported both formal and informal ways of monitoring their ASD contracts. A contracting officer’s technical representative (COTR) or a designee generally performed the formal oversight on an ongoing basis, with line managers positioned to perform the informal monitoring. DOE, for example, described monitoring its human capital processing functions by having a COTR work in conjunction with the program or technical monitors, DOE’s office of procurement, and the direct customers to ensure that problems were resolved and needs and expectations met. NGA looked at the measures built into its quality assurance plans, which included descriptions of the deliverables, performance standards, acceptable quality levels, and methods used to assure quality, such as random testing. The agency also noted that it periodically checks prices with outside service providers to make sure it is not paying more than the market rate for the contracted services. An MMS human capital official said that accountability for monitoring the overall success of the ASD strategy for a particular function belongs to the line manager responsible for that function, who determines if program goals are being met. Many of the agencies said that they used performance measures as part of their ASD monitoring process. The types of metrics used varied with the types of ASD human capital activities, but generally included elements of quality or timeliness.
For projects dealing with human capital strategy and policy support, agencies mentioned that along with quality, their measures included timely completion and evaluation of interim deliverables during the project. The USITC Human Resources Director stressed that when using ASD for a specific project, it was important to incorporate ongoing milestones into the contract as markers for how well the project is progressing. Agencies using ASD for training and development activities also reported using similar measures to monitor the success of the activities. For example, NGA and DOE stated that they used a multilevel training evaluation model to assess the effectiveness of the methodology, media, and delivery mechanisms used by their ASD providers. Several of the agencies used client satisfaction surveys to gauge the quality of their employee services provided through ASD. USCG, for example, used surveys and had one-to-one contact with members who used its employee assistance program. Lesson learned: Creating a relationship with the ASD provider is key to resolving issues that arise in addressing concerns and directing work. Human capital officials emphasized that smooth and constructive interaction between the agency and the ASD provider at an operational level is crucial to achieving the expectations of the ASD arrangement. Relationship management goes beyond the structure of the contract, and if a good relationship exists between the agency and the ASD provider, many problems that may arise can be worked out. As the USITC Human Resources Director remarked, the agency needs to have the capacity to manage relationships, not just contracts, with ASD providers. In looking at leading commercial practices for outsourcing IT services, we included relationship management as one of three critical success factors contributing to successful outsourcing, a capability that must be present to implement good outsourcing practices.
The Director of OPM also emphasized the importance of program managers’ ability to work inside partnerships and relationships to help develop a new paradigm of government-contractor relationships. She said that OPM plans to analyze human capital contracts that were poorly managed and use those lessons to improve the process. OPM has a central role in agencies’ management of ASD by providing assistance and guidance in operating human capital programs. As the President’s agent and adviser for human capital activities, OPM’s overall goal is to aid federal agencies in adopting human resources management systems that improve their ability to build successful, high-performance organizations. The agency’s five e-government initiatives are examples of this effort. In addition, several agencies used OPM’s TMA program to help them manage their ASD efforts. The TMA contracting vehicle assists government agencies with training and human capital technical assistance projects. (See fig. 2 for more details.) OPM’s TMA program may be appropriate when agencies have a need for (1) outside expertise to help define human capital needs and frame requirements, (2) help doing something the agency has never done before, (3) short-term help to get a specific task accomplished because internal resources are not available, (4) long-term supplemental assistance to accomplish ongoing, mission-critical objectives and activities, and (5) plans to competitively source certain learning or human capital activities or functions. USITC, for example, used TMA to screen and qualify a select group of contractors to assist the agency in its workforce planning initiatives. The agency’s Human Resources Director said TMA facilitated USITC’s ability to appropriately identify a contractor that could work best in the agency’s culture. She also noted that the TMA program assists smaller agencies in gaining clout with contractors because of the program’s large volume of contracts.
The Training and Management Assistance (TMA) Program, a fee-based interagency contract service program, provides comprehensive, end-to-end service that agencies can use for their training and human capital needs.

Products and Services: Agencies can receive customized and integrated services in areas such as knowledge management, training, compensation, and performance management.

Process: TMA project managers work with agencies on
1. developing statements of work,
2. completing interagency agreements,
3. selecting providers,
4. attending project kick-off meetings,
5. reviewing and approving management plans, and
6. monitoring projects to completion.

OPM also plays a role in assisting agencies’ management of ASD through its authority to oversee management of human capital activities. The Director of OPM has called for more rigorous oversight of federal contracts used to acquire personnel management services for agencies and their employees. In addition, to ensure professional oversight of contracts, OPM has instructed the Federal Executive Institute and the management development centers to begin to train and retrain a new cadre of program managers with the skills necessary to manage relationships and establish partnerships with their peers in the procurement industry. The CHCO Academy, created by OPM to educate chief human capital officers about human capital management issues, included outsourcing human resource services as one of its agenda topics. While OPM has made efforts to help agencies with their human capital ASD initiatives, there are additional opportunities to assist the agencies in compiling, analyzing, and disseminating information on federal agencies’ use of ASD for human capital activities. Several agency officials noted that having a clearinghouse of ASD information, such as posting information on ASD projects and providers, and more communication sharing in general would help them manage their ASD projects.
They observed that joining other agencies with existing contracts can be an effective strategy and that communication among agencies about the reputation of ASD providers plays a role in their selection process. An NGA official noted that (1) partnering with other federal agencies could provide a venue to learn from each other versus developing individually and (2) agencies could learn more from each other’s ASD accomplishments and mistakes. OMB, for example, is developing a competitive sourcing data-tracking system to facilitate the sharing of competitive sourcing information by allowing agencies to identify planned, ongoing, and completed competitions across the government. The agency plans to use the system to generate more consistent and accurate statistics, including those on costs and related savings. The importance of sharing information about human capital ASD efforts has recently gained attention as a few agencies have signed large contracts for human capital services. Legislation creating the CHCO Council also highlighted the importance of this activity by detailing that one of the responsibilities of the Council is to advise and coordinate agency activities for improving the quality of human resources information. Recent studies looking at private sector organizations suggest that ASD use for human capital activities is being leveraged to achieve a variety of strategic and tactical objectives within human capital offices. The range of human capital activities and the reported objectives for the selected agencies’ use of ASD indicated the same. Although more evaluation needs to be done, the agencies’ use of ASD for activities such as strategic human capital management and workforce planning showed that ASD provides access to new skills, expertise, and technology that can facilitate implementation of new human capital initiatives. 
Likewise, freeing human capital staff from transactional and administrative tasks such as payroll administration and training delivery pointed to cost savings and an improved ability to focus on mission-critical activities. Given its potential benefits, it appears that, similar to its use in the private sector, the use of ASD for human capital activities will increase among federal agencies. There currently is not, however, a widely shared body of knowledge about federal agencies’ use of ASD for human capital activities. By sharing experiences and lessons learned, agencies may be able to tap into the benefits of using ASD while avoiding some of the problems. Although OPM’s TMA program appears to help agencies manage their use of ASD, OPM could supply another necessary link to the agencies by providing comprehensive information about how to use ASD for human capital activities. The CHCO Council could be an excellent vehicle to assist in this area. Given the need expressed by agency officials about the importance of sharing data and lessons learned concerning the use of ASD for human capital activities and consistent with OPM’s ongoing efforts in this regard, we recommend that the Director of OPM take the following action: Work with the CHCO Council to create additional capability within OPM to research, compile, and analyze information on the effective and innovative use of ASD and strengthen its role as a clearinghouse for information about when, where, and how ASD is being used for human capital activities and how ASD can be used to help agency human capital offices meet their requirements. OPM should work with the CHCO Council to disseminate the type of spending data that human capital offices could use to leverage their buying power, reduce costs, and provide better management and oversight of their ASD providers. 
Such data would include the types of human capital services being acquired, which ASD providers are being used for specific services, how results are being measured, and how much is being spent on specific ASD activities. We provided a draft of this report to the Director of OPM, the Secretary of Agriculture, the Secretary of Defense, the Secretary of Energy, the Secretary of Homeland Security, the Secretary of the Interior, the Chairman of the International Trade Commission, and the Director of the Mint. We received written comments from OPM and the Department of the Interior, which are included in appendixes III and IV. OPM stated that the report contained a good model for looking at human capital ASD use and that the recommendation was consistent with the agency’s concern for human capital contracting, for which OPM has the lead. OPM expressed concern, however, that we had not addressed the role of its human capital officers in helping agencies improve their human capital practices or how agencies ensure that their ASD providers comply with regulatory and statutory requirements. We did not assess the actions of the OPM human capital officers because their role did not surface in our interviews with agency officials about their use of ASD for human capital activities. Regarding OPM’s concern that functions provided through ASD meet appropriate federal regulatory and statutory requirements, we agree and believe our recommendation can help address this important issue. In addition, the Department of the Interior suggested that it would be helpful if GAO or OPM followed this report with a further study that would examine the quality and value of various ASD products and providers to allow for comparisons of similar services. We believe that our recommendation will also help address this concern. Based on comments from DOE that we received by e-mail, we clarified our definition of core activities. 
DOE also suggested an alternative way to group human capital activities. We believe that the framework is adequate for the discussion and summary for which it was intended. The Department of Defense, USCG, and USDA noted that they had no comments on the report. USITC and the Mint had several technical comments that we incorporated. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 15 days after its date. At that time, we will provide copies of this report to other interested congressional parties, the Director of OPM, and the federal agencies and offices discussed in this report. We will also make this report available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions about this report, please contact me or William Doherty at (202) 512-6806 or at [email protected] or [email protected]. Other contributors are acknowledged in appendix V. The objectives of this report were to identify the human capital activities selected agencies are accomplishing through the use of alternative service delivery (ASD) options and the basis on which they decided to use it and to describe how the use of ASD is being managed and the lessons learned by the selected agencies. To address these objectives, we first synthesized information from a literature review including articles, studies, and reports on the use of ASD for human capital activities in both public and private sector organizations. We also gathered information from a variety of sources, such as our past work on agencies’ contracting efforts and other reports on federal agencies’ use of ASD, to characterize the ASD options currently being used by federal agencies to accomplish their human capital activities. On the basis of this work, we identified a set of federal agencies varying in size and mission that were using ASD for at least some of their human capital activities. 
We consulted with human capital experts from George Washington University, the National Academy of Public Administration, and a private sector consultant for federal contract management to assess whether they thought particular agencies in this set would yield examples of ASD use for human capital activities. On the basis of their suggestions and our previous research, we focused on ASD practices in eight federal agencies: the Department of Energy (DOE), the Department of the Interior’s U.S. Fish and Wildlife Service (FWS) and Minerals Management Service (MMS), the National Geospatial-Intelligence Agency (NGA), the U.S. Coast Guard (USCG), the U.S. Department of Agriculture’s (USDA) headquarters, the U.S. International Trade Commission (USITC), and the U.S. Mint’s headquarters. The agency selection process was not designed to produce findings that could be considered representative of the use of ASD for human capital activities in the federal government as a whole, but rather to provide illustrative examples of how the selected agencies were using ASD. We conducted semistructured interviews with human capital officials from the selected agencies to gather information on (1) the human capital activities for which the agencies were using ASD, (2) the basis of their decisions, (3) how they were managing the use of ASD, and (4) the lessons they had learned. Some agencies provided documents such as final ASD projects, project plans, interagency service agreements, and contracts, which we reviewed. We did not verify the agencies’ cost savings estimates. After reviewing and analyzing the agencies’ material and responses to our interview questions, we developed a framework for organizing and discussing their use of ASD for human capital activities. (See fig. 1.) As shown in our framework, the activities are grouped into three overlapping tiers based on whether the activity had more of a technical or a strategic focus. 
We then identified the primary drivers and the primary ASD options used for each tier. Our work was conducted from August 2003 through February 2004 in accordance with generally accepted government auditing standards. Federal agencies have a variety of ASD options available to help them accomplish their human capital activities. The options include mechanisms that provide reimbursable services from one agency to another and contracting with the private sector. Agencies also provide reimbursable services that help other agencies gain access to private sector contracts. The options listed below are some examples of ASD mechanisms used by federal agencies to accomplish their human capital activities. Intragovernmental revolving (IR) funds provide common support services required by many federal agencies. An IR fund conducts continuing cycles of businesslike activity within and between government agencies. It charges for the sale of products or services and uses the collections to finance its operations, usually without a requirement for annual appropriations. Each IR fund is established by law. Generally, the specific legal authorities creating IR funds authorize these funds to enter into intragovernmental transactions and provide flexibility by allowing the client agency’s fiscal year funds to remain obligated, even after the end of the fiscal year, to pay for the goods or services when delivered. One businesslike entity providing human capital services is the U.S. Department of Agriculture’s National Finance Center (NFC) in New Orleans. NFC provides a variety of other federal agencies with automated information systems and services for personnel, payroll, and voucher and invoice payment. The Government Management Reform Act of 1994 authorized the Office of Management and Budget (OMB) to designate six franchise fund pilots to provide common administrative services on a fully reimbursable basis. 
Franchise funds are a type of intragovernmental revolving fund created to be fully self-supporting, competitive, businesslike entities within the federal government. The franchise fund pilots are located in the Departments of Commerce, Health and Human Services, the Interior, the Treasury, Veterans Affairs, and at the Environmental Protection Agency. The six pilots provide a variety of common services, such as acquisition management, financial management services, and employee assistance programs. The legal authorities creating the franchise funds are similar to those of other IR funds. However, most of the franchise funds have the specific authority to carry over into the next fiscal year up to 4 percent of the annual income of the fund for capital equipment and financial management improvements. Most other IR funds do not have this authority. The Treasury franchise fund, for example, contains multiple business units operating under the brand name FedSource. FedSource offers various human capital services such as recruitment, employee assistance, position classification, and alternative dispute resolution through contracts with multiple vendors experienced in providing human capital services in the federal sector. Cooperative administrative support units (CASU) have provided services since 1986, and most operate under the authority of the Economy Act of 1932, as amended. CASUs are entrepreneurial organizations that provide the full range of support services on a reimbursable basis to federal agencies. Federal agencies in a local community identify services that they would like to share under the leadership of one or more host agencies. The host agency is reimbursed for all costs incurred in providing the services to customer agencies. 
Several CASUs provide services in conjunction with a franchise fund and operate under the authority of the franchise fund, which allows them to make use of provisions more expansive than those of the Economy Act, including permitting the customer agency’s fiscal year funds to remain obligated to pay for services when delivered, even after the end of the fiscal year. The Southeast Regional CASU (SER-CASU) is an example of a chartered unit within the National CASU Network. SER-CASU offers human capital services, such as employee assistance program support and training services. Federal agencies also use fee-for-service interagency contract service programs. The programs are being used in a wide variety of situations, from those in which a single agency provides limited contracting assistance to an all-inclusive approach in which the provider agency’s contracting office handles all aspects of the procurement. The increased use of interagency contract service programs has come about as a result of reforms and legislation passed in the 1990s, which allowed agencies to streamline the acquisition process, operate more like businesses, and offer increasing types of services to other agencies. The Office of Personnel Management’s Training and Management Assistance (TMA) program is an example of an interagency contract service program. The TMA program operates under the IR fund established by 5 U.S.C. § 1304(e). It is an expedited contracting process for federal agencies seeking human capital management and development in areas such as knowledge management, training, and workforce planning. For a fee, clients access project managers, technology, and prequalified contractors with the intended result of time and cost savings compared to the agency undertaking its own procurement actions. Contracting can be defined as the hiring of private sector firms or nonprofit organizations to provide a good or service for the government. 
In contrast to the use of IR funds, CASUs, and interagency contract service programs, the agency uses its own contracting authority to enter into a contract with a company and manages the contract. For example, the Department of Homeland Security has awarded a contract to a company to help design a human capital strategic plan, which would assist the department in aligning its human capital requirements with its mission needs. Partnerships can be defined as voluntary alliances with other organizations. They do not necessarily involve the exchange of funds. For example, the Census Bureau’s Partnership and Data Services program continues and expands upon more than 140,000 organizational partnerships established during Census 2000. During the census, the Bureau relied on its extensive network of partners at the national, state, and local levels to help recruit employees for more than half a million temporary jobs. Judith Kordahl and Caroline Villanueva also made key contributions to this report. The General Accounting Office, the audit, evaluation and investigative arm of Congress, exists to support Congress in meeting its constitutional responsibilities and to help improve the performance and accountability of the federal government for the American people. GAO examines the use of public funds; evaluates federal programs and policies; and provides analyses, recommendations, and other assistance to help Congress make informed oversight, policy, and funding decisions. GAO’s commitment to good government is reflected in its core values of accountability, integrity, and reliability. The fastest and easiest way to obtain copies of GAO documents at no cost is through the Internet. GAO’s Web site (www.gao.gov) contains abstracts and full- text files of current reports and testimony and an expanding archive of older products. The Web site features a search engine to help you locate documents using key words and phrases. 
You can print these documents in their entirety, including charts and other graphics. Each day, GAO issues a list of newly released reports, testimony, and correspondence. GAO posts this list, known as “Today’s Reports,” on its Web site daily. The list contains links to the full-text document files. To have GAO e-mail this list to you every afternoon, go to www.gao.gov and select “Subscribe to e-mail alerts” under the “Order GAO Products” heading.
Human capital offices have traditionally used alternative service delivery (ASD)--the use of other than internal staff to provide a service or to deliver a product--as a way to reduce costs for transaction-based services. GAO was asked to identify which human capital activities agencies were selecting for ASD, the reasons why, how they were managing the process, and some of the lessons they had learned. Eight agencies were selected to provide illustrative examples of ASD use. The selected agencies were using ASD for the full range of their human capital activities. Agencies generally approached their management of ASD in similar ways. They conceptually agreed that human capital activities that did not require an intimate knowledge of the agency, oversight, or decision-making authority could be considered for ASD, although in practice they showed differences in their choices of ASD activities. GAO identified several lessons the agencies had learned about ASD management, such as the importance of understanding the complexity and requirements of an activity before making an ASD decision. As the President's agent and adviser for human capital activities, OPM also has a central role in assisting agencies' management of ASD. Several agencies noted that they used OPM's Training and Management Assistance program, which provides human capital contract assistance. However, the officials also cited the need for sharing information about specific ASD efforts, useful metrics, and lessons learned.
We reviewed reconstruction contracts that had been funded, in whole or in part, with U.S. appropriated funds. We focused our review on new contracts, modifications, task orders under existing contracts, and contract actions using the General Services Administration’s (GSA) federal supply schedule program as of September 30, 2003. We did not review contracts that were funded entirely with international or Iraqi national funds, such as funds seized after the 1991 Gulf War or funds that were discovered during Operation Iraqi Freedom in 2003. We also did not review contracts or task orders that were used only for support of military operations or grants and cooperative agreements awarded to international or nongovernmental organizations. We continue to evaluate various issues related to military operations and the progress in rebuilding Iraq under separate reviews. To determine the number of reconstruction contract actions, the types of contract actions, the procedures used to make the awards, and the funding sources, we requested information from each of the principal organizations responsible for rebuilding activities in Iraq: the CPA, the Office of the Secretary of Defense, the Department of the Army, the Army Corps of Engineers, USAID, and the Departments of State and Justice. To verify the information provided, we requested copies of each contract action issued as of September 30, 2003, and corrected the information provided as appropriate. Agency officials could not provide the contract files for a limited number of small-dollar contracts awarded during the early stages of the reconstruction effort. To determine the amount obligated for reconstruction, we primarily used the obligation data recorded in the contracts. We also reviewed the data maintained by the agencies’ budget offices and information reflected in the Office of Management and Budget’s (OMB) quarterly status reports. 
To obtain information on contract activities since September 2003, we interviewed CPA and agency officials, attended industry day conferences, and reviewed solicitations and other relevant agency documents. To determine whether agencies had complied with applicable laws and regulations governing competition when awarding contracts and issuing task orders, we reviewed the requirements of the Competition in Contracting Act (CICA) of 1984 and other relevant laws and regulations. We judgmentally selected 25 contract actions, consisting of 14 new contracts awarded using other than full and open competition and 11 task orders issued under existing contracts. These 25 contract actions represented about 97 percent of the total dollars obligated for reconstruction through September 30, 2003. New contracts accounted for nearly 80 percent of this spending. We selected the 25 contracts or task orders based on various factors. We focused on high-dollar value contracts and task orders, and on contracts awarded using other than full and open competitive procedures. We also considered whether audits by the DOD or USAID Inspectors General were under way. Overall, the 25 contracts or task orders consisted of the following: the largest contract awarded and the 4 largest task orders, by dollar value, issued to support CPA operations; 9 contracts awarded and 1 task order issued by USAID, as well as 1 task order issued under an Air Force contract to provide logistical support for USAID-managed efforts; 2 contracts awarded and 4 task orders issued by the Army Corps of Engineers and the Army Field Support Command to help restore Iraq’s oil or electrical infrastructure; 1 contract awarded and 1 task order issued by the Army to train or equip the New Iraqi Army; and 1 contract awarded by the Department of State to support Iraqi law enforcement efforts. 
For new contract awards, we determined whether agency officials followed appropriate procedures in using other than full and open competition and assessed the agency’s justification for its contracting approach. For task orders issued under existing contracts, we determined whether the task orders were within the scope of the existing contracts, and if not, whether the agencies had followed proper procedures to add the work. To do so, we obtained the contracts or task orders and associated modifications, justification and approval documentation, negotiation memoranda, audit reports, and other relevant documents. We discussed the award and issuance process with agency procurement personnel, including contracting officers, program managers, and, in some cases, agency counsel. We also reviewed audit reports on various procurement issues prepared by the DOD and USAID Inspectors General and the Defense Contract Audit Agency (DCAA). To assess agencies’ initial contract administration efforts, we interviewed procurement officials to determine how contract administration for their contracts was initially staffed, including the use of support contracts to assist in administering the contracts. We obtained information on plans for reaching agreement on key contract terms and conditions. We also reviewed the 25 contracts or task orders to determine whether they included provisions related to contract administration, such as quality assurance plans, requirements for monthly status reports, and subcontractor management plans. As part of our monitoring of reconstruction activities, we conducted field visits in October 2003 in Baghdad and in other areas in Iraq, including Al Hillah and Al Basrah. During these visits, we held discussions with officials and visited project sites, including power plants, oil wells, oil processing facilities, water and sewage systems, schools, and many other reconstruction activities. 
During these visits, we observed the challenges faced in carrying out reconstruction efforts, including the hostile security environment, poor communications, and unsettled working conditions. Appendix I lists the agencies visited during our review. We conducted our work between May 2003 and April 2004 in accordance with generally accepted government auditing standards. During the latter part of 2002, as diplomatic efforts to convince the former Iraqi regime to comply with United Nations Security Council resolutions continued, discussions took place within the administration about the need to rebuild Iraq should combat operations become necessary. In October 2002, OMB established a senior interagency team to establish a baseline assessment of conditions in Iraq and to develop relief and reconstruction plans. According to an OMB official, the team developed plans for immediate relief operations and longer-term reconstruction in 10 sectors: health, education, water and sanitation, electricity, shelter, transportation, governance and the rule of law, agriculture and rural development, telecommunications, and economic and financial policy. Though high-level planning continued through the fall of 2002, most of the agencies involved in the planning were not requested to initiate procurement actions for the rebuilding efforts until early in 2003. Once assigned the responsibilities, agency procurement personnel were instructed to be ready to award the initial contracts within a relatively short time period, often within weeks. During 2003, several agencies played a role in awarding or managing reconstruction contracts, most notably USAID and the Army Corps of Engineers. Various agencies awarded contracts on behalf of the CPA and its predecessor organization, the Office of Reconstruction and Humanitarian Assistance. Table 1 shows the principal areas of responsibility assigned to the CPA and other agencies. 
As of September 30, 2003, the agencies had obligated nearly $3.7 billion on 100 contracts or task orders for reconstruction efforts (see table 2). These obligations came from various funding sources, including U.S. appropriated funds and Iraqi assets. The Army Corps of Engineers and USAID together obligated about $3.2 billion, or nearly 86 percent of this total. The majority of these funds were used to rebuild Iraq’s oil infrastructure and to fund other capital-improvement projects, such as repairing schools, hospitals, and bridges. This spending reflects a relatively small part of the total amount that may be required to rebuild Iraq, with estimates ranging from $50 billion to $100 billion. Appendix II lists the 100 reconstruction contracts and task orders we identified and the associated obligations as of September 30, 2003. In November 2003, Congress appropriated an additional $18.4 billion for rebuilding activities. The CPA’s projected uses for the funds reflect a continued emphasis on rebuilding Iraq’s infrastructure and on providing improved security and law enforcement capabilities (see table 3). In appropriating these funds, Congress required that the CPA Administrator or the head of a federal agency notify Congress no later than 7 calendar days before using these funds to award a reconstruction contract valued at $5 million or more using other than full and open competitive procedures. U.S.-funded reconstruction efforts were undertaken through numerous contracts awarded by various U.S. agencies. CICA generally requires that federal contracts be awarded on the basis of full and open competition—that is, all responsible prospective contractors must be afforded the opportunity to compete. The process is intended to permit the government to rely on competitive market forces to obtain needed goods and services at fair and reasonable prices. Within this overall framework, agencies can use various procurement approaches to obtain goods and services. 
Each approach, as listed in table 4, involves different requirements with which agencies must comply. In some cases, agency officials may determine that a contractor working under an existing contract may be able to provide the required goods or services through issuance of a task order, thus obviating the need to award a new contract. Before awarding a task order under an existing contract, however, the agency must determine that the work to be added is within the scope of that contract (i.e., that the work fits within the statement of work, performance period, and maximum value of the existing contract). In making this determination, the contracting officer must decide whether the new work is encompassed by the existing contract’s statement of work and the original competition for that contract. Agencies generally complied with applicable laws and regulations governing competition when using sole-source or limited competition approaches to award the initial reconstruction contracts we reviewed. The exigent circumstances that existed immediately prior to, during, and following the war led agency officials to conclude that the use of full and open competitive procedures for new contracts would not be feasible. We found these decisions to be within the authority provided by law. We found several instances, however, in which agencies had issued task orders for work that was outside the scope of existing contracts. Such task orders do not satisfy legal requirements for competition. In these cases, the out-of-scope work should have been awarded using competitive procedures or supported with a Justification and Approval for other than full and open competition in accordance with legal requirements. Given the urgent need for reconstruction efforts, the authorities under the competition laws for using noncompetitive procedures provided agencies ample latitude to justify other than full and open competition to satisfy their needs. 
The agencies responsible for rebuilding Iraq generally complied with applicable requirements governing competition when awarding new contracts. While CICA requires that federal contracts be awarded on the basis of full and open competition, the law and implementing regulations recognize that there may be circumstances under which full and open competition would be impracticable, such as when contracts need to be awarded quickly to respond to unforeseen and urgent needs or when there is only one source for the required product or service. In such cases, agencies are given authority by law to award contracts under limited competition or on a sole-source basis, provided that the proposed actions are appropriately justified and approved. We reviewed 14 new contracts that were awarded using other than full and open competition: a total of 5 sole-source contracts awarded by the Army Corps of Engineers, the Army Field Support Command, and USAID and 9 limited competition contracts awarded by the Department of State, the Army Contracting Agency, and USAID (see table 5). Because of the limited time available to plan and commence reconstruction efforts, agency officials concluded that the use of full and open competitive procedures in awarding new contracts would not be feasible. For 13 of these new contracts, agency officials adequately justified their decisions and complied with the statutory and regulatory competition requirements. In the remaining case, the Department of State justified and approved the use of limited competition under a unique authority that, in our opinion, may not be a recognized exception to the competition requirements. State took steps to obtain competition, however, by inviting offers from four firms. State could have justified and approved its limited competition under recognized exceptions to the competition requirements. We found a lesser degree of compliance when agencies issued task orders under existing contracts. 
When an agency issues a task order under an existing contract, the competition law does not require competition beyond that obtained for the initial contract award, provided the task order does not increase the scope of the work, period of performance, or maximum value of the contract under which the order is issued. The scope, period, or maximum value may be increased only by modification of the contract, and competitive procedures are required to be used for any such increase unless an authorized exception applies. Determining whether work is within the scope of an existing task order contract is primarily an issue of contract interpretation and judgment by the contracting officer (in contrast to the contract's maximum value and performance period, which are explicitly stated in the contract). Other than the basic requirement that task orders be within scope, there are no statutory or regulatory criteria or procedures that guide a contracting officer in making this determination. Instead, guiding principles for scope of contract determinations are established in case law, such as bid protest decisions of the Comptroller General. These decisions establish that the key factor is whether there is a material difference between the new work and the contract that was originally awarded—in other words, whether the new work is something potential offerors reasonably could have anticipated in the competition for the underlying contract. Of the 11 task orders we reviewed, 2 were within the scope of the underlying contract and 7 were, in whole or in part, not within scope; we have reservations concerning whether 2 others were within scope (see table 6). The seven instances in which agencies issued task orders for work that was, in whole or in part, outside the scope of an existing contract are described on the following pages. 
In each of these cases, the out-of-scope work should have been awarded using competitive procedures or supported with a Justification and Approval for other than full and open competition in accordance with legal requirements. Given the urgent need for reconstruction efforts, the authorities under the competition laws for using noncompetitive procedures provided agencies ample latitude to justify other than full and open competition to satisfy their needs. The Defense Contracting Command-Washington (DCC-W) improperly used a GSA schedule contract to issue two task orders with a combined value of over $107 million for work that was outside the scope of the schedule contract. Under GSA's federal supply schedule program, GSA negotiates contracts with multiple firms for various commercial goods and services and makes those contracts available for other agencies to use. In March 2003, DCC-W placed two orders with Science Applications International Corporation (SAIC) under SAIC's schedule contract. One order involved development of a news media capability—including radio and television programming and broadcasting—in Iraq. The other required SAIC to recruit people identified by DOD as subject matter experts, enter into subcontracts with them, and provide them with travel and logistical support within the United States and Iraq. The schedule contract, however, was for management, organizational, and business improvement services for federal agencies. In our view, the statements of work for both task orders were outside the scope of the schedule contract, which typically would encompass work such as consultation, facilitation, and survey services. The period of performance for the media services task order has expired, and the task order for subject matter experts was extended through April 30, 2004. Over $91 million was obligated under an Air Force Contract Augmentation Program contract for delivery of commodities to USAID for reconstruction activities and logistical support for USAID's mission in Iraq. 
The contract is intended primarily to provide base-level logistical and operational support for Air Force deployments. Under an interagency agreement, the Air Force used the contract to provide USAID a variety of support tasks related to storage, inventory control and management, and other logistical and operational support. Some of these funds, however, had been obligated for goods and services such as building materials for Iraqi schools and planning to restore electrical power generation for Baghdad water treatment plants. Because these types of goods and services—though related to USAID's foreign assistance mission—are not related to support for a deployment, they appear to be outside the scope of the contract. When we brought this issue to the attention of Air Force officials, they agreed that some of the work was outside the scope of the contract, and they are issuing guidance to ensure that logistical support for USAID does not go beyond the scope of the contract. The Army Field Support Command issued a $1.9 million task order for contingency planning for the Iraqi oil infrastructure mission under its Logistics Civil Augmentation Program (LOGCAP) contract with Kellogg Brown & Root. The task order was not within the scope of that contract. This task order, issued in November 2002, required the contractor to develop a plan to repair and restore Iraq's oil infrastructure should Iraqi forces damage or destroy it. Because the contractor was knowledgeable about the U.S. Central Command's planning for conducting military operations, DOD officials determined the contractor was uniquely positioned to develop the contingency support plan. DOD determined that planning for the missions was within the scope of the LOGCAP contract, but it also determined that the actual execution of the Iraq oil mission, including prepositioning of fire-fighting equipment and teams, was beyond its scope. 
We agree with the DOD conclusion that repairing and continuing the operations of the Iraqi oil infrastructure are not within the scope of the contract. But unlike DOD, we conclude that preparation of the contingency support plan for this mission was also beyond the scope of the contract. We read the LOGCAP statement of work as contemplating planning efforts only for missions designated for possible contractor execution under the contract. Consequently, the Army Field Support Command should have prepared a written justification to authorize the work without competition. The resulting contingency plan was used as justification for subsequently awarding a sole-source contract to Kellogg Brown & Root for restoring the oil infrastructure, for which nearly $1.4 billion was obligated during fiscal year 2003. As noted in table 5, we found that the award of this contract generally complied with applicable legal standards. In March 2003, the Army Corps of Engineers conducted a limited competition resulting in multiple-award contracts with three firms—Washington International, Inc., Fluor Intercontinental, Inc., and Perini Corporation—for construction-related activities in the Central Command's area of responsibility. These contracts had a maximum value of $100 million each. In the latter part of August 2003, as efforts to restore electricity throughout Iraq lagged and amid concerns that the electrical shortages posed risks of social unrest and security threats to the CPA and the military forces, the Central Command tasked the Army Corps of Engineers with taking steps to rebuild the electrical infrastructure as quickly as possible. In response, the Army Corps of Engineers issued task orders under each of these contracts, causing them to exceed their maximum value. Consequently, the orders are outside the scope of the underlying contracts. 
The Army Corps of Engineers prepared a justification for award of the underlying contracts in August 2003 and a subsequent justification in September 2003 to increase the maximum value of each contract from $100 million to $500 million. Neither justification had been approved as of March 31, 2004. Finally, we note that section 803 of the National Defense Authorization Act for Fiscal Year 2002 (Pub. L. No. 107-107) requires that an order for services in excess of $100,000 issued under a multiple-award contract by or on behalf of a DOD agency be made on a competitive basis, unless a contracting officer justifies an exception in writing. The Army Corps of Engineers did not compete these task orders among the three multiple-award contractors. Rather, the agency and the contractors collectively decided to allocate the electrical infrastructure work based on geographical sectors and the capabilities of the contractors in the theater. We found that the contracting officer had not prepared a justification for these noncompetitive task orders. After we raised this issue with agency officials, the contracting officer prepared the required documentation in April 2004. As described in table 6, we also have reservations about whether work ordered under two other Army task orders was within the scope of an underlying contract for combat support. These task orders were issued by the Army Field Support Command for the CPA's logistical support and for a base camp used in training the New Iraqi Army. In these, as in the other cases, the competition laws provided agencies ample latitude to justify using other than full and open competition to satisfy their needs. The need to award contracts and begin reconstruction efforts quickly—the factors that led agencies to use other than full and open competition—also contributed to initial contract administration challenges. 
Faced with uncertainty as to the full extent of the rebuilding effort, agencies often authorized contractors to begin work before key terms and conditions, including the statement of work to be performed and the projected cost for that work, were fully defined. Until agreement is reached, contract incentives to control costs are likely to be less effective. Staffing constraints and security concerns posed further challenges. Agencies have made progress in addressing these issues, but there remains a backlog of contracts for which final agreement has not yet been reached. The CPA has created a new office to better manage and coordinate reconstruction efforts to be conducted over the next year. To meet urgent operational needs, as is the case in Iraq's reconstruction, agencies are permitted to authorize contractors to begin work before contracts or task orders have been definitized—that is, before key terms and conditions, including price, have been defined and agreed upon. While this approach allows agencies to initiate needed work quickly, it also can result in potentially significant additional costs and risks being imposed on the government. Agencies generally are required to definitize contractual actions within 180 days. For many of the contracts we reviewed, agencies authorized the contractors to begin work before terms were fully defined, and later reached final agreement on the scope and price of the work. As of March 2004, however, six DOD contracts or task orders had yet to be definitized; two involved work that had been completed more than a year earlier (see table 7). In total, nearly $1.8 billion had been obligated on these contracts or task orders as of September 30, 2003. These contracts or task orders had been awarded or issued by either the Army Corps of Engineers or the Army Field Support Command, and they include efforts to restore Iraq's oil and electrical infrastructures and to provide logistical support to the CPA. 
Agency officials attribute much of the delay in reaching agreement to continued growth in reconstruction efforts, which in turn has required numerous revisions to contract statements of work. The continued growth in requirements has resulted in an increase in both contractor costs and administrative workload for contractor and agency procurement personnel. For example, the Army Corps of Engineers' contract to restore Iraq's oil infrastructure had individual task orders placed in March and May 2003 that were supposed to be definitized within 180 days. Similarly, the Army Field Support Command has four task orders that have yet to be definitized. For example, the Army Field Support Command's task order to support the CPA was originally issued in March 2003, at an estimated cost of $858,503. As of September 30, 2003, the Army had obligated $204.1 million, and the statement of work had been modified a total of nine times. With each change, the contractor had to revise its cost and technical proposals, which also increased the workload for agency procurement personnel. The Army Field Support Command's revised schedule now calls for definitizing the task orders between June and October 2004. Some of the delays reflect concerns over the adequacy of the contractors' proposals. For example, on the task order awarded to restore Iraq's electrical infrastructure, the Defense Contract Audit Agency (DCAA) found a significant amount of proposed costs for which the contractor had not provided adequate support. Consequently, DCAA believed that the proposal was inadequate for the purposes of negotiating a fair and reasonable price. As of March 2004, negotiations between the contractor and the Army Corps of Engineers were still ongoing. To reduce risks, the Army Corps of Engineers has proposed paying the contractor only 85 percent of incurred costs until the contractor has adequately fulfilled its contract closeout responsibilities and acceptable business systems are in place. 
The lack of timely contract definitization potentially can have a significant impact on total contract costs and related risks. Specifically, the major reconstruction efforts have used cost-reimbursement type contracts under which the government has agreed, subject to cost ceilings, to reimburse the contractor for all reasonable and allowable costs incurred in performing the work. In two of the largest contract actions—the contract to repair and maintain Iraq's oil infrastructure and the task order to support the CPA operations—the agencies have included an award fee provision under which the contractor can earn additional profit for meeting set targets in specified areas, such as cost control. As long as work continues to be performed under an undefinitized contract, however, the award fee incentive is likely to be less effective as a cost control tool since there is less work remaining to be accomplished and therefore fewer costs to be controlled by the contractor. Given the high cost involved, particularly for the Iraq oil mission (over $2.5 billion), any reduction in cost control incentives potentially involves a significant contract cost risk. The lack of adequate staffing presented challenges to several agencies involved in reconstruction efforts and, at times, resulted in inadequate oversight of the contractors' activities. While agencies have taken actions, some of these early contract administration issues have yet to be fully resolved. When the CPA's predecessor organization—the Office of Reconstruction and Humanitarian Assistance—was established in mid-January 2003, it lacked an in-house contracting capability. It was not until February 27, 2003, that the Defense Contract Management Agency (DCMA) was asked to provide contracting support, including providing acquisition planning assistance and awarding and administering contracts. 
DOD officials noted that this tasking was unusual for DCMA, as it is typically responsible for administering, rather than awarding, contracts. DOD officials found that the Office of Reconstruction and Humanitarian Assistance did not have an official responsible for authorizing contract actions and supervising contracting officers and others performing procurement-related duties. Further, DOD had authorized positions for only two contracting officers, who had yet to arrive. In addition, DCMA officials reported that the lack of an organizational structure led to contractors providing draft statements of work and cost estimates to the contracting officers so that contracts could be awarded more quickly. Normally, it is the government's responsibility to provide statements of work and develop independent cost estimates. We found that there were not always sufficient in-country personnel to administer the contracts or task orders when they were initially awarded or issued. For example, for the federal supply schedule order issued in March 2003 by DCC-W to establish an Iraqi media capability, contractor personnel purchased property that was not part of the task order, including purchases that may not have been necessary or appropriate. According to DOD officials, contractor personnel purchased about $7 million in equipment and services not authorized under the contract, including an H-2 Hummer and a pickup truck, and then chartered a flight to have them delivered to Iraq. According to DCMA officials, these actions were primarily due to inadequate government property management to control or monitor the contractor's purchases. DCMA officials decided in May 2003 that it was in the best interests of the government to modify the approved equipment list to include the materials purchased by the contractor. 
The lack of in-country procurement staff proved problematic in another task order issued by DCC-W to help recruit and support subject matter experts to assist the CPA and Iraqi ministries. According to DCC-W and DCMA officials, there were initially neither contractor staff nor government officials to monitor the subject matter experts once they arrived in Iraq. DCMA officials indicated that some experts failed to report to duty or perform their responsibilities as expected or were no longer performing work under the task order. Staffing concerns affected other agencies as well. For example, USAID recognized early that its resources were insufficient to administer and oversee the contracts it expected to award. Consequently, USAID arranged for the Army Corps of Engineers to provide oversight on its $1.0 billion infrastructure contract, arranged to have DCAA audit contractors, and made plans to augment its mission in Iraq. As of January 2004, however, a senior USAID procurement official stated that its Iraq mission remained understaffed to provide adequate contract oversight in Iraq. USAID stated it has four full-time procurement staff who will be assigned to work in Iraq for 3 years. According to the senior official, this long-term commitment is essential to establishing the institutional knowledge needed to monitor and administer the contracts effectively. However, USAID indicated that given the workload, providing an appropriate degree of oversight would require at least seven additional personnel. Consequently, USAID found it necessary to augment the mission staff with personnel on temporary assignment from other USAID missions, who will serve between 1 and 3 months. Similarly, State Department officials noted that the Bureau of International Narcotics and Law Enforcement Affairs—the bureau responsible for monitoring State's law enforcement support contract—is understaffed. 
For example, the department official responsible for contract oversight has multiple, time-consuming roles. This official currently serves as both the program manager and the contracting officer's representative for the law enforcement support contract. As such, the official approves the contractor's monthly vouchers along with carrying out other detailed procurement tasks. The same official also has responsibilities for the department's efforts to recompete a $1.3 billion effort to provide worldwide law enforcement support and for law enforcement support efforts in Liberia and Haiti. To address the workload issue, the bureau has assigned two additional staff to assist in overseeing contract activities in Iraq and is exploring options for reorganizing the bureau to use resources more efficiently. Providing adequate oversight of reconstruction efforts is challenging given the uncertain security environment and harsh working conditions. During site visits to Iraq in October 2003, we observed the considerable degree to which these factors were affecting reconstruction efforts. For example, travel outside secure compounds occurred only in convoys of armored vehicles with armed security forces. Flak jackets and helmets were required to be worn or, at a minimum, carried. Communications were generally difficult and unreliable. In addition, the living and working environment afforded individuals little privacy or time to rest. We observed that personnel generally worked 12- to 15-hour days and often shared cramped living and working quarters. In Al Hillah, for example, five USAID personnel shared two small offices with their security team. To better coordinate and manage the $18.4 billion in reconstruction funding provided for fiscal year 2004, the CPA established a program management office that is responsible for infrastructure-related programs. 
The office, which includes representatives from USAID and the Army Corps of Engineers, is responsible for coordinating the efforts of the CPA, the Iraqi ministries, and other coalition partners. The office's acquisition strategy reflects a plan to award 1 program management support contract to support the program management office and to oversee reconstruction efforts in specific sectors—electricity, oil, public works and water, security and justice, transportation and communications, and buildings and health; 6 program management contracts to coordinate reconstruction efforts specific to each sector; and 15 to 20 design-build contracts to execute specific tasks. In March 2004, various DOD components, on behalf of the CPA, awarded 17 contracts—the program management support contract, the 6 sector-specific program management contracts, and the 10 design-build contracts. These contracts were awarded pursuant to a DOD decision to limit competition to firms from the United States, Iraq, coalition partners, and force contributing nations. In addition to these contracts, other agencies will continue to award and manage contracts for areas within their assigned areas of responsibility. For example, in January 2004, USAID competitively awarded a $1.8 billion contract to enable further reconstruction efforts, while the Army Corps of Engineers competitively awarded two contracts with a combined value of $2.0 billion to further repair and rehabilitate Iraq's oil infrastructure. USAID announced its intent to solicit bids on at least seven new contracts. One of these contracts is intended to provide USAID with an enhanced capability to carry out data collection, performance monitoring, and evaluation of USAID's ongoing work in Iraq. The United States, along with its coalition partners and various international organizations and donors, has undertaken an enormously complex, costly, and challenging effort to rebuild Iraq. 
At the early stages of these efforts, agency procurement officials were confronted with little advance warning on which to plan and execute competitive procurement actions, an urgent need to begin reconstruction efforts quickly, and uncertainty as to the magnitude of the work required. Their actions, in large part, reflected proper use of the flexibilities provided under procurement laws and regulations to award new contracts using other than full and open competitive procedures. With respect to several task orders issued under existing contracts, however, some agency officials overstepped the latitude provided by competition laws by ordering work outside the scope of the underlying contracts. This work should have been separately competed, or justified and approved at the required official level for performance by the existing contractor. Given the war in Iraq, the urgent need for reconstruction efforts, and the latitude allowed by the competition law, these task orders reasonably could have been supported by justifications for other than full and open competition. In some cases, such as the task order for the Iraqi media capability, the work has been completed, so there is no practical remedy available. In several other cases, however, the opportunity exists to bring task orders into compliance with requirements, as well as to ensure that future task orders are issued properly. Providing effective contract administration and oversight remains challenging, in part due to the continued expansion of reconstruction efforts, staffing constraints, and the need to operate in an insecure and threatening environment. Indeed, the magnitude of work that remains undefinitized is symptomatic of changing requirements and the lack of sufficient agency and contractor resources. Nevertheless, timely definitization of outstanding contracts and task orders is needed to promote effective cost control. 
More broadly, these challenges suggest the need to assess the lessons learned from the contract award and administration processes in Iraq to identify ways to improve similar activities in the future. It is too early to gauge whether the CPA approach to improving its ability to monitor and coordinate reconstruction efforts through the use of a new program management office and the planned award of various types of construction and management support contracts will be effective. However, recent congressional action requiring the CPA Administrator and heads of federal agencies to report on contracts awarded using other than full and open competition will provide more transparency and accountability in the award of new Iraq reconstruction contracts. To ensure that task orders issued to rebuild Iraq comply with applicable requirements, and to maximize incentives for the contractors to ensure effective cost control, we recommend that the Secretary of the Army take the following four actions:

Review the out-of-scope task orders for Iraqi media and subject matter experts issued by the Defense Contracting Command-Washington and take any necessary remedial actions.

Ensure that any future task orders under the LOGCAP contract for Iraq reconstruction activities are within the scope of that contract.

Address and resolve all outstanding issues in connection with the pending Justifications and Approvals for the contracts and related task orders used by the Army Corps of Engineers to restore Iraq's electricity infrastructure.

Direct the Commanding General, Army Field Support Command, and the Commanding General and Chief of Engineers, U.S. Army Corps of Engineers, to definitize outstanding contracts and task orders as soon as possible.

To improve the delivery of acquisition support in future operations, we recommend that the Secretary of Defense, in consultation with the Administrator, U.S. 
Agency for International Development, evaluate the lessons learned in Iraq and develop a strategy for assuring that adequate acquisition staff and other resources can be made available in a timely manner. DOD and the Department of State provided written comments on a draft of this report. Their comments are discussed below and are reprinted in appendixes III and IV. USAID concurred with the draft report as written. USAID's response is reprinted in appendix V. GSA also provided comments regarding its efforts to ensure that agencies properly use the federal supply schedule program. GSA's comments are reprinted in appendix VI. DOD generally concurred with our recommendations. DOD noted that it is in the process of taking appropriate remedial actions on the task orders issued by the Defense Contracting Command-Washington, and is resolving outstanding issues related to the task orders issued by the Army Corps of Engineers to restore Iraq's electrical infrastructure. As part of its efforts to definitize contracts, DOD noted that the Army Field Support Command has, among other things, established firm dates for the submission of contractor proposals and for the completion of negotiations. DOD also noted that progress on these efforts is being reviewed by senior Command officials on at least a weekly basis. DOD did not indicate, however, what steps the Army Corps of Engineers is taking to definitize the actions for which it is responsible. As we noted in the report, the Army Corps of Engineers had two undefinitized contracts on which it had obligated more than $1.5 billion as of March 2004. Lastly, DOD reported that efforts are already underway to conduct a study to evaluate the lessons learned in Iraq and develop a strategy for assuring that adequate staff and other resources can be made available. DOD partially concurred with our recommendation to ensure that future task orders issued on the LOGCAP contract are within the scope of that contract. 
DOD noted that the LOGCAP contracting officer reviews each proposed scope of work and determines whether the action is within the scope of the contract, and obtains legal advice as needed. DOD also noted that the recommendation appeared to be based on only one action, namely the task order for contingency planning for the Iraq oil infrastructure mission. We also expressed concern, however, about whether the task orders to provide logistical support for the CPA and to the New Iraqi Army training program were within the scope of the underlying LOGCAP contract. Consequently, the steps taken by the contracting officer—while necessary and appropriate—may not be sufficient to ensure that work outside the scope of the LOGCAP contract is either competed or properly justified. DOD provided two comments on our findings. First, DOD took exception to our observations on the manner by which the Deputy Secretary limited competition for contracts awarded in fiscal year 2004 to firms from the United States, Iraq, coalition partners and force contributing nations. DOD noted that the Deputy Secretary has broad authority from the Secretary to act on his behalf, which we do not dispute. We note, however, that the plain language of the law provides that authority to approve public interest exceptions may not be delegated. While the Deputy Secretary may have broad authority to act on the Secretary’s behalf, he was not authorized to do so in this case. Second, regarding our conclusion that the LOGCAP contingency planning order was not within the scope of the contract, DOD commented that our conclusion should be couched in terms of opinion. While legal analysis by its nature reflects opinion, we remain convinced of our conclusion and emphasize the need for more analytical rigor in the review of LOGCAP task orders. The Department of State disagreed with our assessment that the authority it cited to limit competition may not be a recognized exception to competition requirements. 
The department believed that the authority it cited—section 481 of the Foreign Assistance Act of 1961, as amended—was used appropriately. The specific section of the Act cited by the department—section 481(a)(4)—speaks to the authority of the President to furnish assistance to a country or international organization, but does not provide relief from statutory competition requirements. In its comments and in earlier discussions, State did not provide us with a persuasive basis to conclude that the authority is a recognized exception to the competition requirements. However, we did not need to resolve the issue because State appears to have maximized competition under the circumstances, and we believe State could have used other recognized exceptions, such as 40 U.S.C. §113(e), to meet its requirements. This authority permits the waiver of competitive contracting procedures when use of those procedures would impair foreign aid programs. GSA recognized that it has a responsibility to ensure that agency personnel are adequately trained in the proper use of the federal supply schedule program. GSA noted that it has been working with DOD and other federal agencies to ensure that their contracting officers are fully trained on the proper use of the program and identified some of its ongoing and planned efforts toward this objective. We are sending copies of this report to the Director, Office of Management and Budget; the Secretaries of Defense and State; the Administrator, U.S. Agency for International Development; the Commanding General and Chief of Engineers, U.S. Army Corps of Engineers; the Director, Defense Contract Management Agency; and the Director, Defense Contract Audit Agency. We will make copies available to others on request. In addition, this report will be available at no charge on GAO's Web site at http://www.gao.gov. The major contributors to this report are listed in appendix VII. 
If you have any questions about this report, please contact me on (202) 512-4841 or Timothy DiNapoli on (202) 512-3665. During the course of the review, we contacted the following organizations: Office of Management and Budget, Washington, D.C.; Office of Reconstruction and Humanitarian Assistance, Washington, D.C.; Coalition Provisional Authority, Washington, D.C., and Baghdad, Iraq; Department of Defense, the Comptroller, Pentagon, Washington, D.C.; Department of the Army, Pentagon, Washington, D.C.; Washington Headquarters Services, Pentagon, Washington, D.C.; Defense Contracting Command-Washington, Pentagon, Washington, D.C.; Southern Region Contracting Center, Army Contracting Agency, Fort Northern Region Contracting Center, Army Contracting Agency, Fort Army Field Support Command, Rock Island, Illinois; Headquarters, U.S. Army Corps of Engineers, Washington, D.C.; U.S. Army Engineer Division, Southwestern, Dallas, Texas; U.S. Army Engineer District, Fort Worth, Fort Worth, Texas; Engineering and Support Center, Huntsville, Alabama; Transatlantic Program Center, Winchester, Virginia; U.S. Army Engineer District, Philadelphia, Philadelphia, Pennsylvania; and Vicksburg Consolidated Contracting Office, Alexandria, Virginia; Defense Information Systems Agency, Arlington, Virginia, and Scott Air Force Base, Illinois; Department of Defense, Inspector General, Arlington, Virginia; Defense Contract Management Agency, Alexandria, Virginia; Defense Contract Audit Agency, Fort Belvoir, Virginia; U.S. Agency for International Development, Washington, D.C.; U.S. Department of State, Washington, D.C.; and U.S. Department of Justice, Washington, D.C. The following are GAO’s comments on the Department of Defense’s letter dated May 13, 2004. 1. The Department of Defense (DOD) incorrectly noted that the recommendation was based on only one instance. 
In addition to the example cited by DOD, we also expressed concern about whether the task orders to provide logistical support for the Coalition Provisional Authority (CPA) and to the New Iraqi Army training program were within the scope of the underlying Logistics Civil Augmentation Program (LOGCAP) contract. 2. The actions being taken by the Army Field Support Command are positive steps to monitor progress in reaching agreement on the contracts’ key terms and conditions. DOD did not indicate, however, what steps the Army Corps of Engineers was taking to definitize the actions for which the Corps is responsible. As noted in table 7, the Army Corps of Engineers had two undefinitized contracts on which it had obligated more than $1.5 billion as of March 2004. 3. DOD asserts that the determination and finding was not made on a class basis because it included a common justification for 26 specifically identified “particular procurements.” The Federal Acquisition Regulation (FAR) provides for determinations and findings for individual contract actions (FAR 1.702) and for a class of contract actions (FAR 1.703). Because the determination and finding encompasses 26 contract actions, we conclude that it is a class determination and finding. Specifically enumerating members of the class does not alter the fundamental fact that it is for more than one action. Class determinations and findings are specifically prohibited by FAR 6.302-7(c)(4). As to the question of authority to execute the determination and finding, we do not dispute that the Deputy Secretary has broad authority to act on behalf of the Secretary. We note, however, that the plain language of the law provides that authority to approve public interest exceptions may not be delegated and conclude that the Deputy Secretary did not have authority in this instance. 4. Legal analysis by its nature reflects opinion. 
In the opinion of GAO, the Army, and DOD, the actual restoration of Iraqi oil infrastructure was not within the scope of the LOGCAP contract. We also noted that the LOGCAP contract anticipates contingency planning for work that can be executed under the contract. In other words, contingency planning is within the scope of the contract only if the actual work is also within the scope of the contract. In this instance, all parties agree that actual restoration of the oil infrastructure was not within the scope of the contract. Consequently, we conclude that planning the oil infrastructure restoration was also not within the scope of the contract. We would encourage the contracting officer to continue to obtain legal assistance given the complexity of the LOGCAP contract, but we also believe that DOD needs to ensure analytical rigor in its review of task orders. Major contributors to this report were Robert Ackley, Ridge Bowman, Carole Coffey, Muriel Forster, Glenn D. Furbish, Charles D. Groves, John Heere, Chad Holmes, John Hutton, Ronald Salo, Karen Sloan, Lillian Slodkowski, Steve Sternlieb, Susan Tindall, Adam Vodraska, and Tim Wilson.
Congress has appropriated more than $20 billion since April 2003 to support rebuilding efforts in Iraq. This complex undertaking, which is occurring in an unstable security environment and under significant time constraints, is being carried out largely through contracts with private-sector companies. As of September 2003, agencies had obligated nearly $3.7 billion on 100 contracts or task orders under existing contracts. Given widespread congressional interest in ensuring that reconstruction contracts are awarded properly and administered effectively, GAO reviewed 25 contract actions that represented about 97 percent of the obligated funds. GAO determined whether agencies had complied with competition requirements in awarding new contracts and issuing task orders and evaluated agencies' initial efforts in carrying out contract administration tasks. Agencies used sole-source or limited competition approaches to issue new reconstruction contracts, and when doing so, generally complied with applicable laws and regulations. Agencies did not, however, always comply with requirements when issuing task orders under existing contracts. For new contracts, the law generally requires the use of full and open competition, where all responsible prospective contractors are allowed to compete, but permits sole-source or limited competition awards in specified circumstances, such as when only one source is available or to meet urgent requirements. All of the 14 new contracts GAO examined were awarded without full and open competition, but each involved circumstances that the law recognizes as permitting such awards. For example, the Army Corps of Engineers properly awarded a sole-source contract for rebuilding Iraq's oil infrastructure to the only contractor that was determined to be in a position to provide the services within the required time frame. The Corps documented the rationale in a written justification, which was approved by the appropriate official. The U.S. 
Agency for International Development properly awarded seven contracts using limited competition. The Department of State, however, justified the use of limited competition by citing an authority that may not be a recognized exception to competition requirements, although a recognized exception could have been used. There was a lesser degree of compliance when agencies issued 11 task orders under existing contracts. Task orders are deemed by law to satisfy competition requirements if they are within the scope, period of performance, and maximum value of a properly awarded underlying contract. GAO found several instances where contracting officers issued task orders for work that was not within the scope of the underlying contracts. For example, to obtain media development services and various subject matter experts, the Defense Contracting Command-Washington placed two orders using a management improvement contract awarded under the General Services Administration's schedule program. But neither of the two orders involved management improvement activities. Work under these and other orders should have been awarded using competitive procedures or, due to the exigent circumstances, supported by a justification for other than full and open competition. The agencies encountered various contract administration challenges during the early stages of the reconstruction effort, stemming in part from inadequate staffing, lack of clearly defined roles and responsibilities, changing requirements, and security constraints. While some of these issues have been addressed, staffing and security remain major concerns. Additionally, the Army and its contractors have yet to agree on key terms and conditions, including the projected cost, on nearly $1.8 billion worth of reconstruction work that either has been completed or is well under way. Until contract terms are defined, cost risks for the government remain and contract cost control incentives are likely to be less effective.
A number of human capital management tools and flexibilities are available to assist agencies in their recruitment and hiring processes. The competitive service and excepted service hiring approaches provide different and complementary ways to acquire employees in the general schedule (GS) grades of GS-15 and below. In addition, agencies may also appoint individuals to SES positions through a competitive process or make noncareer SES appointments without competition. While recognizing the need for flexibility in hiring employees, the federal government also seeks to assure that appointments are based on merit. The Civil Service Reform Act of 1978 (P.L. 95-454) set out a number of merit system principles and required that federal personnel management be implemented in a manner consistent with those principles. The principle pertaining to appointments states that “[R]ecruitment should be from qualified individuals from appropriate sources in an endeavor to achieve a work force from all segments of society, and selection and advancement should be determined solely on the basis of relative ability, knowledge, and skills, after fair and open competition which assures that all receive equal opportunity.” OPM is responsible for implementing the Civil Service Reform Act and other personnel-related laws and for developing regulations to ensure that the intent of merit system principles is implemented. OPM delegated examining authority to Treasury—on behalf of Customs—on December 20, 1995. The delegated examining authority requires Customs to conduct competitive examinations that comply with merit system laws and regulations as interpreted by OPM’s Delegated Examining Operations Handbook. The majority of the federal civilian workforce obtained their positions by competing against others through the competitive service examination process. 
The examination process is one of the processes intended to assure that merit system principles are complied with and includes notifying the public that the government will accept applications for a job, rating applications against minimum qualification standards, and assessing applicants’ relative competencies or knowledge, skills, and abilities (KSAs) against job-related criteria to identify the most qualified applicants. The law provides for necessary exceptions to the competitive service process, however, when conditions of good administration warrant. OPM has been delegated broad authority for excepting positions from the competitive service when it determines that competitive examination for those positions is impracticable. “Impracticable to examine” means that it is impractical or unreasonable to apply the qualification standards and procedural and other requirements established for the competitive service. If OPM decides a position should be in the excepted service, OPM authorizes the positions to be filled by excepted appointment under Schedules A, B, or C. The Schedule A authority is for positions in the excepted service, other than those of a confidential or policy-determining character, for which it is impracticable to hold a competitive examination and the appointments are not subject to the basic qualification standards established by OPM. While there are many types of Schedule A positions excepted by OPM governmentwide, such as chaplains, attorneys, and certain positions for which a critical hiring need exists, agencies may also petition OPM to establish Schedule A appointing authority specifically applicable to their agency. OPM decides on such single-agency requests on a case-by-case basis. Treasury’s Schedule A authority for Customs Service positions has been approved and amended several times over a period of almost 35 years. 
For example, in 1963, OPM approved Treasury’s request for single-agency Schedule A appointing authority for 25 criminal investigator positions at Customs. In 1991 and 1997, OPM amended Customs’ Schedule A authority by increasing the number of criminal investigator positions at Customs to 200 and 300 positions, respectively. Individuals appointed to Schedule A positions may be converted to publicly announced competitive service positions by competing for those positions. In addition, employees in certain types of Schedule A positions—such as positions filled by mentally retarded and severely physically handicapped persons—may be converted without competition to competitive service positions upon the completion of 2 years of satisfactory service. Employees in Schedule A positions may also compete for career appointments to SES positions, or be given noncareer or limited-term SES appointments noncompetitively. Depending on the position and the status of the individual, the noncompetitive appointments may require OPM authorization and approval from the White House’s Office of Presidential Personnel. The agency appointing official determines whether an individual meets the qualification requirements of the SES position. Treasury requested, on August 6, 1998, that OPM amend Customs’ Schedule A authority to include 10 positions for oversight policy and direction of sensitive law enforcement activities. Treasury’s request letter—on behalf of Customs—explained that Customs had almost reached its limit of 300 criminal investigator positions previously authorized by its Schedule A authority. Treasury requested an amendment to the authority to authorize Customs to fill an additional 10 positions and to broaden the authority so that the positions included providing oversight and direction to sensitive law enforcement projects and coordinating such initiatives with other federal agencies at the national level, including undercover and intelligence work. 
The justification for including the additional 10 positions under a Schedule A authority was that “due to the sensitive nature of the operations, these positions require a unique blend of special characteristics, skills and abilities that cannot be announced to the general public, and for which it is not practicable to examine.” OPM approved the authority for the 10 positions on August 21, 1998. In making the approval, OPM officials said that they had evaluated Treasury’s request and had based their decision on (1) Treasury’s assessment of whether the positions were impracticable to examine for and (2) prior Schedule A approvals that OPM had granted for circumstances similar to Treasury’s newly requested positions. Neither statute nor regulations define detailed criteria for determining which positions are impracticable to examine for. OPM officials told us they use their knowledge of the duties and responsibilities of the positions and the examination process to make a judgment as to which positions are impracticable to examine for. The OPM official who reviewed the request and recommended its approval said that her determination that it was impracticable to examine for such positions was based primarily on Customs’ assertion that the positions were sensitive in nature and involved law enforcement activities. In addition, an OPM official said that the positions, as generally described in the request, were similar to law enforcement type positions that OPM had approved in Treasury’s Office of the Under Secretary for Enforcement. In amending the Schedule A authority, OPM did not restrict its use to any specific job or occupational series. OPM said that granting general authority—without specific positions or occupational series—is a standard OPM practice. 
For example, OPM approved Schedule A authority at Treasury for no more than 20 positions supplementing permanent staff studying international financial, economic, trade, and energy policies and programs, with employment not to exceed 4 years. Customs appointed nine individuals to Schedule A positions using the amended authority granted by OPM between the time of OPM’s approval on August 21, 1998, and the most recent appointment on January 14, 2001. These appointments were two identical law enforcement specialists, a public affairs specialist, a law enforcement appropriations officer, a strategic trade adviser, a special assistant in the Office of Internal Affairs, a program manager for air interdiction, a deputy executive director of air/marine interdiction, and a senior adviser on various aspects of oversight policy and direction of law enforcement activities. Customs can currently create 4 new Schedule A positions under the authority for the 10 positions because 1 position was not filled and the positions of the 3 appointees who subsequently converted to competitive service positions were terminated and no longer exist. Although the circumstances surrounding the initial appointments of six individuals did not appear inconsistent with the authority, events subsequent to three appointments were apparently inconsistent with the justification Treasury used in first requesting the authority and may have provided two of the three appointees with an unfair competitive advantage. In addition, while the six initial Schedule A appointments did not appear inconsistent with the authority, the timing of two of the six appointments could give the appearance of political favoritism. The circumstances resulting in two Schedule A appointees eventually being hired into identical competitive service positions appear to have been inconsistent with the justification used in the original request for the Schedule A authority. 
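The count of positions Customs can still create under the 10-position authority follows from simple arithmetic on figures stated in this report (a sketch for illustration only; the variable names are ours, not the agencies'):

```python
# Illustrative check of the Schedule A slot arithmetic: 10 positions were
# authorized, 9 were filled, and the positions of the 3 appointees who
# converted to the competitive service were terminated.
authorized = 10                        # positions under the amended authority
appointed = 9                          # individuals actually appointed
never_filled = authorized - appointed  # 1 slot was never used
terminated = 3                         # converted appointees' positions ended
available = never_filled + terminated
print(available)  # 4 new Schedule A positions Customs could create
```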
The appointees—a law enforcement specialist and a public affairs specialist—were initially placed in Schedule A positions that Treasury’s request to OPM asserted could not be announced to the public and were not practicable to examine for. After the appointees performed the duties and responsibilities of these positions for over a year, Customs created identical competitive service positions, advertised the positions, examined applicants, and filled the positions with the two Schedule A appointees. Customs’ ability to ultimately hold a competition for these positions appears to have conflicted with Customs’ original justification when requesting the Schedule A authority. Further, the appointees may have gained an unfair competitive advantage while serving in the Schedule A positions. Customs’ officials stated that when it was decided that the law enforcement and public affairs specialist positions were needed, no such positions existed at Customs. Accordingly, to facilitate hiring, given the unique combination of skills required for the positions, they believed that using the Schedule A authority was appropriate and the best mechanism. About 13 months after appointing one of the law enforcement specialists and about 19 months after appointing the public affairs specialist, Customs determined that these positions should be established as competitive service positions identical to their former Schedule A positions. Customs officials justified competitively recruiting for the law enforcement position because the former Commissioner had determined that there was a permanent need for the position to improve Customs’ efforts in combating various types of crimes. 
Similarly, the public affairs position was created in the competitive service because the Assistant Commissioner for Public Affairs determined there was a permanent need for the position (1) to establish and maintain effective working relationships with top officials of other federal, state, and local agencies and members of the media and (2) to oversee Customs’ antidrug enforcement public affairs program. While not disagreeing that the positions could have been advertised competitively originally, Customs officials stated they acted in good faith in initially making the appointments under the Schedule A authority to meet their goal of hiring needed expertise as quickly as possible. The plans that Customs ultimately developed for examining the applicants for each of the two competitive positions contained criteria or standards for measuring the relative qualifications of the applicants, including the KSAs considered essential for successful or enhanced performance of the position’s duties and responsibilities. For example, in the law enforcement specialist plan, one KSA on which applicants were evaluated was “knowledge of federal laws, regulations, and procedures and, specifically, criminal laws enforced by Customs.” Customs advertised the law enforcement specialist and public affairs specialist positions to the public on February 4 and August 14, 2000, respectively. Customs officials said that both positions were announced to the public to attract and retain the most highly qualified candidates. The selection processes for both positions were completed in about 2 months, and the law enforcement and public affairs incumbents—who had originally been hired as Schedule A appointees—were appointed on April 23, 2000, and October 8, 2000, respectively. 
Although the circumstances suggest that the two appointees may have received an unfair competitive advantage by serving in positions with the same duties for more than a year prior to the competitive positions being announced publicly, we could not determine that they would not ultimately have been selected for the competitive service positions based on their education and work experiences prior to coming to Customs. Customs’ officials acknowledged that the duties and responsibilities of the Schedule A law enforcement and public affairs specialist positions, and the KSAs needed to perform those positions, had not changed so as to provide a basis for Customs to announce the positions to the public and examine the applicants. Rather, Customs’ Director of Executive Services Staffing said that the use of the Schedule A appointment authority for the law enforcement specialist and public affairs specialist positions originally was appropriate because, in addition to the unique combination of KSAs required, there was an urgent need to fill such positions in an expeditious manner. However, neither Treasury’s request for the appointment authority nor OPM’s approval of the authority cited urgency as a reason. Furthermore, OPM officials stated that urgency was not a factor they had considered in granting the approval. While the first appointment was made within 1 month of the authority’s being granted, the next two appointments were not made until January 1999—about 5 months after the authorization was granted. The three original Schedule A positions no longer exist at Customs, and Customs has stated that it has no plans to convert any of the remaining six positions or any future positions created under the authority to the competitive service. Two other Customs’ appointments were questioned at the department level. 
Treasury officials told us they were concerned about whether the positions complied with Schedule A authority and whether the timing of the appointments of two former noncareer SES appointees—just before the new Administration took office—gave the appearance of political favoritism. While appointments of noncareer individuals to permanent positions late in an Administration do not violate merit system principles, they could give the appearance of political favoritism. The effect of these appointments was to move two employees from noncareer SES appointments—from which incumbents are usually asked to resign upon the advent of a new Administration—to Schedule A appointments in positions which have some similar duties and indefinite tenure. Customs’ officials stated that the two employees would not have appeal rights until they have served 2 years under their current Schedule A appointments, which will not occur until January 2003. Treasury’s Deputy Assistant Secretary for Human Resources said that she was contacted on two occasions in January 2001 by Customs’ Assistant Commissioner for Human Resources regarding the pending appointments of two noncareer SES employees to the Schedule A positions of law enforcement appropriations officer and strategic trade adviser. As a result of these discussions, the Deputy Assistant Secretary said that she was concerned about whether the appointments would “withstand the scrutiny inherent in the procedures in place for the review of conversions of political employees,” given that the former positions were political appointments and a new Administration would be taking office on January 20, 2001. Customs’ officials said they believed the appointments—made on January 14, 2001—were in full conformance with civil service laws, rules, and regulations. The newly created Schedule A positions included some similar—but not identical—duties to those of the SES positions of the political, noncareer appointees. 
For example, the law enforcement appropriations officer’s responsibilities, which included establishing and maintaining effective working relationships with congressional staff, were similar but not identical to responsibilities of the appointee’s former SES position to maintain contact with congressional staff whose actions have a direct bearing on Customs’ programs and policies. Similarly, the responsibilities of the strategic trade adviser’s Schedule A position included advising the Assistant Commissioner and other Customs’ executives regarding trade enforcement strategies and programs, while the responsibilities of the former SES position included providing executive-level advice and counsel in planning long-range regulatory programs. In early February 2001, the Deputy Assistant Secretary expressed her concerns to the Associate Director of OPM’s Employment Service about whether the appointments complied with the Schedule A authority and whether the timing of the appointments could give the appearance of political favoritism. Subsequently, the Deputy Assistant Secretary referred the appointments to OPM and asked OPM to determine whether the appointments were an appropriate use of the authority. One month later, on March 12, 2001, OPM informed the Treasury that both appointments were within the scope of the authority. OPM officials said they based their determination on justifications provided by Customs that described several special skills and abilities required for each position that were impracticable to examine and that OPM did not consider the political nature of the employees’ former positions or the appointments. 
Notwithstanding the issues discussed above, the conversions of the three Schedule A employees—who had been in Schedule A positions identical to two newly created competitive service positions—to competitive service positions complied with merit system principles, and the appointment of another Schedule A appointee to a limited-term SES position complied with OPM guidance. Two of the appointees were converted to positions—a public affairs specialist and a law enforcement specialist—with duties and responsibilities identical to those they had filled under the Schedule A authority. The third appointee—who also had previously been a law enforcement specialist—converted to a different position, chief of staff. OPM regulations do not address the conversion of excepted service appointees to the identical positions in the competitive service if conducted in compliance with merit system principles. Records in Customs’ merit-staffing files and other agency documents indicated that merit-staffing procedures had been followed and the principal requirements had been met for each of the three competitive selections. For example, each of the job announcements for the three positions was open for 5 business days—meeting OPM’s minimum requirement for an open period. In addition, Customs made the competitive area of consideration for each of the three job announcements worldwide and open to all qualified candidates. Customs also complied with OPM regulations for the selection of eligible applicants, including the regulation that the selectee must be from among the three highest-rated applicants; otherwise, the agency must provide a justification. For all three positions, the selectees were the top-rated applicants for those positions. As also required by OPM regulations, the selecting officials for each position did not participate in rating or ranking the applicants for that position. 
The establishment of an SES position—senior adviser—to be filled by a limited-term appointment and the selection of a Schedule A appointee also complied with OPM requirements for such SES appointments. OPM regulations require the position’s term to be limited to 3 years or less and the position’s duties and responsibilities to be primarily for project-type activities that will expire in 3 years or less. Customs limited the term of the position to 3 years, and most of the position’s duties appeared likely to be completed within 3 years or less. For example, the incumbent is responsible for implementing programs to ensure integrity and credibility within the Office of Internal Affairs. The performance of such activities could be completed within a limited period of time. OPM’s guidance governing selection of an appointee does not require competition for such appointments, but it does require the appointee to be qualified for the position. In compliance with these regulations, Customs’ appointing official determined that the individual’s KSAs met the qualifications necessary to perform the duties and responsibilities of the position and made the selection. OPM conducts oversight of federal agencies’—including Customs’—single- agency Schedule A appointments and determines whether those appointments comply with Schedule A authority. However, OPM’s sampling of Customs’ appointments did not include any of the appointments made under the Schedule A authority. The occasional OPM survey that addresses, in part, the continuing need for Schedule A authority was last conducted at Customs in 1998, prior to OPM’s approval of Customs’ authority for 10 positions. Only the review conducted at Treasury’s request addressed any of the nine positions. OPM found the two positions to be within the scope of Customs’ authority. 
The Civil Service Reform Act of 1978 requires OPM to carry out an oversight program to ensure that agencies exercise their personnel management authorities in accordance with merit system principles and with the law and regulations that implement those principles. OPM’s Office of Merit Systems Oversight and Effectiveness (OMSOE) performs periodic oversight reviews of each agency’s human capital practices, including the use of excepted service appointment authorities. Each of the departmental agencies and independent agencies with larger numbers of employees—including the Department of the Treasury—is subject to review every 4 years, and each of the smaller independent agencies is reviewed every 5 years. However, each office within an agency—such as Customs—is not necessarily reviewed each time the agency is reviewed. The Assistant Director for OMSOE said that each audit team determines which offices within an agency will be reviewed based on a pre-site assessment of prior audit reports and other sources. OMSOE conducted an oversight review of Customs’ appointments made during 1999 and issued a report in November 2000. The overall objective of the review was to examine how managers, supervisors, and human capital specialists work together to make decisions that support the mission of the agency, contribute to public policy objectives, and are consistent with merit system principles. The review covered three broad areas—staffing, workforce management, and human capital management accountability. As part of the staffing review, OMSOE reviewed appointment authorities granted to Customs by OPM. In reviewing these authorities, OMSOE performed its standard audit procedure of selecting a judgmental sample of the appointments for review. The two appointments that Customs had made under its Schedule A authority at the time of the review of 1999 appointments were not selected as part of OPM’s sample. 
The Assistant Director of OMSOE stated that OMSOE uses “problem oriented” sampling to select appointments. That means that if OMSOE officials have identified problems with a specific type of appointment through such sources as employee complaints and periodic employee attitude surveys, the audit team will include some of those appointments in the sample of appointments it reviews. For example, during the Customs review, OMSOE officials said that they randomly selected 9 of 174 Veterans Readjustment Act appointments for that reason. The audit team’s decision to limit its review to nine cases, according to an OMSOE auditor, was based on the time and resources available. During the entire review of Customs, OMSOE sampled 54 (or about 3 percent) of approximately 2,061 appointments, including 15 Schedule A appointments, made during 1999. The Assistant Director said that because of the limited sample, any conclusions developed from the analysis of appointments sampled could not be projected as being representative of all the appointments for the organization as a whole. The Assistant Director believes the judgmental sampling technique is adequate because OMSOE is looking for systemic problems. OPM’s oversight of appointment authorities also includes occasional surveys by OPM’s Employment Service. These surveys are not conducted regularly and, in the case of Customs, were conducted most recently in 1982 and 1998. The surveys primarily consisted of OPM’s requesting that Treasury justify the continuing need for each of its appointment authorities. Treasury’s response to OPM’s survey in July 1998 addressed the continuing need for Customs’ single-agency authority for criminal investigators. The response did not apply to the 10 positions because the authority for those positions was not granted to Customs until August 21, 1998, after the survey. 
As discussed previously, at Treasury’s request, OPM reviewed the use of Customs’ Schedule A authority for two of the nine appointments made. That review was limited to an assessment of whether the positions’ duties and responsibilities appeared to comply with the criteria for the authority. We obtained comments on this report from the Director, OPM, and the Deputy Assistant Secretary for Human Resources, Treasury, responding on behalf of the Secretary of the Treasury. The Director said that OPM agreed with the report’s conclusions that the appointments complied with merit system principles and OPM guidance. The Director also expressed concern about the appearance of political favoritism that surrounded two appointments and consequently planned to conduct a review. In addition, because our report presented information from Customs’ representatives that appeared inconsistent with Customs’ original justification for the Schedule A authority, OPM indicated it plans to review the basic justification for that authority. Treasury’s Deputy Assistant Secretary for Human Resources provided technical comments that were incorporated in the report where appropriate. OPM’s comments are reprinted in appendix II. We performed our work in Washington, D.C., from October 2000 through May 2001 in accordance with generally accepted government auditing standards. Additional information on the scope and methodology of our review is presented in appendix I. As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after the date of this letter. At that time, we will send copies to the Secretary of the Treasury and the Director, OPM. We will also make copies available to others on request at that time. Major contributors to this report were Richard W. Caradine, Assistant Director; Thomas C. Davies, Jr., Project Manager; and John Ripper, Senior Analyst. 
If you or your staff have any questions about this report, please contact Mr. Davies or me at (202) 512-9490. In order to determine the nature of and reasons for Treasury’s request for Customs’ Schedule A appointment authority for 10 positions, we asked Customs’ officials to explain their justification for requesting the authority. We also reviewed the request letters and supporting documentation that the Treasury Department submitted to OPM on behalf of Customs. To determine OPM’s process for reviewing Treasury’s request, we interviewed OPM officials, obtained and reviewed documents related to OPM’s review and justification for approving the request, and obtained and reviewed the pertinent laws and Code of Federal Regulations (C.F.R.) governing the granting of Schedule A authorities. To identify the circumstances surrounding the appointments Customs made under the Schedule A authority and whether Customs used the authority appropriately, we obtained from OPM and Customs listings of all Schedule A appointments made under the single-agency Schedule A authority (5 C.F.R. 213.3105 (b)(6) (1964)) from August 21, 1998—the date the authority was approved—until May 31, 2001, and the related personnel files. Based on our reviews of these files and interviews with Customs’ officials, we identified the dates that the positions were created and appointments were made, the titles and grades of the positions, and the duties and responsibilities of each position. At our request Customs provided justifications explaining why each of these positions met the criteria for its Schedule A authority. We also reviewed laws, regulations, and procedures that govern the use of Schedule A authority and determined that the regulations do not specify criteria for determining when it is impracticable to examine for a position. 
We compared the general requirements to the position’s duties and responsibilities and exercised professional judgment in assessing whether appointments appeared to be an appropriate use of the single-agency Schedule A authority. In addition, we met with Treasury Department officials to discuss Customs’ appointment of two of the agency’s noncareer SES employees to the Schedule A positions of strategic trade adviser and appropriations officer. We did this because Treasury officials had expressed concern regarding the two appointments and had contacted OPM regarding whether these appointments would be appropriate under Customs’ single-agency authority. We also discussed with OPM officials their decisions regarding these two appointments. To determine whether Customs complied with merit system principles in converting three of its nine Schedule A appointees to competitive service positions, we applied the merit system principles established in Title 5 of the United States Code (5 U.S.C. 2301) and OPM’s interpretation of those principles in its Delegated Examining Operations Handbook. The handbook describes the procedures and requirements intended to ensure compliance with the merit system objective of fair and open competition. We compared the handbook’s requirements to actions taken by Customs in announcing the positions, evaluating the applicants, and making the selections. In addition, to assess Customs’ appointment of a Schedule A appointee to an SES limited-term appointment, we applied the SES appointment criteria in OPM’s Guide to the Senior Executive Service to the circumstances surrounding the appointment. To describe OPM’s oversight process and the extent of its oversight of Customs’ Schedule A appointments, we interviewed OPM officials and obtained copies of OPM’s Oversight Evaluation Handbook, audit reports of Customs, and OPM’s correspondence with Customs concerning the Schedule A authority, from the date the authority was approved through May 31, 2001.
The Treasury Department, on behalf of the Customs Service, requested Office of Personnel Management (OPM) approval for Schedule A appointment authority for 10 positions for oversight policy and direction of sensitive law enforcement activities. Treasury's request stated that "due to the sensitive nature of the operations, these positions require a unique blend of special characteristics, skills and abilities that cannot be announced to the general public, and for which it is not practical to examine." According to OPM officials, no detailed criteria are applied when OPM considers such requests. OPM approved the request primarily because Treasury argued that the positions were sensitive in nature, involved law enforcement activities, and were impracticable to advertise and examine for. In using the Schedule A authority between September 1998 and January 2001, Customs made nine appointments to various positions. GAO found that circumstances surrounding five of the nine appointments could give the appearance of inconsistency in the application of the Schedule A appointment authority or possible favoritism toward former political employees. OPM reviews agencies' use of appointment authorities, including Schedule A and other excepted appointments, every 4 to 5 years. The most recent review of Customs was for appointments made in 1999. OPM also conducts occasional surveys that require agencies to justify the continuing need for each of their appointment authorities.
Since 1992, physicians in Medicare have been paid under a national fee schedule in conjunction with a system of spending targets. Under the design of the fee schedule and target system, annual adjustments (updates) to physician fees depend, in part, on whether actual spending has fallen below or exceeded the target. Fees are permitted to increase at least as fast as the costs of providing physician services as long as the growth in volume and intensity of physician services remains below a specified rate—currently, a little more than 2 percent a year. If spending associated with volume and intensity grows faster than the specified rate, the target system reduces fee increases or causes fees to fall. The target system in place today, called the sustainable growth rate (SGR) system, was implemented in 1998. This system acts as a blunt instrument in that all physicians are subject to the consequences of excess spending—that is, downward fee adjustments—that may stem from the excessive use of resources by some physicians relative to their peers. Medicare spending on Part B physician services has grown rapidly in recent years. From 2000 through 2005, program spending for Part B FFS physician services grew at an average annual rate of 9.8 percent, outpacing average annual Medicare aggregate spending growth of 8.7 percent for this period. Since 2002, actual Medicare spending on physician services has exceeded SGR targets, and the SGR system has called for fee cuts to offset the excess spending. However, the cuts were overridden by administrative action or the Congress five times during this period. In a 2004 report on the SGR system, we found that possible options to modify or eliminate the system would increase the growth in cumulative spending over a 10-year period, usually by double-digit percentages. 
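The spending-target mechanism described above can be sketched in simplified form. The function below is an illustration only: the parameter names, the single-year comparison of actual with target spending, and the adjustment bounds are assumptions for exposition, not the statutory SGR formula.

```python
# Simplified sketch of a spending-target fee update, loosely modeled on the
# SGR mechanism described in the text. The bounds and the one-year gap
# calculation are illustrative assumptions, not the statutory formula.

def fee_update(cost_growth, actual_spending, target_spending,
               max_penalty=-0.07, max_bonus=0.03):
    """Return a fee update factor: input-cost growth, adjusted downward
    (or upward) depending on how actual spending compares with the target."""
    # Positive gap -> spending exceeded the target -> downward adjustment.
    gap = (actual_spending - target_spending) / target_spending
    adjustment = max(max_penalty, min(max_bonus, -gap))
    return cost_growth + adjustment

# Spending 4 percent over target: the update falls below input-cost growth
# and here becomes a 2 percent fee cut.
update = fee_update(cost_growth=0.02, actual_spending=104, target_spending=100)
```

When actual spending stays at or below the target, fees rise at least as fast as input costs; when the volume and intensity of services push spending past the target, the adjustment offsets, and can more than offset, cost growth, producing the fee cuts described above.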
The difficulty of stabilizing physician fees in the face of the need to maintain fiscal discipline has spurred congressional interest in other ways to restrain spending growth. As concern about the long-term fiscal sustainability of Medicare has grown, so has the recognition that some of the spending for services provided and ordered by physicians may not be warranted. For example, the wide geographic variation in Medicare spending for physician services—unrelated to beneficiary health status or outcomes—provides evidence that health needs alone do not determine spending. Furthermore, several studies have shown that in some instances growth in the number of services provided may lead to medical harm. Payments under the Medicare program, however, generally do not foster individual physician responsibility for quality, medical efficacy, or efficiency. In recognition of this, the Institute of Medicine has recently recommended that Medicare payment policies should be reformed to include a system for paying health care providers differentially based on how well they meet performance standards for quality or efficiency or both. In April 2005, CMS initiated a demonstration mandated by the Medicare, Medicaid, and SCHIP Benefits Improvement and Protection Act of 2000 (BIPA) to test this approach. Under the Physician Group Practice demonstration, 10 large physician group practices, each comprising at least 200 physicians, are eligible for bonus payments if they meet quality targets and succeed in keeping the total expenditures of their Medicare population below annual targets. Several studies have found that Medicare and other purchasers could realize substantial savings if a portion of patients switched from less efficient to more efficient physicians. The estimates vary according to assumptions about the proportion of beneficiaries who would change physicians. 
In 2003, the Consumer-Purchaser Disclosure Project, a partnership of consumer, labor, and purchaser organizations, asked actuaries and health researchers to estimate the potential savings to Medicare if a small proportion of beneficiaries started using more efficient physicians. The Project reported that Medicare could save between 2 and 4 percent of total costs if 1 out of 10 beneficiaries moved to more efficient physicians. This conclusion is based on information received from one actuarial firm and two academic researchers. One researcher concluded, based on his simulations, that if 5 to 10 percent of Medicare enrollees switched to the most efficient physicians, savings would be 1 to 3 percent of program costs—which would amount to about $5 billion to $14 billion in 2007. The Congress has also recently expressed interest in approaches to constrain the growth of physician spending. The Deficit Reduction Act of 2005 required the Medicare Payment Advisory Commission (MedPAC) to study options for controlling the volume of physicians’ services under Medicare. One approach for applying volume controls that the Congress directed MedPAC to consider is a payment system that takes into account physician outliers. In each of the 12 metropolitan areas studied, we found physicians who treated a disproportionate share of overly expensive patients. Using 2003 Medicare claims data, we identified overly expensive beneficiaries in the 12 areas and computed the percentage they represented in each generalist physician’s Medicare FFS practice. We then identified outlier generalist physicians as those with practices that, relative to their peers, had a percentage of overly expensive patients that was unlikely to have occurred by chance. We concluded that such physicians are likely to practice an inefficient style of medicine. The proportion of generalist physicians found to be outliers varied across the 12 areas. 
In two areas, they accounted for more than 10 percent of the areas’ generalist physician population. We classified beneficiaries as overly expensive if their total Medicare expenditures—for services provided by all health providers, not just physicians—ranked in the top fifth of their health status cohort for 2003 claims. We developed 31 health status cohorts of beneficiaries based on the diagnoses appearing on their Medicare claims and other factors. Within each health status cohort, we observed large differences in total Medicare spending across beneficiaries. For example, in one cohort of beneficiaries whose health status was about average, overly expensive beneficiaries—the top fifth ranked by expenditures—had average total expenditures of $24,574, as compared with the cohort’s bottom fifth, averaging $1,155. (See fig. 1.) This variation may reflect differences in the number and type of services provided and ordered by these patients’ physicians as well as factors not under the physicians’ direct control, such as a patient’s response to and compliance with treatment protocols. Overly expensive beneficiaries accounted for nearly one-half of total Medicare expenditures even though they represented only 20 percent of beneficiaries in our sample. Based on 2003 Medicare claims data, our analysis found outlier generalist physicians in all 12 metropolitan areas we studied. Our methodology assumed that, if overly expensive beneficiaries were distributed randomly across generalists, no more than 1 percent of generalists in any area would be designated as outliers. Across all areas, the actual percentage of outlier generalists ranged from 2 percent to over 20 percent. 
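The classification of overly expensive beneficiaries described above—the top fifth of each health status cohort, ranked by total Medicare expenditures—can be sketched as follows. The beneficiary identifiers and spending amounts are hypothetical.

```python
# Minimal sketch of the "overly expensive" classification: within one health
# status cohort, rank beneficiaries by total spending and flag the top fifth.
# Beneficiary ids and dollar amounts are hypothetical.

def flag_overly_expensive(cohort_spending):
    """Given (beneficiary_id, total_spending) pairs for one cohort,
    return the set of ids in the top fifth by spending."""
    ranked = sorted(cohort_spending, key=lambda x: x[1], reverse=True)
    top_n = max(1, len(ranked) // 5)          # top 20 percent of the cohort
    return {bid for bid, _ in ranked[:top_n]}

cohort = [("b1", 24500), ("b2", 9800), ("b3", 4100),
          ("b4", 2300), ("b5", 1150)]
flagged = flag_overly_expensive(cohort)       # top fifth of this cohort
```

Repeating this within each of the 31 cohorts yields, by construction, a flagged group that is 20 percent of beneficiaries overall, which is the baseline the outlier analysis compares against.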
To identify outlier generalist physicians, we compared the percentage of overly expensive beneficiaries in each physician’s Medicare practice to a threshold value—the percentage of overly expensive beneficiaries in a physician’s Medicare practice that would be expected to occur less than 1 time out of 100 by chance. We classified those who exceeded the threshold value for their metropolitan area as outliers. That is, all physicians had some overly expensive patients in their Medicare practice, but outlier physicians had a much higher percentage of such patients. The Miami area had the highest percentage—almost 21 percent—of outlier generalists, followed by the Baton Rouge area at about 11 percent. (See table 1.) Across the other areas, the percentage of outliers ranged from 2 percent to about 6 percent. In 2003, outlier generalists’ Medicare practices were similar to those of other generalists, but the beneficiaries they treated tended to experience higher utilization of certain services. Outlier generalists and other generalists saw similar average numbers of Medicare patients (219 compared with 235), and their patients averaged similar numbers of office visits (3.7 compared with 3.5). However, after taking into account beneficiary health status and geographic location, we found that beneficiaries who saw an outlier generalist, compared with those who saw other generalists, were 15 percent more likely to have been hospitalized, 57 percent more likely to have been hospitalized multiple times, and 51 percent more likely to have used home health services. By contrast, they were 10 percent less likely to have been admitted to a skilled nursing facility. Consistent with the premise that physicians play a central role in the generation of health care expenditures, some health care purchasers use physician profiling to promote efficiency. 
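The outlier threshold described above can be illustrated with an exact binomial tail probability: if overly expensive patients (20 percent of beneficiaries overall) were distributed randomly, the sketch below finds the smallest count in a practice of a given size that would occur less than 1 time out of 100 by chance. This illustrates the statistical idea only; it is not GAO's actual computation.

```python
# Illustration of the "less than 1 time out of 100 by chance" threshold,
# assuming overly expensive patients (20 percent of beneficiaries) were
# assigned to practices at random. Not GAO's actual methodology.
from math import comb

def binom_tail(n, k, p=0.20):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def outlier_threshold(n, p=0.20, alpha=0.01):
    """Smallest count k such that P(X >= k) < alpha."""
    for k in range(n + 1):
        if binom_tail(n, k, p) < alpha:
            return k
    return n + 1

# For a practice of 200 Medicare patients, the flagging threshold is a
# share of overly expensive patients well above the 20 percent baseline.
k = outlier_threshold(200)
```

Because the threshold scales with practice size, a physician is flagged only when the share of overly expensive patients exceeds what random assignment could plausibly produce, which is why the analysis treats outlier status as evidence of an inefficient practice style rather than bad luck.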
The 10 health care purchasers in our study profiled physicians—that is, compared physicians’ performance to an efficiency standard to identify those who practiced inefficiently. To measure efficiency, the purchasers we spoke with generally compared actual spending for physicians’ patients to the expected spending for those same patients, given their clinical and demographic characteristics. Most of the 10 we spoke with also evaluated physicians on quality. The purchasers linked their efficiency profiling results and other measures to a range of physician-focused strategies to encourage the efficient provision of care. The 10 health care purchasers we examined used two basic profiling approaches to identify physicians whose medical practices were inefficient. One approach focused on the costs associated with treating a specific episode of an illness—for example, a stroke or heart attack—and assessing the physician’s performance based on the resources used during that episode. The other approach focused on costs, within a specific time period, associated with the patients in a physician’s practice. Both approaches shared common features. That is, both used information from medical claims data to measure resource use and account for differences in patients’ health status. In addition, both approaches assessed physicians (or physician groups) based on the costs associated with services that they may not have provided directly, such as costs associated with a hospitalization or services provided by a different physician. Although the method used by purchasers to estimate expected spending for patients varied, all used patient demographics and diagnoses. The programs generally computed efficiency measures as the ratio of actual to expected spending for patients of similar health status. 
Ratios greater than 1.0 (indicating that actual spending exceeds expected spending) suggest relative inefficiency, while ratios below 1.0 suggest efficiency, although purchasers were free to set their own threshold. For example, one purchaser scrutinized physicians with scores above 1.2 for inefficient delivery of care. Some purchasers also took account of additional information before making a final judgment. For example, two purchasers told us that they reexamined the results for physicians who exceeded the threshold for inefficiency to see if there were factors, such as erroneous data, that made an otherwise efficient provider appear inefficient. While our focus was on purchasers who profile for efficiency, purchasers in our study included quality measures as part of their profiling programs. For example, most purchasers evaluated physicians on one or more quality measures, such as whether patients with congestive heart failure were prescribed beta blockers. Some purchasers included factors related to patient access in their evaluations of physicians, such as whether the physician was in a specialty that was underrepresented within the network or within a particular geographic area covered by the network. Purchasers varied with respect to the types of physicians profiled for efficiency. All of the purchasers we interviewed profiled specialists and all but one also profiled primary care physicians. Several purchasers said they would only profile physicians who treated a minimum number of cases; for example, one did not profile psychiatrists because it felt the volume of data was not sufficient to do statistical profiling. Typically such analyses require a minimum sample size to be valid. Purchasers differed on the inclusion of physician groups and individual practitioners. Four of the purchasers profiled physician group practices exclusively, three profiled individual physicians exclusively, and the remaining three profiled both. 
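The efficiency ratio described above can be illustrated directly. The per-patient spending figures are hypothetical, and the 1.2 flagging threshold mirrors the one purchaser's example in the text.

```python
# Sketch of the efficiency ratio used in purchaser profiling: actual
# spending for a physician's patients divided by risk-adjusted expected
# spending for those same patients. All dollar figures are hypothetical.

def efficiency_ratio(actual, expected):
    return sum(actual) / sum(expected)

actual_spend = [5200, 900, 3100, 12000]      # observed spending per patient
expected_spend = [4000, 1200, 2800, 9000]    # expected, given health status

ratio = efficiency_ratio(actual_spend, expected_spend)
flagged = ratio > 1.2    # threshold one purchaser used for extra scrutiny
```

Here actual spending of $21,200 against an expectation of $17,000 yields a ratio of about 1.25, so this hypothetical physician would exceed the 1.2 threshold and, under the practices described above, be reexamined before any final judgment of inefficiency.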
To perform their profiling analyses, eight of the purchasers used episode-grouping models, which group claims into clinically distinct episodes of care—such as stroke—adjusted for case severity or patient health status. This approach can assign one physician primary responsibility for the episode even if the patient sees multiple physicians. Two purchasers used a population-based model, which aggregates patient claims data to compute a health status score for each patient in the population and uses those scores to estimate expected expenditures for the patients a physician treats. The health care purchasers we examined directly tied the results of their profiling methods to incentives that encourage physicians to practice efficiently. In some cases, purchasers implemented these incentives directly, while in other cases, incentives were implemented at the discretion of their clients. We found that the incentives varied widely in design, application, and severity of consequences—from steering patients toward the most efficient providers to excluding a physician from the purchaser’s provider network because of inefficient practice patterns. The following were commonly reported incentives: Physician education: Some health care purchasers told us that they shared their profiling results with physicians to encourage more efficient care delivery or to foster acceptance of the purchaser’s physician evaluation methods. For example, one purchaser’s profiling report compared a physician’s utilization patterns to a benchmark measure derived from the practice patterns of the physician’s peer group, such as cardiologists compared with other cardiologists in the network or primary care physicians compared with other primary care physicians in the network. No purchaser employed education as the sole method of motivating physicians to change their practice patterns. 
Publicly designating physicians based on efficiency or quality: Some purchasers encouraged enrollees to get their care from certain physicians by designating in their physician directories those physicians who met quality or quality and efficiency standards. Other purchasers offered financial incentives to their enrollees to encourage them to patronize such physicians. The incentives may generate higher patient volume for the designated physicians, thereby achieving savings for the purchaser or their clients. Using tiered arrangements to promote efficiency: Several purchasers used profiling results to group physicians in tiers—essentially groups of physicians ranked by their level of efficiency. Enrollees who select physicians in the higher tiers obtain financial advantages—such as lower deductibles or copayments—relative to those who select physicians in the lower tiers. From the purchaser’s point of view, tiering has the advantage of affording enrollees freedom of choice within the purchaser’s network, while making it advantageous for them to seek care from the network’s most efficient physicians. Several reported that a portion of their enrollees or employers of enrollees responded to the incentives offered by the tiered arrangements by switching to more efficient physicians. Bonuses and penalties: Two of the purchasers in our study used bonuses or financial penalties to encourage efficient medical practices. They awarded bonuses to physicians based on their efficiency and quality scores. To finance bonuses, one purchaser withheld 10 percent of each physician’s total reimbursement amount and with those funds paid bonuses only to those physicians who had high quality and efficiency scores. The amount withheld from physicians who did not meet standards served as an implicit financial penalty. Network exclusion: One purchaser terminated its contractual relationship with physicians in its network when it determined that the physicians were practicing inefficiently. 
In an effort to control costs, the purchaser stated that it excluded about 3 percent of the physicians in its network in 2003. Although the purchaser has not ruled out similar actions in the future, it had not excluded additional physicians for reasons of inefficiency at the time of our interview. Evidence from our interviews with the health care purchasers in our study suggests that physician profiling programs may have the potential to generate savings for health care purchasers or their clients. Three of the 10 purchasers provided us with estimates of savings attributable to their physician-focused efficiency efforts. One placed more efficient physicians in a special network and reported that premiums for this network were 3 to 7 percent lower than premiums for the network that includes the rest of its physicians. Another reported that growth in spending fell from 12 percent to about 1 percent in the first year after it restructured its network as part of its efficiency program. By examining the factors that contributed to the reduction, an actuarial firm hired by the purchaser estimated that about three-quarters of the reduction in expenditure growth was most likely a result of the efficiency program. The third purchaser reported a “sentinel” effect—the effect of being scrutinized—resulting from its physician profiling efforts. This purchaser estimated that the sentinel effect associated with its physician efficiency program reduced spending by as much as 1 percent. Three other purchasers suggested their programs might have achieved savings for themselves or their clients but did not provide us with their savings estimates, while four said they had not yet attempted to measure savings at the time of our interviews. Medicare’s data-rich environment is conducive to conducting profiling analyses designed to identify physicians whose medical practices are inefficient compared with their peers. 
CMS has a comprehensive repository of Medicare claims data and experience using key methodological tools. However, CMS may not have legislative authority to implement some of the incentives used by other health care purchasers to encourage efficiency. Fundamental to profiling physicians for efficiency is the ability to make statistical comparisons that enable health care purchasers to identify physicians practicing outside of established norms. CMS has the resources to make statistically valid comparisons, including comprehensive medical claims information, tools to adjust for differences in patient health status, and sufficient numbers of physicians in most areas to construct adequate sample sizes. As with the development of any new system, however, CMS would need to make choices about its design and implementation. Among the resources available to CMS are the following: Comprehensive source of medical claims information: CMS maintains a centralized repository (database) of all Medicare claims that provides a comprehensive source of information on patients’ Medicare-covered medical encounters. The data are in a uniform format, as Medicare claim forms are standardized. In addition, the data are relatively recent: CMS states that 90 percent of clean claims are paid within 30 days and new information is added to the central database weekly. Using claims from the central database, each of which includes the beneficiary’s unique identification number, CMS can identify and link patients to the various types of services they received—including, for example, hospital, home health, and physician services—and to the physicians who treated them. 
Data samples large enough to ensure meaningful comparisons across physicians: The feasibility of using efficiency measures to compare physicians’ performance depends on two factors—the availability of enough data on each physician to compute a reliable efficiency measure and numbers of physicians large enough to provide meaningful comparisons. In 2005, Medicare’s 33.6 million FFS enrollees were served by about 618,000 physicians. These figures suggest that CMS has enough clinical and expenditure data to compute reliable efficiency measures for most physicians billing Medicare. Methods to account for differences in patient health status: Because sicker patients are expected to use more health care resources than healthier patients, patients’ health status needs to be taken into account to make meaningful comparisons among physicians. The 10 health care purchasers we examined accounted for differences in patients’ health status through various risk adjustment methods. Medicare has significant experience with risk adjustment. Specifically, CMS has used increasingly sophisticated risk adjustment methodologies over the past decade to set payment rates for beneficiaries enrolled in managed care plans. To conduct profiling analyses, CMS would likely make methodological decisions similar to those made by the health care purchasers we interviewed. For example, the health care purchasers we spoke with made choices about, among other things, whether to profile individual physicians or group practices; which risk adjustment tool was best suited for the purchaser’s physician and enrollee population; whether to measure costs associated with episodes of care or the costs, within a specific time period, associated with the patients in a physician’s practice; and what criteria to use to define inefficient practices. 
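Risk adjustment of the kind described above can be illustrated with a toy model that predicts expected spending from a patient's age and diagnoses. The base amount, weights, and condition list are invented for illustration and are far simpler than the models CMS actually uses.

```python
# Toy illustration of risk adjustment: expected spending is predicted from
# patient demographics and diagnoses, so that physicians treating sicker
# patients are compared fairly. All weights and conditions are invented.

BASE = 2_000                        # expected annual spending, healthy adult
AGE_WEIGHT = 60                     # added per year of age over 65
CONDITION_WEIGHTS = {"diabetes": 2_500, "chf": 6_000, "copd": 3_200}

def expected_spending(age, conditions):
    extra = sum(CONDITION_WEIGHTS.get(c, 0) for c in conditions)
    return BASE + AGE_WEIGHT * max(0, age - 65) + extra

# An 80-year-old with diabetes and congestive heart failure carries a much
# higher expectation than a healthy 66-year-old, before any physician
# comparison is made.
sicker = expected_spending(80, ["diabetes", "chf"])
healthier = expected_spending(66, [])
```

In a real application the expected amounts, not the actual ones, form the denominator of the efficiency ratio, which is what keeps a physician with a sicker panel from appearing inefficient merely because of case mix.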
CMS would also likely want to take steps similar to those of other purchasers to supplement its efficiency assessments with additional information before using the results to do more than share information with physicians. For example, some purchasers in our study reviewed their profiling results for physicians who did not meet the efficiency standard to validate the accuracy of their assessments. Such validation of profiling results would be appropriate if CMS were to institute financial incentives for physicians to improve the efficiency of the care they provide and order for Medicare beneficiaries. Some of the actions health care purchasers take as a result of their physician profiling may not be readily adaptable to Medicare, given the program’s structural underpinnings, but they may be instructive in suggesting future directions for Medicare. Although Medicare has extensive experience with physician education efforts, the implementation of other strategies to encourage efficiency would likely require legislation providing authority to the Secretary of Health and Human Services. Educational outreach to physicians has been a long-standing and widespread activity in Medicare as a means to change physician behavior based on profiling efforts to identify improper billing practices and potential fraud. Outreach includes letters sent to physicians alerting them to billing practices that are inappropriate. In some cases, physicians are given comparative information on how the physician varies from other physicians in the same specialty or locality with respect to use of a certain service. A physician education effort based on efficiency profiling results would therefore not be a foreign concept in Medicare. For example, CMS could provide physicians a report that compares their practice’s efficiency with that of their peers. This would enable physicians to see whether their practice style is outside the norm. 
In its March 2005 report to the Congress, MedPAC recommended that CMS measure resource use by physicians and share the results with them on a confidential basis. MedPAC suggested that such an approach would enable CMS to gain experience in examining resource use measures and identifying ways to refine them while affording physicians the opportunity to change inefficient practices. Another application of profiling results used by the purchasers we spoke with entailed sharing comparative information with enrollees. CMS has considerable experience comparing certain providers on quality measures and posting the results to a Web site. Currently, Medicare Web sites posting comparative information exist for hospitals, nursing homes, home health care agencies, dialysis facilities, and managed care plans. In its March 2005 report to the Congress, MedPAC noted that CMS could share results of physician performance measurement with beneficiaries once the agency gained sufficient experience with its physician measurement tools. Several structural features of the Medicare program would appear to pose challenges to the use of other strategies designed to encourage efficiency. These features include a beneficiary’s freedom to choose any licensed physician permitted to be paid by Medicare; the lack of authority to exclude physicians from participating in Medicare unless they engage in unlawful, abusive, or unprofessional practices; and a physician payment system that does not take into account the efficiency of the care provided. Under these provisions, CMS would not likely be able—in the absence of additional legislative authority—to designate preferred providers, assign physicians to tiers associated with varying beneficiary copayments, tie fee updates of individual physicians to meeting performance standards, or exclude physicians who do not meet practice efficiency and quality criteria. 
Regardless of how physician profiling results are used, it is critical that the physician community and other stakeholders be involved in, and accept, any actions taken. Several purchasers described how they had worked to get physician buy-in. They explained their methods to physicians and shared data with them to increase physicians’ familiarity with and confidence in the purchasers’ profiling. CMS has several avenues for obtaining the input of the physician community. Among them is the federal rule-making process, which generally provides a comment period for all parties affected by prospective policy changes. In addition, CMS forms federal advisory committees—including ones composed of physicians and other health care practitioners—that regularly provide it with advice and recommendations concerning regulatory and other policy decisions. The health care spending levels predicted to overwhelm the Medicare program call for action to be taken promptly. To address this looming problem, no single action or reform is likely to suffice, and policymakers are seeking solutions among an array of reform proposals. Our findings suggest that physician profiling is one promising, targeted approach toward curbing excessive spending both for physician services and for the services that physicians order. Our profiling of generalist physicians in 12 metropolitan areas found indications of inefficient physician practices occurring in areas with low spending per beneficiary as well as in areas with high spending. To ensure that our estimates were fair, we adjusted them to account for the fact that some physicians have sicker patients than others; in addition, our efficiency standards were based on actual practices by local physicians rather than on a single measure applied to all physicians, regardless of geographic area. Notably, two areas—Miami and Baton Rouge—had particularly large proportions of outlier physicians compared with the other areas. 
Some health care purchasers seek to curb inefficient practices through physician education and other measures directed at physicians’ income—such as discouraging patients from obtaining care from physicians whom the purchaser, through profiling, ranks as inefficient. If similar approaches were adopted in Medicare—that is, profiling physicians for efficiency and strategically applying the results—the experience of other purchasers suggests that reductions in spending growth could be achieved. The adoption of a profiling system could require the modification of certain basic Medicare principles. For example, if CMS had the authority to rank-order physicians based on efficiency and tier beneficiary copayments accordingly, beneficiaries could retain the freedom to choose among providers but would be steered, through financial incentives, toward those identified as most efficient. CMS would likely find it desirable to base the tiers on both quality and efficiency. It would also be important to develop an evaluation component to measure the profiling system’s impact on program spending and physician behavior. In addition, a physician profiling system in Medicare could work in ways that would be complementary to the SGR system. That is, if Medicare instituted a physician profiling system that resulted in gains in efficiency, over time the rate of growth in volume and intensity of physician services could decline and the SGR targets would be less likely to be exceeded. At the same time, under a profiling system that focused on total program expenditures, Medicare could experience a drop in unnecessary utilization of other services, such as hospitalizations and home health care. Although savings from physician profiling alone would clearly not be sufficient to correct Medicare’s long-term fiscal imbalance, it could be an important part of a package of reforms aimed at future program sustainability. 
Given physicians’ contribution to total Medicare spending, we recommend that the Administrator of CMS develop a profiling system that identifies individual physicians with inefficient practice patterns and, seeking legislative changes as necessary, use the results to improve the efficiency of care financed by Medicare. The profiling system should include the following elements: total Medicare expenditures as the basis for measuring efficiency; adjustments for differences in patients’ health status; empirically based standards that set the parameters of efficiency; a physician education program that explains to physicians how the profiling system works and how their efficiency measures compare with those of their peers; financial or other incentives for individual physicians to improve the efficiency of the care they provide; and methods for measuring the impact of physician profiling on program spending and physician behavior. We obtained written comments on a draft of this report from CMS (see app. IV). We obtained oral comments from representatives of the American College of Physicians (ACP) and the American Medical Association (AMA). CMS stated that our recommendation was very timely and that it fits into efforts the agency is pursuing to improve the quality and efficiency of care paid for by Medicare. CMS also found our focus on the need for risk adjustment in measuring physician resource use to be particularly helpful. CMS noted that its current measurement efforts involve evaluation of “episode grouper” technology, which examines claims data for a given episode of care, and called it a promising approach. We do not disagree, but we also believe that approaches involving the measurement of total patient expenditures are equally promising. CMS said that the agency would incur significant recurring costs to develop reports on physician resource use, disseminate them to physicians nationwide, and evaluate the impact of the program. 
While our report notes that CMS is familiar with key methodological tools needed to conduct such an effort, we agree that any such undertaking would need to be adequately funded. CMS was silent on a strategy for using profiling results beyond physician education. We believe that the optimal profiling effort would include financial or other incentives to curb individual physicians’ inefficient practices and would measure the effort’s impact on Medicare spending. AMA and ACP raised three principal concerns about physician profiling: the relative importance of quality and efficiency, the adequacy of risk adjustment methods, and the ways profiling results would be used. Both said that quality standards should be the primary focus of a physician profiling system. AMA said including incentives that promote the provision of high-quality care might increase costs initially but could reduce costs in the long term. Although we agree that quality is an important measure of physician performance, given growing concern about Medicare’s fiscal sustainability, we believe that a focus on the efficient delivery of care is essential. With regard to the use of risk adjustment methods in assessing physician efficiency, both AMA and ACP said that this technique has significant shortcomings. For example, AMA said that diagnostic information included in the claims data used in risk adjustment may not adequately capture differences in patient health status. AMA also said that these data lack information on other factors that affect health status and spending, such as differences in patient compliance with medical advice. ACP echoed this concern. We believe that these claims data limitations are not of sufficient importance to preclude their use for profiling physicians treating Medicare patients. As our report notes, risk adjustment methods using claims information are now used by many private payers in measuring physician resource use. 
Moreover, Medicare currently uses one such risk adjustment method to set payment rates for managed care plans. Finally, both AMA and ACP expressed reservations about linking the results of profiling to physician reimbursement. The AMA stated that it was acceptable to use profiling results for the purpose of physician education, but an exclusive focus on costs was not. Although all of the purchasers we interviewed included physician education in their profiling programs, none of them relied on it as the sole means for encouraging physicians to practice efficiently. Similarly, we believe that, to restrain the growth in Medicare expenditures, a physician profiling system would need financial or other incentives to motivate physicians to practice medicine efficiently. We are sending a copy of this report to the Administrator of CMS. We will also provide copies to others on request. In addition, this report is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have questions about this report, please contact me at (202) 512-7101 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. We developed a methodology to identify physicians whose practices were composed of a disproportionate number of overly expensive beneficiaries—that is, beneficiaries whose costs rank them in the top 20 percent when compared to the costs of other beneficiaries with similar health status. We focused our analysis on generalists—physicians who described their specialty as general practice, internal medicine, or family practice—in the following 12 metropolitan areas: Albuquerque, N.M.; Baton Rouge, La.; Des Moines, Iowa; Phoenix, Ariz.; Miami, Fla.; Springfield, Mass.; Cape Coral, Fla.; Riverside, Calif.; Pittsburgh, Pa.; Columbus, Ohio; Sacramento, Calif.; and Portland, Maine. 
We selected these metropolitan areas to obtain a sample of physicians that was geographically diverse and represented a range in average Medicare spending per beneficiary. We assigned physicians to a particular metropolitan area based on where the plurality of their Medicare expenditures was generated. Our results are not statistically generalizable. To conduct our analysis, we obtained 2003 Centers for Medicare & Medicaid Services (CMS) data from the following sources: (1) the Standard Analytic Files, a repository of Medicare claims information that includes data on physician/supplier, durable medical equipment, skilled nursing, home health, hospice, and hospital inpatient and outpatient services and (2) the Denominator File, a database that contains enrollment and entitlement status information for all Medicare beneficiaries enrolled and/or entitled in a given year. To assess beneficiary health status, we used commercially available software developed by DxCG, Inc. This software uses beneficiary characteristics—age, sex, and Medicaid status—and diagnosis codes included on medical claims to assign each beneficiary a single health “risk score”—a summary measure of the beneficiary’s current health status corresponding to the beneficiary’s expected health care costs relative to the costs of the average Medicare beneficiary. We analyzed the Medicare practices of 7,105 physicians who provided services to 1,283,943 beneficiaries. Because our method for identifying overly expensive beneficiaries requires comparable information on total beneficiary costs, we developed a slightly different methodology for two groups of beneficiaries—survivors (beneficiaries who did not die in 2003) and decedents (beneficiaries who died in 2003). Decedents typically have annualized costs that are much higher than those of survivors but usually have less than 12 months of Medicare enrollment in their last year of life. 
We included survivors in our analysis if they (1) had 12 months of Medicare fee-for-service (FFS) enrollment in 2003 and (2) were not covered by other health insurance for which Medicare was determined to be a secondary payer. Decedents were included if they were continuously enrolled in Medicare FFS as of January 2003 and met the second criterion. Beneficiaries included in our analysis had at least one office visit with a generalist physician in one of the selected metropolitan areas. Using DxCG software, we examined the diagnosis codes on survivors’ 2003 hospital inpatient, outpatient, and physician claims and generated a separate health risk score for each beneficiary. The risk scores reflect the level of a beneficiary’s relative health status, and in our analysis, ranged from .01 (very healthy) to 30.84 (extremely ill). Next, using their risk scores, we assigned survivors into 1 of 31 discrete risk categories. The categories were ordered in terms of health status from very healthy (category 1) to extremely ill (category 31). Finally, we calculated each survivor’s total 2003 Medicare costs from all types of providers (hospital inpatient, outpatient, physician, durable medical equipment, skilled nursing facility, home health, and hospice). We included costs from all Medicare claims submitted on survivors’ behalf, including claims from locations outside the selected metropolitan areas. Within each risk category, we ranked survivors by their total costs. Survivors who ranked in the top 20 percent of their assigned risk category were designated as overly expensive. Figure 2 and figure 3 show the range of costs in the 31 risk categories for survivors in our sample. The methodology we used to identify decedents who were overly expensive was identical to that used for survivors, with one exception. Before ranking decedents by their total costs, we further divided them within each risk category by the number of months they were enrolled in Medicare FFS during 2003. 
This was necessary because decedents varied in the number of months they incurred health care costs. For example, decedents who died in October had up to 10 months to incur costs while those who died in January had 1 month or less to incur costs. The proportion of overly expensive beneficiaries varied across the areas we examined. We identified overly expensive beneficiaries within health status cohorts that spanned all 12 of the metropolitan areas. As a consequence, it was possible that some areas would have proportionately more overly expensive beneficiaries than others. For example, the Miami-Fort Lauderdale-Miami Beach, Fla., Core-Based Statistical Area (CBSA) had the highest proportion of overly expensive beneficiaries, .28, and the Des Moines, Iowa, CBSA had the lowest proportion with .13. The remaining areas had proportions that ranged from .13 to .21. For each generalist physician, we determined the proportion of his or her Medicare patients that were overly expensive. Physicians’ proportions of overly expensive beneficiaries varied substantially both across and within metropolitan areas. For example, in Miami, where the overall proportion of overly expensive patients was .28, individual physicians’ proportions ranged from .08 to .98. Similarly, in Sacramento, the overall proportion was .16, with individual physicians’ proportions ranging from .05 to .60. To ensure that our estimate of each physician’s proportion of overly expensive beneficiaries was statistically reliable, we excluded physicians with small Medicare practices. We classified generalists as outliers if their practice was composed of such a high proportion of overly expensive beneficiaries that the proportion would only be expected to occur by chance no more than 1 time out of 100. In order to determine this proportion (threshold value), we conducted separate Monte Carlo simulations for each area. 
In each simulation, which we repeated 200 times for each metropolitan area, we randomly classified each of a generalist’s patients into one of two categories—overly expensive or other. The probability of a beneficiary being randomly assigned to the overly expensive category was equal to the proportion of physician-patient pairings in the metropolitan area in which the patient was an overly expensive beneficiary. We then determined the percentage of generalists for each proportion of overly expensive patients. The results generated by each of the 200 simulations were averaged to determine an expected percentage of generalists at each proportion of overly expensive beneficiaries. We defined the outlier threshold value as the point in the expected distribution where only 1 percent of physicians would have a proportion of overly expensive beneficiaries that large or larger. To illustrate our method, we present in figure 4 the actual and expected distributions of generalists in a hypothetical metropolitan area. The dotted line represents the distribution of generalists by their proportion of overly expensive beneficiaries that would be expected if such patients were randomly distributed among generalists. The solid line shows the actual distribution of generalists by their proportion of overly expensive patients. The vertical line (outlier threshold value) denotes the 99th percentile of the expected distribution—.25. That is, by chance, only 1 percent of generalists would be expected to have a proportion of overly expensive beneficiaries greater than .25. As shown by the area under the solid line and to the right of the vertical line, about 11 percent of generalists in this hypothetical example had actual proportions of overly expensive beneficiaries that exceeded .25—these generalists would be classified as outliers in our analysis. Table 2 shows that the proportion of overly expensive beneficiaries and the outlier threshold value varied across metropolitan areas. 
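The survivor classification steps described above (discrete risk categories, within-category cost ranking, and top-20-percent designation) can be sketched in a few lines of pandas. The beneficiary data, the three category cut points, and the column names below are invented for illustration; the report's actual analysis used 31 DxCG-based risk categories, so this is a minimal sketch of the approach rather than the actual methodology.

```python
import pandas as pd

# Hypothetical survivor-level data: a risk score from a risk adjustment
# tool and total annual Medicare costs across all provider types.
survivors = pd.DataFrame({
    "bene_id":    ["A", "B", "C", "D", "E", "F", "G", "H", "I", "J"],
    "risk_score": [0.40, 0.50, 0.45, 2.1, 2.3, 2.0, 2.2, 8.7, 9.1, 8.9],
    "total_cost": [900, 3200, 1100, 7500, 22000, 6800, 9400,
                   41000, 95000, 38500],
})

# Step 1: assign each survivor to a discrete risk category ordered from
# healthiest to sickest (3 illustrative categories; the report used 31).
bins = [0, 1.0, 5.0, float("inf")]
survivors["risk_cat"] = pd.cut(survivors["risk_score"], bins=bins,
                               labels=[1, 2, 3])

# Step 2: within each risk category, rank survivors by total cost and
# designate the top 20 percent as overly expensive.
survivors["cost_pctile"] = (survivors
                            .groupby("risk_cat", observed=True)["total_cost"]
                            .rank(pct=True))
survivors["overly_expensive"] = survivors["cost_pctile"] > 0.80
```

A physician's proportion of overly expensive beneficiaries would then simply be the mean of this flag over his or her Medicare patient panel.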
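The Monte Carlo procedure for setting an area's outlier threshold can likewise be sketched with NumPy. The panel sizes, area-wide proportion, and random seed below are hypothetical, and pooling all simulated proportions before taking the 99th percentile is a simplification of the report's averaging of the 200 expected distributions.

```python
import numpy as np

rng = np.random.default_rng(0)

def outlier_threshold(panel_sizes, p, n_sims=200, pct=99):
    """Estimate an area's outlier threshold value.

    panel_sizes: number of Medicare patients for each generalist.
    p: area-wide proportion of physician-patient pairings involving an
       overly expensive beneficiary.
    Returns the proportion of overly expensive patients that only about
    1 percent of physicians would reach or exceed by chance alone.
    """
    panel_sizes = np.asarray(panel_sizes)
    sims = []
    for _ in range(n_sims):
        # Randomly classify each patient in each panel as overly
        # expensive with probability p, then record each physician's
        # simulated proportion.
        hits = rng.binomial(panel_sizes, p)
        sims.append(hits / panel_sizes)
    # 99th percentile of the chance-only (expected) distribution.
    return np.percentile(np.concatenate(sims), pct)

# Hypothetical area: 500 generalists with panels of 25-199 patients and
# an area-wide overly expensive proportion of .20.
sizes = rng.integers(25, 200, size=500)
threshold = outlier_threshold(sizes, p=0.20)
# Physicians whose actual proportion exceeds `threshold` would be
# classified as outliers.
```

Because smaller panels produce noisier proportions by chance, the threshold always sits above the area-wide proportion, which is why areas with higher proportions of overly expensive beneficiaries also tend to have higher thresholds.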
In general, areas that had higher proportions of overly expensive beneficiaries also had higher outlier threshold values. (See table 2.) In 2005 and 2006 we interviewed representatives of 10 health care purchasers who had implemented a physician profiling program. We also conducted some follow-up contacts to ensure the data were current. We had at least one purchaser from each major geographic area of the country as well as one Canadian province. These purchasers represented a mix of traditional health insurance plans and organizations that arrange care for select groups of patients. Five were commercial health plans, three were government agencies, one was a provider network that contracts with several insurance companies to provide care to their enrollees, and one was a trust fund jointly managed by employers and a union. Table 2 presents the basic characteristics of each purchaser’s profiling program and includes, among other things, (1) the approximate number of covered lives and physicians profiled; (2) the year the purchaser began profiling physicians; (3) whether the purchaser profiled individual or group practices or both; (4) whether the purchaser also used quality measures, such as adherence to clinical practice guidelines, to evaluate physicians; and (5) the unit of resource use employed to measure efficiency. The purchasers with the classification of “Episode” used an episode grouper, which links claims into an episode of care that may span multiple encounters and multiple providers. By adjusting for the severity of like illnesses, episode groupers allow purchasers to measure payments to a particular physician or physician group relative to their peers. The purchasers with the classification “Patient” used a person-based method of categorizing illness severity. 
This method allows the purchaser to compare actual expenditures relative to an estimate of what was expected to have been spent given the level of “sickness” of the patients in a particular practice. This appendix displays the distribution of generalist physicians by the proportion of overly expensive beneficiaries in their Medicare practice for each of the 12 metropolitan areas in our study. The vertical line in each chart represents the outlier threshold value for that area. If the proportion of overly expensive beneficiaries in a physician’s practice exceeded this value, then the physician was designated an outlier physician. In addition to the contact above, James Cosgrove and Phyllis Thorburn, Assistant Directors, and Todd Anderson, Hannah Fein, Gregory Giusto, Richard Lipinski, and Eric Wedum made key contributions to this report.
The Medicare Prescription Drug, Improvement, and Modernization Act of 2003 (MMA) directed GAO to study the compensation of physicians in traditional fee-for-service (FFS) Medicare. GAO explored linking physician compensation to efficiency--defined as providing and ordering a level of services that is sufficient to meet a patient's health care needs but not excessive, given the patient's health status. In this report, GAO (1) estimates the prevalence in Medicare of physicians who are likely to practice inefficiently, (2) examines physician-focused strategies used by health care purchasers to encourage efficiency, and (3) examines the potential for the Centers for Medicare and Medicaid Services (CMS) to profile physicians for efficiency and use the results. To do this, GAO developed a methodology using 2003 Medicare claims data to compare generalist physicians' Medicare practices with those of their peers in 12 metropolitan areas. GAO also examined 10 health care purchasers that profile physicians for efficiency. Based on 2003 Medicare claims data, GAO's analysis found outlier generalist physicians--physicians who treat a disproportionate share of overly expensive patients--in all 12 metropolitan areas studied. Outlier generalists and other generalists saw similar numbers of Medicare patients and their respective patients averaged the same number of office visits. However, after taking health status and location into account, GAO found that Medicare patients who saw an outlier generalist--compared with those who saw other generalists--were more likely to have been hospitalized, more likely to have been hospitalized multiple times, and more likely to have used home health services. By contrast, they were less likely to have been admitted to a skilled nursing facility. Certain public and private health care purchasers routinely evaluate physicians in their networks using measures of efficiency and other factors. 
The 10 health care purchasers in our study profiled physicians--that is, compared physicians' performance to an efficiency standard to identify those who practiced inefficiently. To measure efficiency, the purchasers we spoke with generally compared actual spending for physicians' patients to the expected spending for those same patients, given their clinical and demographic characteristics. Most of the 10 purchasers also evaluated physicians on quality. To encourage efficiency, all 10 purchasers linked their physician evaluation results to a range of incentives--from steering patients toward the most efficient providers to excluding physicians from the purchaser's provider network because of inefficient practice patterns. CMS has tools available to evaluate physicians' practices for efficiency but would likely need additional authorities to use results in ways similar to other purchasers. CMS has a comprehensive repository of Medicare claims data to compute reliable efficiency measures for most physicians serving Medicare patients and has substantial experience using methods that adjust for differences in patients' health status. However, CMS may not currently have the flexibility that other purchasers have to link physician profiling results to a range of incentives encouraging efficiency. Implementation of other strategies to encourage efficiency would likely require legislation. CMS said that our recommendation was timely and that our focus on the need for risk adjustment in measuring physician resource use was particularly helpful. However, CMS only discussed using profiling results for educating physicians. GAO believes that the optimal profiling effort would include financial or other incentives to encourage efficiency and would measure the effort's impact on Medicare. GAO concurs with CMS that this effort would require adequate funding.
Several components within DOJ, DHS, and the Departments of Defense, Labor, and State have responsibility for investigating and prosecuting human trafficking crimes, as shown in figure 1. In addition to federal investigative and prosecutorial agencies, other agencies play a role in helping to identify human trafficking, such as DHS’s Transportation Security Administration (TSA), U.S. Citizenship and Immigration Services (USCIS), U.S. Customs and Border Protection, Federal Emergency Management Agency, and Coast Guard. These agencies may encounter human trafficking victims in their daily operations, including at airports, land borders, and seaports. The Equal Employment Opportunity Commission (EEOC) and the Department of Labor’s Wage and Hour Division may encounter human trafficking when conducting investigations related to their statutory authority. For example, EEOC investigates alleged violations of Title VII of the Civil Rights Act of 1964, which prohibits employment discrimination based on race, color, religion, sex, and national origin; in certain circumstances these violations involve human trafficking victims. In addition to investigating and prosecuting human trafficking crimes, federal agencies, primarily DOJ and the Department of Health and Human Services (HHS), support state and local efforts to combat human trafficking and assist victims. Several components within DOJ’s Office of Justice Programs, including the Office of Juvenile Justice and Delinquency Prevention, Office for Victims of Crime, Bureau of Justice Assistance (BJA), and National Institute of Justice (NIJ), administer grants to help support state and local law enforcement in combating human trafficking and to support nongovernmental organizations and others in assisting trafficking victims or conducting research on human trafficking in the United States. 
HHS also provides grant funding to entities to provide services and support for trafficking victims, primarily through components of the Administration for Children and Families (ACF), including the Children’s Bureau, Family and Youth Services Bureau, and Administration for Native Americans. Further, ACF established the Office on Trafficking in Persons in 2015 to coordinate anti-trafficking responses across multiple systems of care. Specifically, HHS supports health care providers, child welfare, social service providers, and other first responders likely to interact with potential victims of trafficking through a variety of grant programs. These efforts include integrated and tailored services for victims of trafficking, training and technical assistance to communities serving high-risk populations, and capacity-building to strengthen coordinated regional and local responses to human trafficking. In our May 2016 report, we identified 105 provisions across the six statutes that we reviewed that called for the establishment of a program or initiative. Many of the provisions identified more than one entity that is responsible for implementing the programs or initiatives. The breakdown of whether or not federal entities reported taking actions to implement these provisions is as follows: For 91 provisions, all responsible federal entities reported taking action to implement the provision. For 11 provisions, all responsible federal entities reported that they had not taken action to implement the provision. For 2 provisions, at least one of the responsible federal entities reported that they had not taken action to implement the provision or they did not provide a response. For 1 provision, none of the responsible federal entities provided a response. 
The provisions cover various types of activities to address human trafficking and related issues, including: Grants (33), Coordination and Information Sharing (29), Victim Services (28), Reporting Requirements (26), Training and Technical Assistance (25), Research (24), Criminal Justice (20), Public Awareness (14), and Penalties and Sanctions (7). Agency officials provided various explanations for why they had not taken any actions to implement certain provisions for which they were designated as the lead or co-lead. For example, in three cases, officials cited that funding was not appropriated for the activity. In June 2016, we reported that federal, state, and local law enforcement officials and prosecutors identified several challenges with investigating and prosecuting human trafficking, including a lack of victim cooperation, limited availability of services for victims, and difficulty identifying human trafficking. The officials told us that obtaining the victim’s cooperation is important because the victim is generally the primary witness and source of evidence. However, the officials said that obtaining and securing victims’ cooperation is difficult, as victims may be unable or unwilling to testify due to distrust of law enforcement or fear of retaliation by the trafficker, among other reasons. According to these officials, victim service programs, such as those that provide mental health and substance abuse services, have helped improve victim cooperation; however, the availability of services is limited. Further, the officials reported that identifying and distinguishing human trafficking from other crimes such as prostitution can be challenging. Federal, state, and local agencies have taken or are taking actions to address these challenges, such as increasing the availability of victim services through grants and implementing training and public awareness initiatives. 
With respect to training, we reported that federal agencies have implemented several initiatives to train judges, prosecutors, investigators, and others on human trafficking. For example, in accordance with the JVTA, the Federal Judicial Center provided training to federal judges and judicial branch attorneys, including judicial law clerks, on human trafficking through a webinar in August 2015. The training walked participants through the provisions of the JVTA and addressed how child exploitation manifests in human trafficking cases, among other things. According to Federal Judicial Center officials, 1,300 registered viewers participated in the webinar, which is now available for on-demand viewing on the Federal Judicial Center website. In addition, DHS’s Immigration and Customs Enforcement, Homeland Security Investigations provides a human trafficking training course that uses video scenarios and group discussions to teach its agents how to identify human trafficking, how to distinguish human trafficking from smuggling, and how to conduct victim-centered investigations, among other things. Similarly, the Federal Bureau of Investigation provides annual specialized training in the commercial sexual exploitation of children and dealing with victims of child sex trafficking. We reported that some federal agencies also have efforts related to increasing public awareness of human trafficking. For example, in January 2016, DOJ’s Office for Victims of Crime released resources to raise awareness and serve victims, including a video series called “The Faces of Human Trafficking” and posters to be used for outreach and education efforts of service providers, law enforcement, prosecutors, and others in the community. The video series includes information about sex and labor trafficking, multidisciplinary approaches to serving victims of human trafficking, effective victim services, victims’ legal needs, and voices of survivors. 
Since 2010, DHS, through the Blue Campaign, reported it has worked to raise public awareness about human trafficking, leveraging partnerships with select government and nongovernmental entities to educate the public to recognize human trafficking and report suspected instances. According to DHS officials, Blue Campaign posters are displayed in public locations, including airports and bus stops. HHS established the “Look Beneath the Surface” public awareness campaign through its Rescue and Restore Victims of Human Trafficking program. The campaign’s materials, which included posters, brochures, fact sheets, and cards with tips on identifying victims, were available in eight languages. In June 2016, we also reported that in addition to training and public awareness, federal agencies have established grant programs to, among other things, increase the availability of services to assist human trafficking victims. We identified 42 grant programs with awards made in 2014 and 2015 that may be used to combat human trafficking or to assist victims of human trafficking, 15 of which are intended solely for these purposes. According to our prior work addressing overlap and duplication: Overlap occurs when multiple granting agencies or grant programs have similar goals, engage in similar activities or strategies to achieve these goals, or target the same or similar beneficiaries. Duplication occurs when a single grantee uses grant funds from different federal sources to pay for the exact same expenditure or when two or more granting agencies or grant programs engage in the same or similar activities or provide funding to support the same or similar services to the same beneficiaries. Each of the 15 grant programs that are intended solely to combat human trafficking contained at least some potential overlap with other human trafficking grant programs in authorized uses. For instance, funding under each of the 15 grant programs can be used for either collaboration or training purposes. 
Similarly, 9 of the 15 grant programs provide support for direct services to victims of human trafficking. Further, of the 123 organizations that were awarded grants specific to human trafficking in fiscal years 2014 or 2015, 13 received multiple grants for either victim services or for collaboration, training, and technical assistance from DOJ and HHS. Of the 13, 7 had multiple grants that could be used for victim services, and 3 had multiple grants that could be used for collaboration, training, and technical assistance. We also reported in June 2016 that there are circumstances in which some overlap or duplication may be appropriate. For example, overlap can enable granting agencies to leverage multiple funding streams to serve a single purpose. However, coordination across the administering granting agencies is critical for such leveraging to occur. On the other hand, there are times when overlap and duplication are unnecessary, such as if a grantee uses multiple funding streams to provide the same services to the same beneficiaries. DOJ and HHS each have intra-agency processes in place to prevent unnecessary duplication. According to DOJ and HHS officials, each agency operates an internal working group to allow the components administering human trafficking grants to communicate on a regular basis. For example, HHS officials indicated that offices that administer human trafficking grant programs meet monthly to exchange information, which may include grant-related announcements and coordination of anti-trafficking activities. DOJ has taken action to implement recommendations from a prior GAO report to identify overlapping grant programs and mitigate the risk of unnecessary grant award duplication in its programs. In response to these recommendations, DOJ also requires grant applicants to identify in their applications any federal grants they are currently operating under as well as federal grants for which they have applied. 
DOJ and HHS officials also reported that they routinely shared grant announcements with one another in an informal manner. For instance, HHS officials noted that DOJ and HHS meet bi-weekly during co-chair meetings for the Senior Policy Operating Group (SPOG) Victim Services Committee and both agencies participate in the SPOG Grantmaking Committee meetings, which provide opportunities to share information for the purposes of coordination and collaboration. Since 2006, the SPOG has provided a formal mechanism for all agencies administering human trafficking grants to communicate with one another. According to the SPOG guidance, which was updated in March 2016, participating agencies are to share information with members of the grants committee prior to final decisions in at least one of the following ways: (1) share plans for programs containing anti-trafficking components during the grant program development process; (2) notify the SPOG of grant solicitations within a reasonable time after they are issued; or (3) notify SPOG partner agencies of proposed funding recipients prior to announcing the award. Further, agencies are also to share information with members of the Grantmaking Committee after final decisions are made. Chairman Grassley, Ranking Member Leahy, and Members of the Committee, this completes my prepared statement. I would be pleased to respond to any questions that you may have at this time. If you or your staff have any questions about this testimony, please contact Gretta L. Goodwin, Acting Director, Homeland Security and Justice, at (202) 512-8777 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff who made key contributions to this testimony are Kristy Love, Assistant Director; Kisha Clark, Analyst-in-Charge; Paulissa Earl; Marycella Mierez; and Amanda Parker. 
Key contributors for the previous work on which this testimony is based are listed in each product. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
|
Human trafficking involves the exploitation of a person typically through force, fraud, or coercion for the purpose of forced labor, involuntary servitude, or commercial sex. Human trafficking victims include women, men, and transgender individuals; adults and children; and foreign nationals and U.S. citizens or nationals who are diverse with respect to race, ethnicity, and sexuality, among other factors. Over the past few decades, Congress has taken legislative action to help combat human trafficking and ensure that victims have access to needed services. The executive branch also has several initiatives to address human trafficking in the United States and assist victims. The Justice for Victims of Trafficking Act of 2015 (JVTA) includes two provisions for GAO to study efforts to combat human trafficking. This testimony summarizes GAO’s May 2016 and June 2016 reports that were related to the JVTA. Specifically, this testimony addresses (1) federal efforts to implement certain provisions across six human trafficking-related statutes, including the JVTA; (2) any challenges faced by federal and selected state law enforcement and prosecutorial agencies when addressing human trafficking; and (3) federal grant programs to combat trafficking and assist trafficking victims, as well as the extent to which there is any duplication across these grant programs. GAO identified 105 provisions across six human trafficking-related statutes that called for the establishment of a program or initiative. Many of the provisions identified more than one entity that is responsible for implementing the programs or initiatives. The breakdown of whether or not federal entities reported taking actions to implement these provisions is as follows: For 91 provisions, all responsible federal entities reported taking action to implement the provision. For 11 provisions, all responsible federal entities reported that they had not taken action to implement the provision. 
For 2 provisions, at least one of the responsible federal entities reported that they had not taken action to implement the provision or they did not provide a response. For 1 provision, none of the responsible federal entities provided a response. GAO identified 42 grant programs with awards made in 2014 and 2015 that may be used to combat human trafficking or to assist victims of human trafficking, 15 of which are intended solely for these purposes. Although some overlap exists among these human trafficking grant programs, federal agencies have established processes to help prevent unnecessary duplication. For instance, in response to recommendations in a prior GAO report, DOJ requires grant applicants to identify any federal grants they are currently operating under as well as federal grants for which they have applied. In addition, agencies that participate in the grantmaking committee of the Senior Policy Operating Group (SPOG)—an entity through which federal agencies coordinate their efforts to combat human trafficking—are to share grant solicitations and information on proposed grant awards, allowing other agencies to comment on proposed grant awards and determine whether they plan to award funding to the same organization.
|
CID carries out IRS’ criminal law enforcement responsibilities under three principal statutes. Under title 26 U.S.C., IRS has authority to investigate alleged criminal tax violations, such as tax evasion and filing a false tax return. Under title 18 U.S.C., IRS has authority to investigate a broad range of fraudulent activities, such as false claims against the government and money laundering. Under title 31 U.S.C., IRS is responsible for enforcing certain recordkeeping and reporting requirements of large currency transactions, such as cash bank deposits of more than $10,000. In carrying out its responsibilities, CID coordinates as necessary with IRS’ District Counsel, the Tax Division within the Department of Justice (Justice), and local U.S. Attorneys to prosecute violators of these statutes. Combating money laundering and other financial crimes is considered a high priority by both Justice and the Department of the Treasury (Treasury). According to Justice officials, because of CID’s expertise in conducting detailed financial investigations, U.S. Attorneys and law enforcement agencies routinely rely on CID’s assistance in investigating financial crimes, particularly those involving money laundering related to narcotics trafficking. In addition, CID agents have access to tax information, which they can use to develop financial investigations more fully. CID is also involved in ongoing efforts to identify and investigate emerging financial crimes, such as health care and bankruptcy fraud. According to CID officials, although such assistance places competing demands on CID’s time, it also aids in establishing the cooperative environment conducive to getting CID’s tax cases prosecuted and in obtaining information that can lead to criminal tax investigations. Historically, CID’s total staffing and budget have represented about 5 percent of IRS’ overall resources. 
As of the end of fiscal year 1996, CID had the full-time equivalent of 4,504 staff, including 3,065 special agents, and a budget of $366 million. CID is also reimbursed for some of the assistance it provides to other law enforcement agencies, particularly Justice’s Organized Crime Drug Enforcement Task Force (OCDETF) program. To determine the actions CID has taken since the early 1990s to increase the time spent on tax investigations relative to nontax investigations, we interviewed senior CID officials in IRS’ National Office, officials from both the Tax and Criminal Divisions in Justice, and officials from the Office of the Under Secretary for Enforcement in Treasury. We interviewed the CID officials because of their responsibilities in managing CID operations and setting division policies. Justice and Treasury officials were interviewed to obtain their opinions regarding CID’s (1) assistance in narcotics and money laundering investigations and (2) increased emphasis on tax investigations. We also reviewed CID’s annual goals and objectives and annual performance reports for fiscal years 1990 through 1996, as well as relevant documentation on the reorganization of its administrative functions and operations. To determine the types of investigations initiated and the results of referrals to U.S. Attorneys for prosecution and sentences resulting from these prosecutions, we analyzed IRS’ Criminal Investigation Management Information System (CIMIS) data. CID uses CIMIS data to track the status and overall results of its criminal investigations, including the direct investigative time (DIT) expended on investigations. DIT is the amount of time that CID agents spend directly working on investigations. 
We selected fiscal years 1990 through 1996 to identify CID’s investigative trends in order to capture data from the time of the IRS studies that raised concerns about CID’s investigative priorities through fiscal year 1996, the most recent period for which data were available. We obtained and analyzed CIMIS data to identify nationwide by fiscal year (1) the number and results of various types of CID investigations, and (2) the principal sources of information that led to CID investigations. CID staff, at our request, reconfigured CIMIS data for fiscal years 1990 through 1996 to reflect the current IRS field alignment of 4 regions and 33 districts, as well as CID’s current program areas—fraud and narcotics—with fraud further broken out between tax gap fraud and other fraud. Other than reconciling the totals from CIMIS data extracts to CID annual performance reports, we did not verify the accuracy of the CIMIS data. We did our work from October 1996 to August 1997 in accordance with generally accepted government auditing standards. The work was done at IRS’ National Office and Southeast Regional Office; IRS’ Georgia, South Florida, and Delaware-Maryland District Offices; and at U.S. Attorney’s offices in the Northern District of Georgia, the Southern District of Florida, and the Maryland District. We selected the offices we visited because of the proximity of our staff working on this assignment. We requested comments on a draft of this report from the Acting Commissioner of Internal Revenue and the Attorney General. Their comments are discussed at the end of this letter. In the early 1990s, concerns raised in IRS studies regarding CID’s investigative priorities spurred CID to take actions to increase the amount of time its agents spent on tax investigations. 
Between fiscal years 1990 and 1992, the percent of DIT spent on tax gap investigations decreased from 56 percent to 46 percent; since then, the percent of DIT spent on tax gap investigations increased to 59 percent as of fiscal year 1996. CID has established a range of 57 to 61 percent of DIT to be spent on tax gap investigations as its goal for fiscal year 1997 and beyond. Subsequent to hearings on IRS employee misconduct in 1989 before the Subcommittee on Commerce, Consumer and Monetary Affairs, House Committee on Government Operations, the Commissioner of Internal Revenue appointed an independent panel to review various concerns raised during the hearings, including issues relating to criminal investigations. In its October 1990 report, the panel stated that there had been a significant decrease in CID resources applied to tax investigations and a corresponding increase in resources applied to nontax investigations. The panel believed that CID’s work priorities were not properly aligned with IRS’ strategic goal of increasing taxpayers’ compliance with the tax laws. The panel recommended that CID (1) establish a criminal enforcement policy in line with IRS’ overall efforts to increase compliance with the tax laws, (2) ensure that its allocation of resources and mix of cases are consistent with such a policy, and (3) closely monitor and control implementation of this policy through the National Office. Also, to address concerns about whether CID’s workload was properly balanced between tax and nontax investigative efforts, IRS convened a study group that included representatives from Justice and Treasury. 
The study group’s August 1991 report found that CID resources used for tax investigations had declined about 18 percentage points between fiscal years 1980 and 1990; on that basis, the study group recommended that resources devoted to tax investigations be increased and that future resources devoted to narcotics investigations be limited to the amount expended in fiscal year 1991. The IRS Executive Committee agreed with these recommendations, and in June 1992 CID initiated an action plan to implement them. The actions led to a reorganization of CID, which began in October 1993 and was fully implemented in October 1994. The reorganization was done in part with the intent of giving CID’s national office a better means to control and monitor field activities to keep them aligned with national policies and objectives as recommended by the review panel and the study group. In terms of its organizational structure, CID was reduced from 7 regions and 63 districts to 4 regions and 34 districts. In addition, the position of Director of Investigation (DI), reporting directly to the IRS National Office Assistant Commissioner for Criminal Investigation, was established in each region to oversee and coordinate investigative activities. The DIs replaced seven former Assistant Regional Commissioners for Criminal Investigation, who reported directly to the Regional Commissioners. The DIs are responsible for ensuring that CID field offices adhere to national office program objectives and policies. Another action CID took to better track the allocation of its resources to tax versus nontax investigations was to consolidate its major program areas and to establish a specific category for tax gap investigations. In fiscal year 1995, CID consolidated the five program areas under which investigations had been categorized and tracked—narcotics, organized crime, public corruption, financial compliance, and other illegal crime—into two principal program areas—fraud and narcotics. 
The fraud program was subdivided into tax gap fraud and other fraud. Tax gap fraud pertains to investigations of legal industries with alleged criminal tax violations. The other fraud category involves investigations of illegal industries or money laundering investigations with no tax-related charges. The narcotics program primarily relates to investigations of money laundering activity by individuals and organizations involved in narcotics trafficking. Beginning in fiscal year 1996, CID set specific national goals for the percent of DIT to be used on tax gap and narcotics investigations to help ensure that additional resources would be allocated to tax gap investigations. The fiscal year 1996 DIT goals were 58 percent for tax gap investigations—1 percentage point higher than the actual DIT for fiscal year 1995—and 24 percent for narcotics investigations—the same as the actual DIT for fiscal year 1995. Since CID began taking these actions, DIT applied to tax gap investigations has increased. (See fig. 1.) According to data provided by IRS, the percent of time applied to tax gap investigations for fiscal years 1993 through 1996 increased 13 percentage points. As of fiscal year 1996, CID applied 59 percent of DIT to tax gap investigations, exceeding its goal by 1 percentage point, while applying 22 percent of DIT to narcotics investigations, 2 percentage points short of its goal. According to CID national office officials, the goal for the amount of DIT to be allocated to tax gap investigations in fiscal year 1997 is a range of 57 to 61 percent, and the goal for narcotics investigations is a range of 23 to 25 percent. They stated that these goals, which are expected to be the goals for the next few years, were developed with input from the DIs. 
It is CID officials’ judgment that these goals will enable CID to (1) conduct investigations in support of IRS’ strategic goal of increasing compliance with the tax laws; (2) contribute to the government’s efforts in combating narcotics and money laundering; and (3) continue allocating some of its investigative time to cases involving emerging financial crimes, such as health care fraud. CID considers completed investigations that merit referral to the U.S. Attorneys for prosecution as an important step toward the eventual prosecution, conviction, and sentencing for criminal tax violations and related financial crimes. By publicizing convictions, CID hopes to deter others from engaging in such criminal activity and to promote voluntary compliance with the tax laws. Consequently, CID uses statistical data from CIMIS to track the number and percent of investigations initiated, as well as the number and percent of referrals made to U.S. Attorneys for prosecution and sentences handed down by the U.S. courts based on CID cases. CIMIS data show that the percent of tax gap investigations initiated, the percent of tax gap cases referred to U.S. Attorneys for prosecution, and the percent of court sentences based on tax gap cases have all begun to increase since CID increased the time spent on tax gap investigations. However, as of fiscal year 1996, the increases have not been enough to match fiscal year 1990 levels for these indicators. As shown in figure 2, tax gap investigations represented 54 percent of all CID investigations initiated in fiscal year 1996. This is an increase over the fiscal year 1992 level of 47 percent and just under the fiscal year 1990 level of 55 percent. The figure also shows that between fiscal years 1990 and 1996, narcotics investigations decreased from 30 percent to 25 percent of all CID investigations initiated. Other fraud investigations were 6 percentage points higher in fiscal year 1996 than in fiscal year 1990. 
Figure 3 shows that, in general, the percent of CID cases being referred to the U.S. Attorneys for prosecution for tax gap fraud since fiscal year 1992 has increased, while the percent of other types of referrals—narcotics and other fraud cases—either declined or remained somewhat stable. Specifically, tax gap referrals represented 47 percent of all CID referrals in fiscal year 1996 compared to 39 percent in fiscal year 1992 and 49 percent in fiscal year 1990. Court sentences—including incarceration, probation, and fines—based on tax gap investigations decreased from 54 percent of all court sentences based on CID investigations in fiscal year 1990 to a low of 37 percent in fiscal year 1994. Since that time tax gap sentences have increased to 44 percent of all court sentences for CID cases as of fiscal year 1996. Overall, sentences based on narcotics investigations increased from 32 percent to 39 percent of all court sentences based on CID investigations between fiscal years 1990 and 1993, then decreased to 31 percent as of fiscal year 1996. (See fig. 4.) Additional information relating to CID’s investigations between fiscal years 1990 and 1996 is shown in appendixes I, II, III, and IV. Appendix I shows the number of staff days applied nationwide by type of criminal investigation. Appendix II contains information on the percent of DIT applied to each type of criminal investigation by IRS location. Appendix III shows the numbers of investigations, referrals to the U.S. Attorneys for prosecution, and court sentences by type of criminal investigation. Appendix IV discusses the principal sources of information on which CID’s investigations were based. IRS and the Department of Justice each provided comments on a draft of this report. Each agency generally agreed with the information presented in the report and offered technical comments, which we have incorporated where appropriate. 
Copies of this report are being sent to the Chairmen and Ranking Minority Members of the Senate Committee on Finance, the Senate Committee on Governmental Affairs, the House Committee on Ways and Means, and the House Committee on Government Reform and Oversight; the Chairman and Ranking Minority Member of the Subcommittee on Treasury, General Government, and Civil Service, Senate Committee on Appropriations; and the Chairman and Ranking Minority Member of the Subcommittee on Treasury, Postal Service, and General Government, House Committee on Appropriations; various other congressional committees; the Secretary of the Treasury; the Attorney General; and other interested parties. We will also make copies available to others upon request. Major contributors to this report are listed in appendix VI. Please contact me on (202) 512-9110 if you or your staff have any questions about this report. This appendix presents the nationwide number of staff days applied directly to the major types of Criminal Investigation Division (CID) investigations. This does not include staff days applied to noninvestigative activities, such as training. Table I.1: Number of Direct Staff Days Applied Nationwide by Type of Criminal Investigation in Fiscal Years 1990 Through 1996. Note 1: Investigative time spent following up on information provided to CID that indicates potential criminal violations prior to initiation of an investigation is categorized as information items. Note 2: Totals do not add due to rounding. This appendix shows the percent of direct investigative time (DIT) applied to the major types of CID investigations nationwide, by regions, and by district offices from fiscal years 1990 through 1996. For fiscal year 1996, CID set national DIT goals of 58 percent for tax gap investigations and 24 percent for narcotics investigations. 
To achieve these goals, CID requested that the Directors of Investigation for each region help to ensure that the regional DIT was within a range of 58 to 60 percent for tax gap investigations and 24 to 26 percent for narcotics investigations. Table II.1: Percent of Total DIT Applied to Tax Gap Investigations by Location in Fiscal Years 1990 Through 1996. Table II.2: Percent of Total DIT Applied to Narcotics Investigations by Location in Fiscal Years 1990 Through 1996. Table II.3: Percent of Total DIT Applied to Other Fraud Investigations by Location in Fiscal Years 1990 Through 1996. This appendix presents information on the number of CID investigations that were initiated, the number of investigations in which CID recommended prosecution, and the number of sentences resulting from prosecutions from fiscal year 1990 through fiscal year 1996. According to CID officials, completing an investigation and subsequently prosecuting and sentencing the subject of the investigation may take a year or more. As a result, the number of prosecution recommendations and sentences shown in a particular fiscal year in tables III.2 and III.3 may not have resulted from the investigations initiated in the corresponding fiscal year in table III.1. CID relies on various sources of information for initiating its investigations, including information from (1) within IRS, such as from the Examination Division; (2) other government sources, such as U.S. Attorneys; (3) currency transaction reports; and (4) the public. Although information from other government sources may result in various types of CID investigations, including tax fraud, information from within IRS predominantly results in tax fraud investigations. 
Information provided from within IRS and by other government sources accounted for about 75 percent of the total CID investigations initiated each year for fiscal years 1990 through 1996. As shown in figure IV.1, investigations based on information provided by other government sources increased from about 45 percent in fiscal year 1990 to about 51 percent in fiscal year 1996. Investigations based on information provided from within IRS fluctuated, from about 32 percent in fiscal year 1990 to about 29 percent in fiscal year 1996. In 1991, a study group that was convened to examine CID’s workload and to recommend changes to better balance, direct, and strengthen its future investigative activities recommended that IRS reemphasize its internal fraud referral program. In an effort to increase the quality of internal fraud referrals from other IRS groups to CID that may lead to tax fraud investigations, IRS established formal fraud referral procedures effective for fiscal year 1996. This included establishing the position of fraud coordinator in each district office to act as a focal point for fraud referrals. According to CID officials, the objective of these procedures is to increase coordination between CID and other IRS functions, particularly the Examination Division, in an effort to ensure that only cases involving potential criminal fraud, rather than civil fraud, are referred to CID. CID officials further stated that it is too soon to determine the overall success of the new procedures. A. Carl Harris, Assistant Director; Clarence Tull, Senior Evaluator; Sally P. Gilley, Evaluator. 
|
GAO reported on the actions the Internal Revenue Service's (IRS) Criminal Investigation Division (CID) has taken since the early 1990s to increase the time spent on tax investigations versus nontax investigations, focusing on: (1) investigations initiated by CID; and (2) referrals to U.S. Attorneys for prosecution and court sentences based on these investigations, for fiscal year (FY) 1990 through FY 1996. GAO noted that: (1) between FY 1990 and FY 1992, IRS data show that the percent of time spent on tax gap investigations decreased by 10 percentage points, continuing a downward trend since the early 1980s; (2) on the basis of the recommendations of two IRS studies done in the early 1990s, CID began in October 1993 taking actions designed to increase the amount of time its agents spent conducting tax investigations; (3) specifically, CID reorganized its administrative functions and operations with the intent of better targeting resource allocations; (4) it also consolidated and recategorized its program areas with an objective of better targeting its investigations; (5) in addition, as of FY 1996, CID established goals for the percent of time to be spent on its investigations, particularly for tax gap investigations; (6) since these actions were initiated, the percent of time spent on tax gap investigations has increased by 13 percentage points, from a low of 46 percent in 1992 to 59 percent in 1996; (7) overall, the 59 percent in FY 1996 represented a net increase of 3 percentage points over the FY 1990 level; (8) between FY 1992 and FY 1996, there was an increase in the percent of tax gap investigations that CID initiated and in the percent of referrals to U.S. Attorneys for prosecution based on tax gap cases; (9) since FY 1994, the percent of court sentences based on tax gap cases has also increased; (10) however, as of FY 1996 the increases in these indicators have not been enough to match FY 1990 levels.
|
How and when federal agencies can dismiss or take other action to address poor performance has been a long-standing personnel issue dating back to the creation of the civil service, and it has been the subject of a number of reforms since. Modern merit principles state that appointments should be based upon qualifications, employees should maintain high standards of integrity and conduct, and employees should operate free of political coercion. However, according to the Merit Systems Protection Board (MSPB), the mechanisms put in place to ensure merit principle goals are met have at times been seen as bureaucratic obstructions that reduce civil service effectiveness. The Civil Service Reform Act of 1978 (CSRA) was intended, in part, to address the difficulty of dismissing employees for poor performance. Among other changes, CSRA established new procedures, set forth under chapter 43 of title 5 of the U.S. Code, for taking action against an employee based on poor performance. Despite CSRA’s enactment, addressing poor performance continues to be a complex and challenging issue for agencies to navigate. In 1996, we testified that the redress system that grew out of CSRA and provides protections for federal employees facing dismissal for performance or other reasons diverts managers from more productive activities and inhibits some of them from taking legitimate actions in response to performance or conduct problems. In 2005, we reported on ways agencies have sought to better address poor performance, including more effective performance management and efforts to streamline appeal processes. In 2014, we testified that opportunities remain for agencies to more effectively deal with poor performance through enhanced performance management.
In general, agencies have three means to address employees’ poor performance, with dismissal as a last resort: (1) day-to-day performance management activities (which should be provided to all employees, regardless of their performance levels), (2) dismissal during probationary periods, and (3) use of formal procedures. Agencies’ choices will depend on the circumstances at hand. The first opportunity a supervisor has to observe and correct poor performance is in day-to-day performance management activities. Performance management and feedback can be used to help employees improve so that they can do the work or, in the event they cannot do the work, so that they can agree to move on without going through the dismissal process. Agencies invest significant time and resources in recruiting potential employees, training them, and providing them with institutional knowledge that may not be easily or cost-effectively replaceable. Therefore, effective performance management, which consists of activities such as expectation setting, coaching, and feedback, can help sustain and improve the performance of more talented staff and can help marginal performers to become better. According to officials we interviewed and our literature review, agencies should seek ways to improve an employee’s performance and only dismiss that employee if he or she does not reach an acceptable performance level. The Office of Personnel Management’s (OPM) experience suggests that many employees who are considered to exhibit performance problems can often improve when action is taken to address their performance, such as employee counseling, clarification of expectations, or additional training. Performance improvement is considered a win-win for both the agency and the employee because it preserves the investments agencies have already made in that individual and the investments the individual has made in the agency.
We have previously reported that day-to-day performance management activities benefit from performance management systems that, among other things, (1) create a clear “line of sight” between individual performance and organizational success; (2) provide adequate training on the performance management system; (3) use core competencies to reinforce organizational objectives; (4) address performance regularly; and (5) contain transparent processes that help agencies address performance “upstream” in the process within a merit-based system that contains appropriate safeguards. Implementing such a system requires supervisors to communicate clear performance standards and expectations, to provide regular feedback, and to document instances of poor performance. In cases where an employee cannot do the work, regular supervisory feedback may help the employee realize that he or she is not a good fit for the position and should seek reassignment to a more appropriate position within the agency or should voluntarily leave the agency, rather than go through the dismissal process. According to the performance management experts and labor union officials we interviewed, an employee voluntarily leaving is almost always preferable to dismissal and benefits all parties. Experts stated that such an arrangement can produce the following benefits: (1) the employee maintains a clean record of performance, allowing him or her to pursue a more suitable position; (2) the supervisor can focus on fulfilling the agency’s mission, rather than expending the time and energy associated with the dismissal process; and (3) the agency and the employee avoid costs associated with litigation. A clean record matters because unacceptable performance scores and dismissal actions can severely limit an employee’s job prospects, within and outside of the federal government. In some cases, an employee leaves before a poor rating is issued; other times, employees and agencies may agree to have the record expunged. Organizations we interviewed stressed that agreeing to a clean record of performance as part of a voluntary separation can be appropriate, particularly when an employee has otherwise demonstrated professional aptitude.
They cautioned, however, that clean record agreements must be used judiciously to avoid making a low-performing employee another agency’s problem. However, effective performance management has been a long-standing challenge for the federal government, and the issue is receiving government-wide attention. In 2011, the National Council on Federal Labor-Management Relations (in conjunction with the Chief Human Capital Officers (CHCO) Council, labor unions, and others) developed the Goals-Engagement-Accountability-Results (GEAR) framework. The framework was designed to help agencies improve the assessment, selection, development, and training of supervisors. GEAR emphasized that agencies should select and assess supervisors based on supervisory and leadership proficiencies rather than technical competencies, and should hold them accountable for performance of supervisory responsibilities. In June 2014, OPM officials said that the agency will facilitate collaboration and information sharing among agencies on their approaches to implementing the principles outlined in the GEAR framework. They added that OPM will continue to provide technical support and expertise on successful practices for performance management. Given the critical role that supervisors play in performance management, it is important for agencies to identify, promote, and continue to develop effective supervisors. However, according to CHCOs we interviewed and to our literature review, performance management continues to be a challenge at many agencies for three reasons: Some employees promoted to supervisory positions because of their technical skill are not as inclined toward supervision.
According to CHCOs we interviewed, as higher-graded work in the federal government is typically in managerial and supervisory positions, career advancement in many agencies requires that employees take on supervisory responsibilities. However, some employees critical to meeting the agency’s mission are not interested in, or not as inclined toward, supervisory duties, but are promoted by the agency to increase their pay and to retain them. As a result, some supervisors are not able to effectively conduct performance management activities. NASA addresses this problem by offering a dual career ladder structure: one ladder to advance employees who may have particular technical skills or education but who are not interested in pursuing a management or supervisory track, and another for those seeking managerial responsibilities. One potential benefit of this approach is that agencies may have more flexibility to promote supervisors who are better positioned to effectively address poor performance. Supervisory training may not cover performance management sufficiently. Under 5 U.S.C. § 4121, agencies, in consultation with OPM, are required to establish training programs for supervisors on actions, options, and strategies to use in relating to employees with unacceptable performance and in improving that performance, and in conducting employee performance appraisals, among other things. OPM implementing regulations state that all agencies are required to have policies to ensure they provide training within one year of an employee’s initial appointment to a supervisory position. However, some agencies include performance management as part of a general new supervisory curriculum that also includes training on subjects such as cybersecurity, ethics, and an array of human resource policy topics.
CHCOs told us that receiving training in this way can be “like drinking from a fire hose” and can be difficult to fully retain, particularly for topics that can benefit from experiential learning, such as dealing with poor performance. Some agencies seek to address this problem by assigning a new supervisor a mentor to assist with ongoing coaching in performance management and in other areas where the supervisor may have limited previous experience. Agencies may not be using the supervisory probationary period as intended. A new supervisor is given a 1-year probationary period to demonstrate successful performance as a supervisor. During the supervisory probationary period, the agency is to determine whether to retain that employee as a supervisor or to return the employee to a non-supervisory position. The MSPB found that agencies are not consistently using the probationary period to assess new supervisors’ capabilities and that supervisors in general receive varying levels of feedback from management. CHCOs told us a related issue is that the supervisory probationary period may not be long enough for the supervisor to conduct many performance management responsibilities associated with the agency’s employee appraisal cycle. As a result of these issues, agencies may not be providing adequate feedback to help new supervisors understand where further development is needed and whether they are well suited for supervisory responsibilities, and new supervisors may not have the opportunity to demonstrate performance management capabilities. MSPB officials told us that some agencies address these issues by providing details or rotation opportunities where employees interested in supervisory positions can observe and, as appropriate, participate in performance management activities in other parts of the organization.
These rotations not only give the employee more experience in that role, but can also give the agency time to observe and assess that employee’s potential for success as a supervisor. We previously reported that within the Nuclear Regulatory Commission, where a high number of technical experts are employed, rotational assignments are encouraged to build supervisory capacity and to allow interested employees an opportunity to gain new experiences and responsibilities. As described above, although effective performance management continues to be a challenge at many agencies, individual agencies have taken steps to better identify those employees with an aptitude for performance management, to develop related leadership skills, and to more fully assess those employees before those individuals are given supervisory responsibilities. According to OPM officials, other agencies have authority to take similar actions as appropriate for their agency. When an individual enters the competitive service, he or she is placed on a probationary period, which lasts for 1 year. Individuals entering the excepted service may serve a trial period, often for 2 years. The probationary period is the last step in the employee screening process during which time, according to an MSPB report, the individual needs to demonstrate “why it is in the public interest for the government to finalize an appointment to the civil service.” The appeal rights of an individual in the probationary period are limited. If an agency decides to remove an individual during the probationary period, the agency is not required to follow the formal procedures for removing an employee (described below). Rather, the agency’s only obligation is to notify the individual in writing of its conclusions regarding the individual’s inadequacies and the effective date of the removal. Generally, a probationary employee may not appeal his or her removal.
Appeal rights are extended to employees in the competitive service and to preference eligible employees in the excepted service who have completed 1 year of current continuous service. Appeal rights are extended to non-preference eligible excepted service employees after 2 years of current continuous service. Because dismissing a poorly performing employee becomes more difficult and time consuming after the probationary period, it is important that agencies use this time to assess employee performance and dismiss those who cannot do the work. However, according to our interviews, supervisors are often not making performance-related decisions about an individual’s future likelihood of success with the agency during the probationary period. Interviewees said this can happen for two reasons: (1) the supervisor may not know that the individual’s probationary period is ending, and (2) the supervisor has not had enough time to observe the individual’s performance in all critical areas of the job. Because of these two possible issues, agencies risk retaining poorly performing individuals in civil service positions, with all the rights that such appointments entail. According to OPM, to remedy the first problem, some agencies are using a tool, such as an automatic notification issued from the agency’s payroll system, to remind supervisors that an individual’s probationary period is nearing its end and to take action as appropriate. While not all agencies use this tool, OPM officials told us that all Shared Service Centers’ existing HR systems already contain the functionality to notify supervisors that the probationary period is ending. Because it is the agencies’ decision whether or not to use automated notifications, it is important that agencies are aware of and understand the potential benefits of this tool.
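Such a payroll-system reminder can amount to little more than a date comparison. A minimal sketch, assuming a 1-year probationary period and a hypothetical 60-day alert window (the function names and window length are illustrative assumptions, not features of any agency system):

```python
from datetime import date, timedelta

PROBATION_LENGTH = timedelta(days=365)  # 1-year competitive-service probationary period
REMINDER_WINDOW = timedelta(days=60)    # assumed lead time for the supervisor alert

def probation_end(appointment_date: date) -> date:
    """Return the date the 1-year probationary period ends."""
    return appointment_date + PROBATION_LENGTH

def needs_reminder(appointment_date: date, today: date) -> bool:
    """True when the probationary period ends within the reminder window but has
    not yet expired: the point at which an automated HR system would prompt the
    supervisor to make a retention decision."""
    end = probation_end(appointment_date)
    return today <= end <= today + REMINDER_WINDOW

# Example: an employee appointed January 15, checked about six weeks before year-end.
print(needs_reminder(date(2013, 1, 15), date(2013, 12, 1)))  # True
```

In practice such logic would run as a scheduled job against personnel records, but the underlying check is this simple, which is consistent with OPM officials' observation that existing HR systems already contain the needed functionality.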
Other agencies require an affirmative decision by the individual’s supervisor (or similar official) before deciding whether to retain an individual beyond the probationary period. By sending a reminder or requiring an affirmative decision, supervisors know when the probationary period is ending and are prompted to consider the prospects of the individual, according to CHCOs we interviewed and to our literature review. OPM considers an affirmative decision a leading practice and has implemented it for its supervisors. However, not all agencies have an automated tool to alert supervisors prior to the expiration of an employee’s probationary period. CHCOs also told us supervisors often do not have enough time to adequately assess an individual’s performance before the probationary period ends, particularly when the occupation is complex or difficult to assess. This can happen for a number of reasons: (1) the occupation is complex and individuals on a probationary period spend much of the first year in training before beginning work in their assigned areas; (2) the occupation is project based and an individual on a probationary period may not have an opportunity to demonstrate all of the skills associated with the position; and (3) individuals on a probationary period often rotate through various offices in the agency, so supervisors have only a limited opportunity to assess their performance. In the past, agencies exempt from provisions of title 5 have sought to address this by extending the probationary period and limiting appeal rights during that time. Unless an agency is exempt, however, allowing it to extend probationary periods beyond 1 year and to limit appeal rights during that period would require legislative action in certain circumstances. CHCOs told us such an extension of the probationary period would provide supervisors with time to make a performance assessment for those occupations that are particularly complex or difficult to assess.
However, they cautioned that such an extension would only be beneficial if an agency had effective performance management practices in place and it used the extra time for the purpose intended. Generally, once an employee has completed a probationary period, if that employee is a poor performer who does not voluntarily leave, an agency is required to follow the procedural requirements under either 5 U.S.C. § 4303 (hereinafter “chapter 43”) or 5 U.S.C. § 7513 (hereinafter “chapter 75”) in order to take an action such as removal. Though the process for dismissal under both authorities shares several common steps, some key differences exist. One key difference under chapter 43 is the employee must be given a formal opportunity to improve. While the law and OPM implementing regulations establish requirements and timeframes for certain steps under chapter 43 dismissal actions, experts representing various agency and employee perspectives told us that the practical implementation of chapter 43 is time consuming and resource intensive. For example, based on the experiences of experts we interviewed, it often takes 50 to 110 days to complete steps associated with the performance improvement period (PIP). Overall, it can take six months to a year (and sometimes significantly longer) to dismiss an employee. Moreover, once an employee is dismissed from his or her agency, he or she may file an appeal with the MSPB. As we report later, it took the MSPB an average of 243 days in 2013 to adjudicate an appeal from start to finish. Figure 1 illustrates an example of the dismissal process under this procedure. The timeframes cited here are not required by statute or regulation. The length of time to address performance problems can vary based on the facts and circumstances of each situation. 
The other option for taking action—chapter 75—is largely similar to chapter 43, but has no formal improvement period and does not require a specific standard of performance to be established and identified in advance. The burden of proof for sustaining a dismissal under chapter 75 is higher than under chapter 43. Depending on the circumstances, the differences between the two approaches make one option preferable over the other for supervisors, according to our interviews and literature research. For example, the formal opportunity to improve provided by chapter 43 makes this option preferable when there is a possibility of the employee improving after receiving additional training or more specific expectations. In contrast, because chapter 75 has no improvement period, it is generally faster and therefore is preferable for agencies when it is unlikely an employee will improve or if the poor performance is in part related to conduct issues. Supervisors, working with agency human resources and legal counsel, have discretion to determine the most appropriate option for dismissing an employee for poor performance. The following table lists examples of circumstances where the use of one authority may be more appropriate than the other. Appendix II provides a full comparison of these two legal authorities for dismissing employees for performance. The process for taking action against a career member of the Senior Executive Service (SES) for a less-than-fully-successful performance rating differs from that for other civil servants. Career executives are removed from the SES for poor performance as provided for by 5 U.S.C. §§ 3592 and 4314(b). 
Agencies are required to (1) reassign, transfer, or remove a senior executive who has been assigned an unsatisfactory performance rating; (2) remove an executive who has been assigned two performance ratings of less than fully successful within a 3-year period; and (3) remove an executive who receives two unsatisfactory ratings within 5 years. Unlike dismissals for performance for non-SES civil servants, most career SES members are not removed from the agency, but rather from the SES only, and they remain employed at a lower grade. Career SES members serve a 1-year probationary period upon initial appointment. Most career executives removed during the probationary period for performance reasons (and all removed after completing it) are entitled to placement in a GS-15 or equivalent position. Removals from the SES for performance reasons may not be appealed to the MSPB. However, non-probationary career executives may request an informal hearing before an official designated by the MSPB. Additionally, an executive who believes the removal action was based on discrimination may file a discrimination complaint with their agency. Or, if an executive believes the removal was based on a prohibited personnel practice, such as reprisal for whistleblowing, they may go to the Office of Special Counsel (OSC) to seek corrective action. From 2009 through 2013, 12 senior executives were removed from the SES for performance reasons. In addition to the procedural requirements agencies must adhere to, federal employees have additional protections designed to ensure that they are not subject to arbitrary agency actions and prohibited personnel actions, such as discrimination and reprisal for whistleblowing. In the event that an agency dismisses an employee for performance reasons, that employee may file an appeal of that agency action with the MSPB. During this appeal, an employee has a right to a hearing before an MSPB administrative judge.
If the employee or agency is unsatisfied with the administrative judge’s initial decision, either may request that the full 3-member board review the matter by filing a petition for review. If the employee is unsatisfied with the final decision of the MSPB, the employee may seek judicial review of that decision, generally with the United States Court of Appeals for the Federal Circuit (Federal Circuit). In the alternative, an employee who is a member of a collective bargaining unit may choose to pursue a grievance under the negotiated grievance procedure, if the appeal has not been excluded from coverage by the collective bargaining agreement. If the matter goes to an arbitrator, judicial review of the arbitration award is also available at the Federal Circuit. Finally, under certain circumstances, judicial review may be sought in United States district court. While these protections are important to ensuring due process, they generally add to the time and resources agencies commit to addressing poor performance, as well as to the overall complexity of the process. Discrimination complaints and allegations of whistleblowing reprisal are redress options available to employees at any time and are not specific to the dismissal process. Allegations of discrimination in dismissal actions may be filed with an agency’s Equal Employment Opportunity office, or under the negotiated grievance procedure, if applicable. Allegations of reprisal for whistleblowing can be made with the OSC. Employees may be more likely to consider such redress options when informed of performance problems or of the possibility for dismissal or demotion, according to experts and our literature review. Appendix III provides more information on appeal avenues available to employees who are dismissed or demoted for poor performance under chapters 43 or 75.
A number of agency supports and constraints may reduce a supervisor’s willingness to pursue dismissal or other action against a poorly performing employee. According to representatives from organizations we interviewed, supervisors may opt against dismissing a poor performer for a variety of reasons, including the following. Internal support. Supervisors may be concerned about a lack of internal support from their supervisors or other internal agency offices involved in the dismissal process. Specifically, upper management may view the supervisor as unable to effectively manage employees, particularly considering that most employees have a history of meeting or exceeding expectations in performance ratings. Our analysis found that employees rarely receive performance ratings that indicate a problem with performance. In 2013, about 8,000 of the nearly 2 million federal employees received “unacceptable” or “less than fully successful” performance ratings. According to one expert we interviewed, senior managers who only have knowledge of an employee’s work history through past performance ratings may tell a supervisor, “None of the previous supervisors had problems with him. Why do you?” In addition, an agency’s personnel office may lack the capacity to provide guidance, or an agency’s general counsel or a senior agency official may be inclined to settle a matter or not pursue a dismissal action because of concern over litigation. According to CHCOs we interviewed, agencies are increasingly settling performance-related actions and discrimination complaints with financial awards, rather than litigating the cases. According to the CHCOs, such financial payouts may provide an incentive to file such appeals and claims, even when they are not valid. Time and resource commitment. As depicted earlier in figure 1, the time commitment for removing an employee under chapter 43 can be substantial.
After communicating performance problems to an employee, a supervisor will likely find it necessary to increase the frequency of the monitoring and documentation he or she conducts and of the feedback sessions he or she provides during the performance improvement period. In turn, this takes time away from other job responsibilities and agency priorities. Supervisory skills and training. Supervisors may lack experience and training in performance management, as well as an understanding of the procedures for taking corrective actions against poor performers. Specifically, supervisors may lack (a) confidence or experience having difficult conversations; (b) skills or training on addressing poor performance, including a basic understanding of the processes under chapters 43 and 75; and (c) knowledge or an understanding of requirements for addressing poor performance under collective bargaining agreements. These factors point to the importance of effective selection, assessment, and development of new supervisors, as well as to the importance of providing refresher training for current supervisors. Legal concerns. Where an employee seeks an avenue of redress concerning a performance-based action, the supervisor who took the action may need to provide depositions and witness statements, attend internal meetings, and meet with attorneys and union representatives for an extended period of time. Supervisors may be concerned about appeals, grievances, or discrimination complaints if the topic of poor performance is broached. In 2013, agencies dismissed 3,489 employees for performance or a combination of performance and conduct, representing 0.18 percent of the career permanent workforce. Agencies most often dismissed employees for performance reasons during the probationary period.
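As a rough consistency check (not a calculation from the report itself), the 2013 dismissal count and the reported rate together imply a career permanent workforce of roughly 1.9 million, in line with the "nearly 2 million" federal employees cited above:

```python
# Back-of-envelope check; both inputs are rounded figures from the report.
dismissals_2013 = 3489      # dismissals for performance or performance plus conduct
reported_rate = 0.18 / 100  # reported share of the career permanent workforce

implied_workforce = dismissals_2013 / reported_rate
print(f"{implied_workforce:,.0f}")  # 1,938,333 -- roughly 1.9 million employees
```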
As noted earlier, dismissing employees during probation is much less time and resource intensive than doing so once they are made permanent and the procedural and appeal provisions of chapter 43 or 75 come into play. As shown in figure 2, dismissals for performance occurred more frequently for employees in probationary periods. Over the last 10 years (2004-2013), the number of individuals dismissed for performance or a combination of performance and conduct ranged from a low of 3,405 in 2006 to a high of 4,840 in 2009. On average, around 4,000 individuals were dismissed for performance-related reasons annually. The rate of dismissals for individuals in the career permanent workforce (2004-2013) ranged from a low of 0.18 percent in 2013 to a high of 0.27 percent in 2009. Trends in performance dismissals since 2004 are associated with fluctuations in the number of probationary employees. Most employee dismissals for performance took place during the probationary period in each year from 2004 to 2013. The general increase in new hires from 2006 through 2010 is associated with the number of probationary dismissals from 2007 through 2011. As hiring and the number of new employees slowed after 2010, so too did the number of dismissals during probation. As an alternative to dismissal, agencies may demote or reassign employees for poor performance. Agencies reassigned 652 employees for performance-related reasons in 2013, with nearly all following an unacceptable performance rating. (A reassignment is defined as the change of an employee from one position to another without promotion or change to lower grade, level, or band.) According to our interviews and literature review, reassignment is considered appropriate when (1) the employee is willing to improve and does not have conduct or delinquency issues contributing to his or her performance issues, and (2) the reasons the employee failed in one position are not likely to cause him or her to fail in the next job.
There were 168 demotions for performance reasons in 2013, including 58 for an employee’s failure to successfully complete the supervisory or managerial probationary period. As noted above, dismissing employees is and should be a last resort in performance management. Identifying and addressing poor performance “upstream” in the performance management process may result in outcomes that are more desirable than dismissal, most notably improved performance, but also the employee moving to a different position that might be a better fit or voluntarily leaving the agency. The extent to which cases of employee poor performance result in these outcomes is not known. As mentioned earlier, when the employee cannot perform the work, the employee voluntarily leaving the agency can be the most favorable outcome for both the agency and the employee. Our analysis of OPM data found more than 2,700 cases of employees voluntarily leaving in 2012 after receiving a “less than fully successful” (or lower) performance rating at any point from 2010 to 2012. These cases most likely undercount the number of employees voluntarily leaving for performance reasons because many employees who have performance problems never receive a “less than fully successful” (or lower) performance rating, and performance ratings may be expunged as part of an agreement to voluntarily leave. However, sufficient data do not exist to determine how many employees have voluntarily left federal service for performance reasons. Because voluntary retirements or resignations result in the employee leaving without formally having a personnel action taken against him or her, it is not possible to determine from available OPM data the universe of employees voluntarily resigning or retiring for performance-related reasons.
However, according to experts we interviewed, such separations happen “all the time.” One CHCO we interviewed estimated that a large majority of his agency’s performance-related separations would be considered voluntary retirements or resignations, and other CHCOs agreed that employees with performance issues are more likely to voluntarily leave than go through the dismissal process. While an “unacceptable” performance rating sends a strong signal to the employee that the agency is going to take action for performance reasons, receiving an “unacceptable” performance rating is not necessarily an indicator that an employee will either be formally dismissed or will voluntarily leave. Of the 2,001 employees receiving an “unacceptable” performance rating in 2009, 1,104 (55 percent) remained employed with the same agency in 2013, while 897 (45 percent) were no longer with the agency. Those remaining with the agency may have improved their performance or may have been reassigned within the agency. While agencies rarely use chapter 43 to dismiss employees, of the 280 employees dismissed under this authority in 2013, 125 (45 percent) had appeals processed by the MSPB. As noted above, on average, it took 243 days to complete the appeal process for initial appeals of dismissals that were affirmed. In cases where a decision is rendered, the agency’s decision to dismiss is usually affirmed. In 2013, 18 cases were affirmed in the agency’s favor and 4 were reversed in the employee’s favor. Thirty-six cases were dismissed in 2013. Cases may be dismissed for a variety of reasons, including lack of jurisdiction, lack of timeliness, withdrawal by the appellant, or failure to prosecute. Sixty-seven of the 125 appeals in 2013 were resolved through settlement, a process whereby both the agency and the employee come to a mutual agreement prior to the case being heard or decided by the MSPB. If at all possible, the MSPB encourages settlements between parties.
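The disposition figures above account for all 125 chapter 43 dismissal appeals, which simple arithmetic confirms:

```python
# 2013 dispositions of initial chapter 43 dismissal appeals, as reported above.
outcomes = {
    "affirmed in the agency's favor": 18,
    "reversed in the employee's favor": 4,
    "case dismissed (jurisdiction, timeliness, withdrawal, failure to prosecute)": 36,
    "resolved through settlement": 67,
}
print(sum(outcomes.values()))  # 125 -- the categories account for every appeal
```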
According to government lawyers we interviewed, employees and agencies have a number of potential settlement options available related to cases involving poor performance. They include expunging poor appraisal ratings in return for the employee separating from the agency and waiving further appeal rights, provision of employment references that do not provide a prospective employer with negative information about the employee, agency payment of the employee’s attorney’s fees, provisions relating to unemployment compensation, confidentiality clauses, resignation agreements, and reassignments. Figure 4 shows how the MSPB resolved initial dismissal appeals taken under chapter 43 in 2013. Taking action to address poor performance is challenging for agencies, due to time and resource intensity, lack of supervisory skill and training, and other factors (as described earlier). As a result, tools and guidance are needed to help agencies manage employee performance and to navigate dismissal processes. To meet its strategic goal of enhancing the integrity of the federal workforce, OPM provides guidance, tools, and training to help agencies attain human capital management goals. In addition to its regulations, OPM makes a range of different tools and guidance available to help agencies address poor performance through multiple formats, including through its website, webinars, webcasts, in-person training, guidebooks, and through one-on-one assistance and consultation with agencies, according to OPM officials. Appendix IV provides some examples of the tools and guidance OPM developed to help agencies address poor performance. Our interviews with individuals who have expertise in performance management issues indicated that improvements could be made in the tools and guidance OPM produces on poor performance to better meet their needs, including the following areas: Improvements in Content. 
Multiple experts we spoke with told us the content of OPM’s training and guidance seemed to be written for human resources (HR) officials or lawyers, rather than supervisors. According to one expert, “An average manager will not be able to understand what the guidance means if they don’t have time to continuously go to their HR office for assistance.” According to OPM, its guidance is often written for HR officials charged with assisting supervisors in addressing poor performance. We have recently reported, however, that HR offices often lack the capacity for assisting in performance management-related activities. Instead, they are focused on transactional human resource activities such as verifying benefits and processing personnel actions. Because of this, tools and guidance developed for HR officials may not be reaching the supervisors who need them. Improvements in Outreach. CHCOs and organizations representing federal employees and supervisors told us they were unaware of the tools and guidance OPM produces on the topic of managing poor performance. One group told us that a critical gap in training for managers exists, and that “none of the individuals we work with know about [these tools].” According to CHCOs we interviewed, some supervisors may lack awareness in part because they lack interest in performance management in general and do not seek out tools. The CHCOs said there is a role for both the agencies and OPM in reinforcing the critical importance of effective performance management amongst supervisors. Improvements in Format. OPM’s tools and guidance are generally posted online or as hard-copy guide books. Both of these methods cost-effectively disseminate information to a broad audience and can be used by employees when their schedule allows.
At the same time, experts we spoke with said addressing poor performance is more effectively taught in a classroom setting, as it is a sensitive topic where the most practical information is gleaned from fellow class participants. According to one expert, “The topic of dealing with poor performers demands interaction amongst participants.” OPM told us that developing and promoting tools and guidance can be costly and that resources available for that purpose are highly constrained. OPM has previously acknowledged that it could do more to better assess the tools and guidance it produces. It is also a challenge to decide what topics to address, particularly as there are frequently changes in human capital initiatives or in topic areas that take precedence. Regular meetings with senior OPM officials, use of training evaluation and feedback forms, and informal feedback from the CHCO Council will help to inform OPM of the tools and guidance to provide. However, agencies are not always aware of this material and in some cases it falls short of their needs. Going forward, it will be important for OPM to fully leverage existing information sources (such as survey results) to inform decisions on what material to develop and how best to distribute it. According to OPM, the Employee Services group will deploy a comprehensive strategic human capital management needs survey that will be distributed to the CHCO Council. The survey will be designed to directly solicit information from human capital professionals about what relevant tools, guidance, and resources will benefit their human capital management processes. The survey is also intended to help OPM develop and provide suggested tools. Deployment is planned for the summer of 2015.
While these plans are an important step in helping to ensure agencies get the tools and guidance they need, OPM is not fully leveraging information provided by two existing sources to help prioritize the tools and guidance it develops: the 2014 Federal Employee Viewpoint Survey (FEVS) and the Performance Appraisal Assessment Tool (PAAT), a voluntary self-assessment tool agencies can use to assess the strength of their performance appraisal system. In FEVS, performance management-related questions receive some of the lowest positive scores in the survey, but OPM told us respondents may not have sufficient information to answer these questions. These questions cover topics such as the extent to which employees believe their supervisors are effectively addressing poor performers and whether differences in performance are recognized in a meaningful way. With respect to the PAAT, agencies identified areas of strength and weakness in their performance appraisal programs. For example, the PAAT includes information on topics such as how often supervisors are required to hold feedback sessions with employees, an important avenue for dealing with poor performance. It also includes information about how agencies deal with unacceptable performance, including the number of PIPs, performance-based dismissals, reassignments, and reductions-in-grade. Agencies’ responses provide some insight into their own strengths and weaknesses as well as into the topics where additional tools and guidance could be more effectively targeted government-wide. Agencies may submit their PAAT results to OPM. However, OPM told us that it was not using these responses to inform the development of resources that would help agencies better address poor performers. The process for dismissing an employee after the probationary period ends can be complex and lengthy. But many of these process challenges can be avoided or mitigated with effective performance management.
Supervisors who take performance management seriously and have the necessary training and support can help poorly performing employees either improve or realize they are not a good fit for the position. We found that a number of employees voluntarily resign after receiving negative performance feedback. The probationary period for individuals entering the federal service is the ideal time to remove those who cannot do the work required of the position, but this period could be more effectively used by agencies. Given the number of issues agencies can encounter when addressing poor performance after the probationary period ends, improving how the probationary period is used could help agencies more effectively deal with poor performers. Effectively addressing poor performance has been a long-standing government-wide challenge. OPM has a role in ensuring that agencies have the tools and guidance they need to effectively address poor performance and to maximize the productivity of their workforces. Though OPM already provides a variety of tools, guidance, and training to help agencies address performance management issues, more can be done to leverage priority information and to make tools and guidance available for agencies when and where they need it. To help strengthen the ability of agencies to deal with poor performers, we recommend that the Director of OPM, in conjunction with the CHCO Council and, as appropriate, with key stakeholders such as federal employee labor unions, take the following four actions: 1. To more effectively ensure that agencies have a well-qualified cadre of supervisors capable of effectively addressing poor performance, determine if promising practices at some agencies should be more widely used government-wide. 
Such practices include (1) extending the supervisory probationary period beyond 1 year to include at least one full employee appraisal cycle; (2) providing detail opportunities or rotational assignments to supervisory candidates prior to promotion, where the candidate can develop and demonstrate supervisory competencies; and (3) using a dual career ladder structure as a way to advance employees who may have particular technical skills and/or education but who are not interested in or inclined to pursue a management or supervisory track. 2. To help ensure supervisors obtain the skills needed to effectively conduct performance management responsibilities, assess the adequacy of leadership training that agencies provide to supervisors. 3. To help supervisors make effective use of the probationary period for new employees: educate agencies on the benefits of using automated notifications to notify supervisors that an individual’s probationary period is ending and that the supervisor needs to make an affirmative decision or otherwise take appropriate action, and encourage its use to the extent it is appropriate and cost-effective for the agency; and determine whether there are occupations in which—because of the nature of work and complexity—the probationary period should extend beyond 1 year to provide supervisors with sufficient time to assess an individual’s performance. If determined to be warranted, initiate the regulatory process to extend existing probationary periods and, where necessary, develop a legislative proposal for congressional action to ensure that formal procedures for taking action against an employee for poor performance (and a right to appeal such an action) are not afforded until after the completion of any extended probationary period. 4.
To help ensure OPM’s tools and guidance for dealing with poor performers are cost-effectively meeting agencies’ and supervisors’ needs, use SHCM survey results (once available), FEVS results, PAAT responses, and other existing information, as relevant, to inform decisions on content and distribution methods. The importance of effective performance management and addressing poor performance may need to be reinforced with agency supervisors so that they more routinely seek out tools and guidance. We provided a draft of this product to the Director of OPM and Chairman of MSPB for comment. Written comments were provided by OPM’s Associate Director for Employee Services, and are reproduced in appendix V. Of our four recommendations, OPM concurred with one recommendation, partially concurred with two recommendations, and partially concurred with part of a fourth recommendation. OPM did not concur with the first part of this latter recommendation. For those recommendations OPM concurred or partially concurred with, OPM described the steps it planned to take to implement them. OPM and the Executive Director of MSPB also provided technical comments, which we incorporated as appropriate. OPM concurred with our recommendation to assess the adequacy of leadership training for supervisors. Specifically, OPM noted that it will evaluate how agencies are training new supervisors and provide agencies guidance on evaluating the effectiveness of leadership training. OPM partially concurred with our recommendation to determine if promising practices at some agencies should be more widely used government-wide. Importantly, OPM agreed to work with the CHCO Council to (1) determine if technical guidance is needed to help agencies more effectively use the supervisory probationary period, (2) explore more government-wide use of rotational assignments, and (3) discuss options for employees to advance without taking on supervisory or managerial duties.
In each of these cases, OPM noted that agencies already have authority to take these actions. We acknowledge OPM’s point and have clarified the report accordingly. We maintain, however, that OPM can still play a leadership role and encourage agencies to take these steps. Our recommendation for OPM to take steps to help supervisors make effective use of the probationary period for new employees contained two parts. OPM partially concurred with the part of the recommendation calling on OPM to determine if certain occupations require a probationary period longer than 1 year to allow supervisors sufficient time to assess an individual’s performance. In particular, OPM agreed to consult with stakeholders to determine, among other things, if an extension to the probationary period for certain complex occupations is needed and, if necessary, pursue the established Executive Branch deliberation process for suggesting legislative proposals. OPM noted that it has authority to provide for longer probationary periods under certain circumstances and we have modified the recommendation so that it also calls on OPM to initiate the regulatory process to do so if warranted. As stated in our report, however, extending the probationary period and concurrently limiting appeal rights during that time would require legislative action under certain circumstances. At the same time, OPM did not concur with the part of the recommendation for OPM to determine the benefits and costs of providing automated notifications to supervisors that an individual’s probationary period is ending and that the supervisor needs to make an affirmative decision. OPM stated that choosing the best method to ensure that supervisors are aware that the probationary period is ending and appeal rights will accrue is an agency responsibility. We agree. OPM also wrote that HR systems at all Shared Service Centers have the functionality to notify supervisors when an employee’s probationary period is ending.
However, as our report notes, even though OPM considers having a tool in place to notify supervisors that a probationary period is ending to be a leading practice, not all agencies have implemented that practice. Accordingly, we have clarified the recommendation so that it calls on OPM to educate agencies on the benefits and availability of automated notifications to alert supervisors. OPM partially concurred with our recommendation to use the results of various surveys such as the FEVS and other information sources to help determine the extent to which its tools and guidance for dealing with poor performers are cost-effectively meeting agencies’ needs. Specifically, OPM said it would use relevant data from these resources to inform decisions about content and distribution methods for the material OPM makes available to agencies. At the same time, OPM noted that the information contained in these surveys and other data sources had certain limitations and may not always be relevant. We agree and have clarified the recommendation accordingly. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Director of the Office of Personnel Management, the Chairman of the Merit Systems Protection Board, as well as to the appropriate congressional committees and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report please contact me at (202) 512-2757 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VI. We were asked to examine the rules and trends relating to the review and dismissal of employees for poor performance. 
Our objectives were to (1) describe and compare avenues for addressing poor performance, including the formal procedures required when dismissing employees for poor performance; (2) describe issues that can affect an agency’s response to poor performance; (3) determine trends in dismissals and other agency actions taken for poor performance since 2004; and (4) assess the extent to which OPM provides the policy, guidance, and training that agencies say they need to address poor performance. To describe and compare avenues for addressing poor performance, including the formal procedures required when dismissing employees for poor performance, we reviewed relevant sections of title 5 of the United States Code (title 5) and Office of Personnel Management (OPM) regulations to describe the process for addressing poor performance in the competitive, excepted, and Senior Executive services. We analyzed the process for taking personnel actions for poor performance under chapter 43 and chapter 75 of title 5, including when use of one authority over the other may be preferable in certain circumstances. 
To determine how agencies are addressing poor performance and to understand the practical issues various agency employees consider when addressing poor performance, we interviewed OPM officials from the Merit System Accountability and Compliance Office, Office of Employee Services, and other offices that work with agencies to address poor performance; the Merit Systems Protection Board (MSPB), including the Executive Director, representatives from the Office of Regional Operations, the Office of Appeals Counsel, and an administrative judge; selected chief human capital officers (CHCO) chosen for their particular expertise in the issue area as identified through the Executive Director’s Office of the CHCO Council and previous GAO work on related topics; the National Treasury Employees Union; the American Federation of Government Employees; the Federal Managers Association; individual members of the Federal Employees Lawyers Group; the Partnership for Public Service; and the Senior Executives Association. Additionally, we interviewed selected experts from academia and the private sector, including Dr. Dennis Daley, Professor of Public Administration, North Carolina State University, School of Public and International Affairs; Dr. Ellen Rubin, Assistant Professor at Rockefeller College of Public Affairs & Policy, University at Albany, State University of New York; Stewart Liff, author of Improving the Performance of Government Employees: A Manager’s Guide (2011) and The Complete Guide to Hiring and Firing Government Employees (2010); and Robin Wink, Esq., who teaches a seminar “Managing the Federal Employee: Discipline and Performance Process.” Their expertise was determined by a review of their published materials or training they provide on the topics of performance management and addressing poor performance. We also conducted a literature review.
To determine trends in dismissals and other agency actions taken for poor performance since 2004, we analyzed data from OPM’s Enterprise Human Resources Integration (EHRI) data warehouse for fiscal years 2004 through 2013, the most recent year available. We analyzed EHRI data starting with fiscal year 2004 because personnel data for the Department of Homeland Security (which was formed in 2003) had stabilized by 2004. Personnel actions, such as separations, demotions, and reassignments are assigned Nature of Action (NOA) and legal authority codes that describe the action and the legal or regulatory authority for the action. We reviewed OPM’s “The Guide to Processing Personnel Actions” to determine which NOA/legal authority combinations are associated with performance-related dismissals, demotions, or reassignments, and with conduct-related dismissals and we confirmed these codes with OPM. In some cases, NOA/legal authority combinations could cover both performance and conduct. In these cases, we counted the action as performance-related only so that a) we would most accurately capture the magnitude of actions taken for performance in the government, and b) avoid double counting dismissals. Thus, some cases counted exclusively as a performance action may have elements of conduct as well. To identify individuals with poor performance who voluntarily retired or resigned before action was taken against them, we counted separation actions for voluntary retirement or resignations and retirements or resignations in lieu of involuntary action where there was a corresponding unacceptable performance rating within the separation year or year prior to separation. 
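The classification rule described above can be illustrated with a short sketch. This is only a simplified illustration: the NOA and legal authority codes below are placeholders, not the actual code combinations from OPM's "Guide to Processing Personnel Actions."

```python
# Illustrative sketch of the classification rule described above.
# The NOA/legal-authority combinations here are hypothetical placeholders,
# not actual codes from OPM's "Guide to Processing Personnel Actions".

PERFORMANCE_ONLY = {("356", "UCA"), ("357", "UCB")}      # hypothetical
CONDUCT_ONLY = {("330", "VSB")}                          # hypothetical
PERFORMANCE_OR_CONDUCT = {("385", "VAJ")}                # could be either

def classify_action(noa_code, legal_authority):
    """Classify a personnel action as performance- or conduct-related.

    Combinations that could reflect either performance or conduct are
    counted as performance-related only, so the magnitude of performance
    actions is captured without double counting dismissals.
    """
    key = (noa_code, legal_authority)
    if key in PERFORMANCE_ONLY or key in PERFORMANCE_OR_CONDUCT:
        return "performance"
    if key in CONDUCT_ONLY:
        return "conduct"
    return "other"

# Tally a small set of hypothetical actions.
actions = [("356", "UCA"), ("330", "VSB"), ("385", "VAJ"), ("100", "AAA")]
counts = {}
for noa, auth in actions:
    label = classify_action(noa, auth)
    counts[label] = counts.get(label, 0) + 1
```

The key design choice mirrors the text: an ambiguous combination is assigned to exactly one bucket (performance) so a single dismissal never appears in both tallies.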
To examine attrition patterns for employees who received unacceptable performance ratings, we tracked the status of employees who received an unacceptable performance rating in 2008 to determine how many were dismissed and when, how many voluntarily left the government and when, and how many remained in the government as of 2013. There are some data reliability limitations with the rating field. While ratings generally reflect recent performance, there can be some variation. Not all rating periods are the same across the agencies and they may not align with the fiscal year, there may be lags in agencies’ updates of ratings, and some ratings are never updated. Consequently, we looked at recorded ratings for the past three years to develop a somewhat more comprehensive picture of employees’ performance ratings. To assess the reliability of EHRI data, we reviewed past GAO assessments of EHRI data, interviewed OPM officials knowledgeable about the data, and conducted electronic testing of EHRI to assess the accuracy and completeness of the data used in our analyses. We reviewed MSPB data and interviewed officials to determine the number of employee appeals for actions based on performance, the outcomes of the cases, and how long it took to resolve those cases. We determined the data used in this report to be sufficiently reliable for our purposes. To assess the extent to which OPM provides policy, guidance, and training to help agencies address poor performance, we reviewed guidance and tools that OPM provides to agencies to assist them in addressing poor performance. We compared the content of OPM tools and guidance to what CHCOs, key stakeholders, and experts said is needed. We compared documentation of the guidance and tools that OPM provides to agencies to the challenges articulated by CHCOs, key stakeholders, and experts.
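The cohort-tracking analysis described above can be sketched in a few lines. This is a minimal illustration using made-up employee records, not the actual EHRI analysis: each member of a hypothetical base-year cohort with an unacceptable rating is checked against later separation records to see whether the employee was dismissed, left voluntarily, or remained.

```python
# Illustrative sketch of the cohort-tracking analysis described above,
# using hypothetical records rather than actual EHRI data.

cohort = {"E1", "E2", "E3"}  # hypothetical employees rated unacceptable in 2008

# (employee_id, fiscal_year, separation_type) -- hypothetical records
separations = [
    ("E1", 2010, "performance dismissal"),
    ("E2", 2011, "resignation"),
    # E3 has no separation record, i.e., still employed as of 2013
]

def cohort_outcomes(cohort, separations):
    """Summarize outcome and year for each employee in a rating cohort."""
    # Default: no separation record means the employee remained.
    outcome = {emp: ("remained", None) for emp in cohort}
    for emp, year, sep_type in separations:
        if emp in cohort:
            kind = "dismissed" if "dismissal" in sep_type else "left voluntarily"
            outcome[emp] = (kind, year)
    return outcome

outcomes = cohort_outcomes(cohort, separations)
```

In the real analysis the lookback window on ratings (three years, as noted above) and the variation in agency rating cycles would complicate the matching; this sketch shows only the basic join of a cohort to later separation actions.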
We interviewed OPM officials about mechanisms they use to (1) collect information to develop tools and guidance, and (2) collect feedback from agencies about the usefulness of existing guidance and tools. We also reviewed documentation and interviewed OPM officials on its Performance Appraisal Assessment Tool. We conducted this performance audit from February 2014 through January 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Chapter 43: Agency must prove the performance deficiency is in a critical element.
Chapter 75: Agency is not required to prove the performance deficiency is in a critical element.

Chapter 43: When the employee’s performance in one or more critical elements is unacceptable, the employee will (1) be notified of the deficiency; (2) be offered the agency’s assistance to improve; and (3) be warned that continued poor performance could lead to a change to lower grade or removal. (This is commonly referred to as the PIP, an abbreviation for both performance improvement plan and for performance improvement period.)
Chapter 75: The extent to which an employee is on notice of the agency’s expectations is a factor in determining the appropriateness of the penalty. Also, an agency cannot require that an employee perform better than the standards that have been communicated to the employee.

Chapter 43: If the employee’s performance improves during the PIP, and remains acceptable for 1 year, a new PIP is necessary before taking an action under this chapter.
Chapter 75: There is no obligation to offer a period of improvement at any point.

Chapter 43: Agency is not required to prove that the personnel action will promote the efficiency of the service.
Chapter 75: Agency must prove that the personnel action will promote the efficiency of the service.

Chapter 43: Action must be supported by substantial evidence: that a reasonable person might find the evidence supports the agency’s findings regarding the poor performance, even though other reasonable persons might disagree.
Chapter 75: Action must be supported by a preponderance of the evidence: that a reasonable person would find the evidence makes it more likely than not that the agency’s findings regarding the poor performance are correct.

Both chapters: The agency must provide a notice of proposed action 30 days before any action can be taken, and must provide the employee with a reasonable opportunity to reply before a decision is made on the proposal.
Chapter 43: The notice must state the specific instances of unacceptable performance that are the basis for the action and also the critical performance element involved.
Chapter 75: The notice must state the specific instances of poor performance that are the basis for the action.

Chapter 43: A person higher in the chain of command than the person who proposed the action must concur.
Chapter 75: The deciding official does not have to be a person higher in the chain of command than the person who proposed the action.

Chapter 43: Agency must issue a final decision within an additional 30 days of the expiration of the 30-day advance notice period.
Chapter 75: Agency is under no particular time constraint, other than there cannot be a delay so extensive that it constitutes an error that harms the employee.

Chapter 43: Once the agency meets the requirements to take an action, the MSPB cannot reduce the agency’s penalty.
Chapter 75: After finding that the agency meets the requirements to take a chapter 75 action, the MSPB may reduce the agency’s penalty.

Chapter 43: The Douglas factors are not used.
Chapter 75: The agency must consider the relevant Douglas factors when reaching a decision on the appropriate penalty. (Douglas factors are established criteria that supervisors must consider in determining an appropriate penalty to impose to address problems with an employee.)
Affirmative defenses (applicable under both chapters): The agency action will not be sustained if the employee was harmed by the agency’s failure to follow procedures, if the agency decision was reached as a result of the commission of a prohibited personnel practice, or if the decision is otherwise not in accordance with the law. Set forth below are the basic appeal avenues available to employees who are removed or demoted for poor performance pursuant to chapters 43 or 75. In addition to the appeal avenues discussed below, other appeal options are available to employees removed or demoted for poor performance. For example, while probationary employees are generally unable to appeal a removal or demotion to the Merit Systems Protection Board (MSPB), those in the competitive service may do so if they believe the agency action was based on partisan political reasons or due to the employee’s marital status. Furthermore, any employee may file an Equal Employment Opportunity (EEO) complaint with his or her agency if the employee believes that the removal or demotion was motivated by unlawful employment discrimination, regardless of whether the employee has due process or appeal rights. Similarly, any employee who believes his or her demotion or removal was the result of a prohibited personnel practice, such as retaliation for whistleblowing, may go to the Office of Special Counsel (OSC) to seek corrective action. Chapters 43 and 75 provide that an employee with appeal rights who wants to contest an agency decision to remove or demote may file an appeal of that agency decision with the MSPB. If that employee is a member of a collective bargaining unit, the employee also has the option of pursuing a grievance under a negotiated grievance procedure if the appeal has not been excluded from coverage by the collective bargaining agreement. The employee may pursue either option, but not both.
If an employee chooses to appeal his or her removal or demotion to the MSPB, the employee must do so within 30 days after the effective date of the agency action or receipt of the agency’s decision (to remove or demote), whichever is later. An employee who files an appeal with the MSPB has a right to a hearing. In a performance-based removal or demotion taken under chapter 43, an agency must establish that (1) OPM approved the agency’s performance appraisal system, (2) the agency communicated to the employee the performance standards and critical elements of his or her position, (3) the employee’s performance standards are valid (performance standards are not valid if they do not set forth the minimum level of performance that an employee must achieve to avoid removal for unacceptable performance), (4) the agency warned the employee of the inadequacies of his or her performance during the appraisal period and gave the employee a reasonable opportunity to improve, and (5) the employee’s performance remained unacceptable in at least one critical element. White v. Department of Veterans Affairs, 120 M.S.P.R. 405 (2013). In a removal or demotion action taken under chapter 75, an agency must establish that the action will “promote the efficiency of the service.” A specific standard of performance does not need to be established and identified in advance for the employee; rather, an agency must prove that its measurement of the employee’s performance was both accurate and reasonable. Shorey v. Department of the Army, 77 M.S.P.R. 239 (1998); Graham v. Department of the Air Force, 46 M.S.P.R. 227 (1990) (agency contention that “basic medical care” was performance standard for physician was not unreasonable). 
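The filing deadline described above (30 days after the effective date of the agency action or receipt of the agency's decision, whichever is later) can be expressed as a small date calculation. This is a simplified sketch for illustration only; MSPB regulations govern exactly how filing time is computed, and the example dates are hypothetical.

```python
from datetime import date, timedelta

def mspb_appeal_deadline(effective_date: date, decision_receipt_date: date) -> date:
    """Deadline to file an MSPB appeal: 30 days after the later of the
    action's effective date or receipt of the agency's decision.

    Simplified sketch; actual MSPB rules on computing filing time
    (e.g., treatment of service dates) are more detailed.
    """
    trigger = max(effective_date, decision_receipt_date)
    return trigger + timedelta(days=30)

# Hypothetical example: removal effective June 1, 2013; the employee
# received the agency's decision on June 5, 2013. The later date
# (June 5) starts the 30-day clock.
deadline = mspb_appeal_deadline(date(2013, 6, 1), date(2013, 6, 5))
```

The "whichever is later" rule reduces to taking the maximum of the two trigger dates before adding the 30-day window.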
While it is within an agency’s discretion to take an action under chapter 75 rather than chapter 43, an agency taking an action under chapter 75 may not circumvent chapter 43 by asserting that an employee should have performed better than the standards communicated to the employee. Lovshin v. Department of the Navy, 767 F.2d 826 (Fed. Cir. 1985), cert. denied, 475 U.S. 1111 (1986), reh. denied, 476 U.S. 1189 (1986). An employee subject to a removal or demotion action under chapter 75 has no right to a performance improvement period and the failure to afford an employee one is not grounds for reversing the agency action. However, an agency’s failure to provide such a period is relevant to the consideration of whether the penalty (removal or demotion) is reasonable; specifically, whether or not the employee was on notice that the deficient performance might be the basis for an adverse action. Fairall v. Veterans Administration, 844 F.2d 775 (Fed. Cir. 1987); Madison v. Defense Logistics Agency, 48 M.S.P.R. 234 (1991). In an initial decision issued by the MSPB administrative judge, a removal or demotion taken under chapter 43 will be sustained if the agency’s decision is supported by substantial evidence or, in a case brought under chapter 75, is supported by a preponderance of the evidence. However, even where the burden of proof is met, if the employee shows harmful error in the agency procedure used in arriving at the decision, or that the decision was based on a prohibited personnel practice, the agency decision may not be sustained. The initial decision becomes final 35 days after issuance, unless a party requests that the full 3-member board (the Board) review the matter by filing a petition for review. OPM may also file a petition for review, but only if OPM believes the opinion is erroneous and will have a substantial impact on civil service law, rule, or regulation.
If the Board grants the petition for review (for example, where new and material evidence is available or the decision is based on an erroneous interpretation of law), the Board may affirm, reverse, or vacate the initial decision (in whole or in part), may modify the decision, or may send the matter back to the administrative judge for further processing. An employee (but not the agency) may obtain judicial review of a final MSPB decision at the United States Court of Appeals for the Federal Circuit (hereinafter referred to as the Federal Circuit) by filing a petition for review within 60 days of the final Board action. Under certain limited circumstances, OPM may also obtain review at the Federal Circuit. However, if OPM did not intervene in the matter before the MSPB, then OPM must first petition the MSPB for a reconsideration of its decision before petitioning the Federal Circuit for review. The Federal Circuit reviews and sets aside agency action, findings, or conclusions found to be (1) arbitrary, capricious, an abuse of discretion, or otherwise not in accordance with law; (2) obtained without procedures required by law, rule, or regulation having been followed; or (3) unsupported by substantial evidence. 5 U.S.C. § 7703(c). If the employee files a grievance under a negotiated grievance procedure, and the parties are not able to resolve the matter, the exclusive representative or the agency may invoke binding arbitration. The employee cannot invoke arbitration. An arbitrator is to adhere to the same burdens of proof for sustaining agency actions under chapter 43 or 75 as are required if appealed at the MSPB. Judicial review of an arbitrator's award, as with a final MSPB decision, may be obtained at the Federal Circuit. The Federal Circuit review is conducted in the same manner and under the same conditions as if the matter had been decided by the MSPB.
Where an employee with appeal rights under chapter 43 or 75 believes that unlawful discrimination motivated his or her removal or demotion, the employee may choose to file a discrimination complaint with his or her agency (referred to as a “mixed-case complaint”) or may file an appeal with the MSPB (referred to as a “mixed-case appeal”). If the employee is a member of a collective bargaining unit, the employee also has the option of pursuing a grievance alleging discrimination under the negotiated grievance procedure where such appeals have not been expressly excluded from coverage by the collective bargaining agreement. The employee may either pursue a mixed case (complaint or appeal) or a negotiated grievance procedure, but not both. Where an employee chooses to pursue a mixed-case complaint and has filed a complaint of discrimination, an agency has 120 days from the filing of the complaint to issue a final decision on that complaint of discrimination. If the decision is not issued timely, the employee may appeal to the MSPB at any time after the expiration of the 120 days. Or, if the employee is dissatisfied with a final agency decision, the employee may appeal to the MSPB within 30 days of receipt of the decision. Instead of filing an appeal with the MSPB, the employee also has the option of filing a civil action in district court. Filing an action in district court results in a de novo review. Where an employee chooses to pursue a mixed-case appeal, the employee must file with the MSPB within 30 days after the effective date of the removal or demotion action. If the employee appeals to the MSPB—either under a mixed-case complaint or a mixed-case appeal—the appeal is to be processed in accordance with MSPB’s appellate procedures (including a right to a hearing), and a decision must be rendered by MSPB within 120 days after the appeal is filed. Within 30 days after receiving a final MSPB decision, an employee has the choice of petitioning the U.S.
Equal Employment Opportunity Commission (EEOC) to consider the MSPB decision or filing a civil action in district court. If the employee petitions the EEOC, the EEOC shall determine within 30 days whether to consider the MSPB decision. If the EEOC determines to do so, it has 60 days to consider the MSPB record of the proceedings and either (1) concur in the Board decision or (2) issue an EEOC decision which finds that the Board decision incorrectly interpreted applicable discrimination law or that the decision is not supported by the evidence. If the EEOC concurs with the MSPB decision, the employee may file a civil action in district court. If the EEOC issues its own decision, the matter is then immediately referred back to the MSPB, which has 30 days to consider the decision. The MSPB may either (1) concur with the EEOC’s decision or (2) find that the EEOC decision incorrectly interprets civil service provisions or that the record does not support the EEOC’s decision as to such provisions, and reaffirm its initial decision. If the MSPB reaffirms its decision, the matter goes to a special panel, which has 45 days in which to issue a final decision. The employee may file a civil action in district court if dissatisfied with the special panel decision. Where an employee chooses to pursue a negotiated grievance procedure that results in an arbitration decision, the employee (but not the agency), if dissatisfied with the arbitrator’s decision, may request, within 35 days of the decision, that the MSPB conduct a review of that decision. The Board may require additional development of the record, through submissions of evidence or a hearing. If not satisfied with the results of the MSPB review decision, the employee may continue with the administrative and judicial appeal process provided for mixed-case appeals under 5 U.S.C. § 7702, described above.
Format: Hard copy guide and online at Human Resources University (free)
Description: This guidebook for supervisors describes the legal process for taking action against an employee for poor performance, provides answers to frequently asked questions, and provides samples of documents provided by a supervisor to an employee at different stages in the process of addressing performance problems.

Description: This course provides an overview and tools for dealing with poor performing employees. The course material includes information on communicating performance matters to employees, developing a performance improvement plan, and taking corrective and legal action when performance continues to decline.

Format: Online (free)
Description: The goal of this course is to provide supervisors with the necessary skills to have the difficult conversation that is inherent in dealing with poor performance and to provide a safe environment in which to practice delivering difficult conversations.

Format: Online (free)
Description: This course is intended to enhance Merit System Principles awareness and understanding among managers throughout the Federal Government.

OPM’s website provides agencies with guidance on addressing poor performance, including a glossary of terms and concepts used when taking performance-based actions, an overview of employee appeal options for performance-based actions, and guidance on how to write valid performance standards for employees, among other topics. OPM provides assistance in response to inquiries on how to address and resolve poor performance. OPM does not become involved in the details of specific cases but will provide agency HR officials or managers with assistance regarding commonly asked questions that arise during the process. The PAAT is designed to help agencies develop and manage performance appraisal programs. To participate in the PAAT, agencies answer questions on their appraisal programs, and OPM scores agencies on a scale from 1 to 100.
The PAAT has three questions related to poor performance. The FEVS measures employees’ perceptions of whether, and to what extent, conditions characterizing successful organizations are present in their agencies. Agencies are to use this information to make strategic decisions about management. The FEVS includes several questions on performance management and dealing with poor performers. In addition to the individual named above, Tom Gilbert, Assistant Director; Shea Bader, Analyst-in-Charge; Sara Daleski; Jeffrey DeMarco; Karin Fangman; Colleen Marks; Donna Miller; Cristina Norland; and Rebecca Shea made major contributions to this report.
|
Federal agencies' ability to address poor performance has been a long-standing issue. Employees and agency leaders share a perception that more needs to be done to address poor performance, as even a small number of poor performers can affect agencies' capacity to meet their missions. GAO was asked to examine the rules and trends relating to the review and dismissal of federal employees for poor performance. This report (1) describes and compares avenues for addressing poor performance, (2) describes issues that can affect an agency's response to poor performance, (3) determines trends in how agencies have resolved cases of poor performance since 2004, and (4) assesses the extent to which OPM provides guidance that agencies need to address poor performance. To address these objectives, GAO reviewed OPM data, and interviewed, among others, OPM and MSPB officials, selected CHCOs, and selected union officials. Federal agencies have three avenues to address employees' poor performance: Day-to-day performance management activities (such as providing regular performance feedback to employees) can produce more desirable outcomes for agencies and employees than dismissal options. However, supervisors do not always have effective skills, such as the ability to identify, communicate, and help address employee performance issues. Probationary periods for new employees provide supervisors with an opportunity to evaluate an individual's performance to determine if an appointment to the civil service should become final. 
According to the Chief Human Capital Officers (CHCOs) that GAO interviewed, supervisors often do not use this time to make decisions about an employee's performance because they may not know that the probationary period is ending or they have not had time to observe performance in all critical areas. Formal procedures—specifically chapters 43 and 75 of title 5 of the United States Code and OPM implementing regulations—require agencies to follow specified procedures when dismissing poor performing permanent employees, but they are more time and resource intensive than probationary dismissals. Federal employees have protections designed to ensure that they are not subject to arbitrary agency actions. These protections include the ability to appeal dismissal actions to the Merit Systems Protection Board (MSPB) or to file a grievance. If employees are dissatisfied with the final decision of the MSPB or an arbitrator's decision, they may seek judicial review. The time and resource commitment needed to remove a poor performing permanent employee can be substantial. It can take six months to a year (and sometimes longer) to dismiss an employee. According to selected experts and GAO's literature review, concerns over internal support, lack of performance management training, and legal issues can also reduce a supervisor's willingness to address poor performance. In 2013, agencies dismissed around 3,500 employees for performance or a combination of performance and conduct. Most dismissals took place during the probationary period. These figures do not account for those employees who voluntarily left rather than going through the dismissal process. While it is unknown how many employees voluntarily depart, the CHCOs that GAO interviewed said voluntary departures likely happen more often than dismissals.
To help agencies address poor performance, the Office of Personnel Management (OPM) makes a range of tools and guidance available in different media, including its website, in-person training, and guidebooks. However, CHCOs and other experts said agencies are not always aware of this material and in some cases it fell short of their needs. Going forward, it will be important for OPM to use existing information sources, such as Federal Employee Viewpoint Survey results, to inform decisions about what material to develop and how best to distribute it. GAO is making four recommendations to OPM to strengthen agencies' ability to deal with poor performers, including working with stakeholders to assess the leadership training agencies provide to supervisors. OPM concurred or partially concurred with all but one recommendation, noting that GAO's recommendation to explore using an automated process to notify supervisors when a probationary period is about to end is an agency responsibility. GAO agrees and has clarified the recommendation.
|
Meeting veterans’ long-term care needs has become a more pressing issue as the veteran population ages. The elderly veteran population most in need of long-term care—those 85 years and older—grew dramatically from about 387,000 to about 764,000, an increase of about 100 percent from fiscal years 1998 to 2003. (See fig. 1.) Over the past two decades the provision of long-term care has been shifting away from institutions and nursing homes toward more noninstitutional long-term care services in VA and in other programs. In recognition of this change in approach to how long-term care is provided, the Federal Advisory Committee on the Future of VA Long-Term Care recommended, in 1998, that VA update its long-term care policy by meeting the growing demand for long-term care through significant expansion of its capacity to provide home and community-based services—also known as noninstitutional long-term care services—while maintaining its nursing home capacity at the 1998 level. VA provides a continuum of noninstitutional long-term care services to veterans needing assistance. Long-term care provided in noninstitutional settings—including services provided in veterans’ homes and community-based services such as adult day health care centers—is preferred by many veterans. Noninstitutional care also includes respite care services that temporarily relieve a veteran’s caregiver from the burden of caring for a chronically ill and disabled veteran in the home. VA offers noninstitutional long-term care services directly or through other providers with which VA contracts. (See table 1 for the noninstitutional long-term care services in our review.) Veterans can also receive nursing home care and noninstitutional services financed by sources other than VA, including Medicaid and Medicare, private health or long-term care insurance, or veterans’ own funds.
States design and administer Medicaid programs that include coverage for nursing home care and home and community-based services. Medicare primarily covers acute care health costs and therefore limits its nursing home coverage to short-term stays following hospitalization. Medicare also pays for home health care. Apart from patients who finance their own care, state Medicaid programs are the principal funders of nursing home and home health care services. We have estimated that private insurance pays for about 11 percent of nursing home and home health care expenditures. VA’s overall nursing home workload—average daily census—was 33,214 in fiscal year 2003, slightly below its fiscal year 1998 workload. However, the workload was below the fiscal year 1998 level each year, reaching its lowest level in fiscal year 2000. Over the last 6 years, VA’s use of nursing homes by setting changed. These changes in workload and use of different settings to provide nursing home care varied by network. VA’s nursing home workload was 33,214 in fiscal year 2003, 1 percent below its fiscal year 1998 workload. (See table 2.) Nursing home workload varied over this period but was consistently below the fiscal year 1998 level, decreasing by as much as 8 percent in fiscal year 2000 from its fiscal year 1998 level. The distribution of the nursing home workload among the three nursing home settings shifted during this period. From fiscal years 1998 through 2003, workload in the nursing homes VA operates declined by 1,014. In addition, workload in community nursing homes declined by 1,434. In contrast, workload in state veterans’ homes increased by 2,032. Although VA nursing home workload did not change greatly from fiscal years 1998 through fiscal year 2003, some networks experienced significant increases or decreases. Fourteen of VA’s 21 networks had lower nursing home workloads in fiscal year 2003 than in fiscal year 1998 for all three settings combined. (See fig. 2.)
Network 5 (Baltimore) had the largest decline in workload—19 percent. Seven networks’ nursing home workloads grew during this period. Network 17 (Dallas) had the largest increase in nursing home workload—42 percent. VA’s use of nursing home care among the three settings changed from fiscal years 1998 through 2003. The percentage of workload met in state veterans’ nursing homes increased from 43 to 50 percent. (See fig. 3.) This increase is attributable in large part to 18 more state veterans’ nursing homes receiving payment from VA to provide such care. By fiscal year 2003, 109 state veterans’ nursing homes received VA payment to provide this care. VA is authorized to pay for about two-thirds of the costs of construction of state veterans’ nursing homes and pays about a third of the costs per day to provide care to veterans in these homes. The percentage of workload provided in state veterans’ nursing homes increased in 19 of VA’s 21 health care networks. Network 17 (Dallas) had the largest increase in the percentage of workload provided by state veterans’ nursing homes. The percentage of nursing home care provided by state veterans’ nursing homes in this network increased from 0 to 30 percent during this period after the opening of four state veterans’ nursing homes in Texas. By contrast, the percentage of workload provided by state veterans’ nursing homes declined in 2 networks: Network 5 (Baltimore) by 3 percent and Network 21 (San Francisco) by 2 percent. The percentage of nursing home workload provided in VA’s own nursing homes declined from 40 to 37 percent during this period. Thirteen networks provided a smaller percentage of nursing home care in VA- operated nursing homes in fiscal year 2003 than in fiscal year 1998. Network 17 (Dallas) had the largest decrease in the percentage of workload provided by VA-operated nursing homes, declining from 68 percent to 49 percent during this period. 
This occurred because the state veterans’ nursing home workload increased substantially. By contrast, the percentage of care provided in VA-operated homes increased in 8 networks. Network 5 (Baltimore) had the largest increase, growing from 50 percent in fiscal year 1998 to 64 percent in fiscal year 2003. In Network 21 (San Francisco), the percentage of care in VA-operated nursing homes increased by 7 percent, and in the remaining 6 networks the percentage of care in VA-operated nursing homes increased 3 percent or less. Our analysis of length-of-stay trends in VA-operated nursing homes shows that the decline in the number of veterans with long stays—90 days or more—largely explains the decline in nursing home workload during this period. The number of long-stay veterans declined from about 14,200 in fiscal year 1998 to about 12,700 in fiscal year 2002, the most recent year for which data are available. At the same time the number of short-stay veterans—those with stays of less than 90 days—increased from about 26,700 to about 32,200. However, the increase in short-stay patients was not large enough to offset the decline in workload resulting from the decrease in long-stay patients. This is because multiple short-stay patients are required to generate the same workload as a single long-stay patient. For example, a single long-stay patient in a nursing home for 12 months creates a workload of an average daily census of 1 over a year. By contrast, 12 short-stay patients staying in a nursing home for one month each create the same average daily census. Among VA’s networks, 16 had declines in the number of long-stay patients in VA-operated homes during this period. Five networks, however, had increases in the number of long-stay patients: Network 1 (Boston), Network 5 (Baltimore), Network 7 (Atlanta), Network 12 (Chicago), and Network 21 (San Francisco).
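The average-daily-census arithmetic in the example above can be sketched in a few lines of Python (the function name and inputs are our own; this illustrates the workload measure, not VA's actual methodology):

```python
def average_daily_census(patient_days: float, days_in_year: int = 365) -> float:
    """Average daily census: total patient-days of care divided by the
    number of days in the period."""
    return patient_days / days_in_year

# One long-stay patient occupying a bed for all 365 days of the year:
long_stay_workload = average_daily_census(365)  # 1.0

# Twelve short-stay patients, each staying one-twelfth of the year,
# are needed to generate the same workload:
short_stay_workload = average_daily_census(12 * (365 / 12))  # ~1.0
```

This is why a shift from long-stay to short-stay patients can leave total workload roughly flat even as the number of patients served rises sharply.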
VA officials attribute some of the changes in nursing home workload in VA-operated facilities to an increased emphasis on short-term, post-acute rehabilitation care. VA’s policy is to provide nursing home care in its own nursing homes as a priority to post-acute patients, patients who cannot be adequately cared for in community nursing homes or in noninstitutional settings, and those patients who can be cared for more efficiently in VA’s own nursing homes. In addition, VA may provide nursing home care, to the extent resources are available, to other patients who need long-term care for chronic disabilities. Consistent with VA’s policy, the proportion of discharged veterans whose lengths of stay were less than 90 days in VA-operated nursing homes increased from 74 to 81 percent from fiscal years 1998 through 2003. This is similar to lengths of stay in facilities certified by Medicare—but not Medicaid—that provide post-acute skilled nursing home care. About 81 percent of discharged patients in these Medicare-certified facilities had lengths of stay of less than 90 days in fiscal year 1999. The percentage of workload in community nursing homes declined from 17 to 13 percent from fiscal year 1998 through fiscal year 2003. This decline occurred because VA reduced the number of patients served and the number of days paid for under contract in this setting. The number of patients in these settings declined from 28,893 to 14,032 during this period. Some VA officials told us that in the past VA used community nursing homes for more patients and for longer-term contracts than it does currently. VA officials told us that shorter-term contracts are now often used to transition veterans to nursing home care that is paid for by other payers such as Medicaid. For example, some network officials told us that contracts for community nursing home care are often for 30 days or less.
Of the 21 networks, 17 reduced the percentage of nursing home workload provided in community nursing homes during this period. Four networks reduced the percentage of nursing home care provided in community nursing homes by about 11 percent: Network 4 (Pittsburgh), Network 5 (Baltimore), Network 6 (Durham), and Network 17 (Dallas). By contrast, the percentage of workload provided in community nursing homes increased in 4 networks. The percentage of nursing home care provided in community nursing homes in Network 19 (Denver) increased by about 10 percent. The percentage of nursing home care provided in community nursing homes among the other 3 networks—Network 23 (Minneapolis), Network 20 (Portland), and Network 18 (Phoenix)—increased 3 percent or less. VA’s noninstitutional long-term care workload—average daily census—for the six services in our review increased by approximately 75 percent from fiscal years 1998 through 2003. Workload increased by 4,655 during this period to 10,892. (See table 3.) Much of this growth came from increases in skilled home health and homemaker/home health aide care—services that are most likely to help veterans prevent or delay the need for nursing home care. One of the services that grew most rapidly was skilled home health care, which increased by 127 percent during this period. Although noninstitutional long-term care workload increased, all veterans may not have access to these services because there are limitations in their availability. We previously reported a number of limitations in access to noninstitutional services that veterans experienced in the fall of 2002. At that time some facilities did not offer some of these noninstitutional services at all, or offered them only in certain parts of the geographic area they served. For example, more than half of VA’s 139 medical facilities did not provide home-based primary care or adult day health care in the fall of 2002.
The noninstitutional workload numbers for home-based primary care in table 3 are different from those reported by VA in its appropriations submissions to Congress and in recent VA testimony. In its reports on noninstitutional workload, VA has measured home-based primary care services using enrolled days—the number of days a veteran is enrolled to receive a service—rather than the number of home-based primary care visits a veteran receives. However, VA has measured use of the other noninstitutional services in visits. Therefore, to ensure comparability across services, we used visits as the workload measure for home-based primary care. As a result, our workload total for home-based primary care is smaller than the number VA reports because veterans do not typically receive a home-based primary care visit for each day in which they are enrolled in home-based primary care. Specifically, we report the 2002 home-based primary care workload as 903, while VA has reported it as 8,081. Our consistent measure of all services in visits results in a lower total noninstitutional workload than that reported by VA. Over the last 6 years, the veteran population most in need of long-term care has grown dramatically. During this period, VA’s use of nursing home care by setting has changed so that state veterans’ nursing homes now provide one-half of all nursing home workload provided or paid for by VA. At the same time, VA decreased the workload it serves in its own nursing homes, consistent with VA’s policy to emphasize short-stay, post-acute care in its own nursing homes. VA also used community nursing home care less as it transitioned more veterans who needed such care to care paid for by other payers such as Medicaid. In addition, VA increased the long-term care workload provided in noninstitutional settings.
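The gap between the two measures noted above (a workload of 903 measured in visits versus 8,081 measured in enrolled days) follows directly from the arithmetic of the workload measure. A small hypothetical illustration, with a made-up cohort size and visit frequency:

```python
# Hypothetical cohort of 100 veterans, each enrolled in home-based
# primary care for the full year but receiving two visits per month.
veterans = 100
enrolled_days = veterans * 365  # 36,500 enrolled days in the year
visits = veterans * 24          # 2,400 visits in the year

# Average daily workload under each measure (units per year / 365):
workload_by_enrolled_days = enrolled_days / 365  # 100.0
workload_by_visits = visits / 365                # about 6.6
```

Because a veteran accrues an enrolled day whether or not a visit occurs, the enrolled-days measure will always be at least as large as the visits measure, and typically far larger.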
These trends over the last 6 years raise important questions about how VA is meeting current long-term care need and what it may need to do to meet future long-term care need. What does the significant variation in nursing home workload change among the networks over this 6-year period mean for meeting veterans’ long-term care needs in different parts of the country? What are the implications for access, quality, and costs of VA’s significant shift to using state veterans’ nursing homes to provide one-half of its nursing home care? How has VA’s increased emphasis on post-acute care in its own nursing homes affected its ability to continue providing long-term care in its nursing homes for veterans with chronic disabilities? To what extent does total VA long-term care workload—composed of a fairly constant nursing home workload and a rapidly expanding but smaller noninstitutional workload—meet the needs of a rapidly growing elderly veteran population? The continuing rapid rise in the veteran population likely to be in greatest need of long-term care—those 85 years and older—poses a major challenge for VA health care. Answers to these four questions can help policymakers, VA, and its stakeholders better understand the best ways to meet VA’s long-term care challenge. We look forward to continuing to work with you on these significant issues. Mr. Chairman, this concludes my prepared remarks. I will be pleased to answer any questions you or other Members of the Committee may have. For further information regarding this testimony, please contact me at (202) 512-7101. Individuals making key contributions to this testimony include James C. Musselwhite, Thomas A. Walke, and Pamela A. Dooley.

VA Long-Term Care: Veterans’ Access to Noninstitutional Care Is Limited by Service Gaps and Facility Restrictions. GAO-03-815T. Washington, D.C.: May 22, 2003.

VA Long-Term Care: Service Gaps and Facility Restrictions Limit Veterans’ Access to Noninstitutional Care. GAO-03-487. Washington, D.C.: May 9, 2003.

Department of Veterans Affairs: Key Management Challenges in Health and Disability Programs. GAO-03-756T. Washington, D.C.: May 8, 2003.

Long-Term Care: Availability of Medicaid Home and Community Services for Elderly Individuals Varies Considerably. GAO-02-1121. Washington, D.C.: September 26, 2002.

Medicare: Utilization of Home Health Care by State. GAO-02-782R. Washington, D.C.: May 23, 2002.

VA Long-Term Care: The Availability of Noninstitutional Services Is Uneven. GAO-02-652T. Washington, D.C.: April 25, 2002.

VA Long-Term Care: Implementation of Certain Millennium Act Provisions Is Incomplete, and Availability of Noninstitutional Services Is Uneven. GAO-02-510R. Washington, D.C.: March 29, 2002.

VA Long-Term Care: Oversight of Community Nursing Homes Needs Strengthening. GAO-01-768. Washington, D.C.: July 27, 2001.

This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
|
The Department of Veterans Affairs (VA) is likely to see a significant increase in long-term care need over the next decade. The number of veterans most in need of long-term care services--those 85 years old and older--is expected to increase from about 870,000 to 1.3 million over this period. Many of these veterans will rely on VA to provide or pay for nursing home care or noninstitutional services that may help them remain at home and, for some, delay or prevent the need for nursing home care. VA operates its own nursing home care units in 132 locations. VA also pays for nursing home care under contract in non-VA nursing homes--referred to as community nursing homes. In addition, VA pays part of the cost of care for veterans at state veterans' nursing homes and also pays a portion of the construction costs for some state veterans' nursing homes. Congress has expressed concerns about recent trends in VA long-term care service delivery and how VA plans to meet the nursing home care needs and related long-term care needs of veterans as the elderly population most in need of long-term care increases. GAO was asked to determine for fiscal years 1998 through 2003 (1) how VA nursing home workload has changed and (2) how VA noninstitutional long-term care workload has changed. Recent trends in VA nursing home care and noninstitutional service delivery raise important questions, particularly whether access to services is sufficient to meet the needs of a rapidly growing elderly veteran population. VA's overall nursing home workload--average daily census--was 33,214 in fiscal year 2003, 1 percent below its fiscal year 1998 workload. The workload was below the fiscal year 1998 level each year, decreasing by as much as 8 percent below the fiscal year 1998 level in fiscal year 2000. VA's use of nursing home care by setting also changed over the 6-year period.
First, the percentage of workload in state veterans' nursing homes increased as the number of state veterans' nursing homes receiving VA payments increased. Second, the percentage of workload in VA's own nursing homes declined, in part, because VA decreased the number of long-stay patients and increased the number of short-stay patients it treats in the nursing homes it operates. This is consistent with VA's increased emphasis on post-acute care. Third, the percentage of workload in community nursing homes declined from 17 to 13 percent. VA officials told us that shorter-term contracts are now often used to transition veterans to nursing home care, which is paid for by other payers such as Medicaid. VA's noninstitutional long-term care workload--average daily census--increased by approximately 75 percent from fiscal years 1998 through 2003. Workload increased by 4,655 during this period to 10,892, reflecting a change in VA's approach to care, which includes meeting more long-term care need through noninstitutional services. Most of the growth in noninstitutional workload came from VA's greater use of contract skilled home health care, which includes medical services provided to veterans at home, and homemaker/home health aide services, such as grooming and meal preparation.
|
Most federal civilian employees are covered by the Civil Service Retirement System (CSRS) or the Federal Employees’ Retirement System (FERS). Both of these retirement plans include survivor benefit provisions. Three separate retirement plans apply to various groups of judges in the federal judiciary, with JSAS being available to participants in all three retirement plans to provide annuities to their surviving spouses and children. Appendix I provides additional information regarding retirement plans that are available to federal judges. JSAS was created in 1956 to provide financial security for the families of deceased federal judges. It provides benefits to eligible spouses and dependent children of judges who elect coverage within either 6 months of taking office, 6 months after getting married, or during an open season authorized by statute. Active and senior judges currently contribute 2.2 percent of their salaries, and retired judges generally contribute 3.5 percent of their retirement salaries to JSAS. Upon a judge’s death, the surviving spouse is to receive an annual annuity that is equal to 1.5 percent of the judge’s average annual salary during the 3 highest consecutive paid years (commonly known as the “high 3”) times the judge’s years of creditable service. The annuity may not exceed 50 percent of the high 3 and is guaranteed to be no less than 25 percent. Separately, an unmarried dependent child under age 18, or 22 if a full-time student, receives a survivor annuity that is equal to a maximum of 10 percent of the judge’s 3 highest paid years or 20 percent of the judge’s 3 highest paid years divided by the number of children, whichever is smaller. JSAS annuitants receive an annual adjustment in their annuities at the same time, and by the same percentage, as any cost-of-living adjustment (COLA) received by CSRS annuitants. Spouses and children are also eligible for Social Security survivor benefits. 
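The annuity rules above reduce to simple arithmetic. The following sketch (our illustration; the salary figures are hypothetical, not from the report) applies the 1.5 percent formula with its 25 percent floor and 50 percent cap for spouses, and the smaller-of rule for dependent children:

```python
def spouse_annuity(high3, years_of_service):
    """Surviving spouse annuity: 1.5% of the judge's 'high 3' average
    annual salary per year of creditable service, capped at 50% of the
    high 3 and guaranteed to be at least 25%."""
    annuity = 0.015 * high3 * years_of_service
    return min(max(annuity, 0.25 * high3), 0.50 * high3)

def child_annuity(high3, number_of_children):
    """Each eligible child receives the smaller of 10% of the high 3 or
    20% of the high 3 divided by the number of children."""
    return min(0.10 * high3, 0.20 * high3 / number_of_children)

# Hypothetical judge: $150,000 high-3 average salary, 20 years of service
print(spouse_annuity(150_000, 20))  # 45000.0 (between the floor and the cap)
print(spouse_annuity(150_000, 5))   # 37500.0 (the 25% floor applies)
print(child_annuity(150_000, 3))    # 10000.0 (20%/3 is smaller than 10%)
```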
Since its inception in 1956, JSAS has changed several times. Because of concern that too few judges were participating in the plan (74 percent of federal judges participated in 1985, which was down from 90 percent in 1976), Congress made broad reforms effective in 1986 with the Judicial Improvements Act of 1985 (Public Law 99-336). The 1985 Judicial Improvements Act (1) increased the annuity formula for surviving spouses from 1.25 percent to the current 1.5 percent of the high 3 for each year of creditable service and (2) changed the provisions for surviving children’s benefits to relate benefit amounts to judges’ high 3 rather than the specific dollar amounts provided by the Judicial Survivors’ Annuities Reform Act of 1976 (Public Law 94-554). In recognition of the significant benefit improvements that were made, the 1985 Judicial Improvements Act increased the amounts that judges were required to contribute from 4.5 percent to 5 percent of their salaries, including retirement salaries. The 1985 Judicial Improvements Act also changed the requirements for government contributions to the plan by specifying that the government would contribute whatever amounts were necessary (up to a maximum of 9 percent of participating judges’ salaries or retirement salaries) to keep the plan fully funded. Under the 1976 Judicial Survivors’ Annuities Reform Act, the government matched the judges’ contributions of 4.5 percent of salaries and retirement salaries. Despite the benefit improvements in the 1985 Judicial Improvements Act, the rate of participation in JSAS continued to decline. In 1991, the rate of participation was about 40 percent overall and 25 percent for newly appointed judges. In response to concerns that required contributions of 5 percent may have created a disincentive to participate, Congress enacted the 1992 Federal Courts Administration Act.
Under this act, participants’ contribution requirements were reduced to 2.2 percent of salaries for active and senior judges and 3.5 percent of retirement salaries for retired judges. Another significant change was an increase in benefits for survivors of retired judges. This increase was accomplished by including years spent in retirement in the calculation of creditable service and the high 3 salary averages. The 1992 Federal Courts Administration Act also allowed the judges to stop contributing to the plan if they ceased to be married and granted benefits to survivors of any judge who died in the interim between leaving office and the commencement of a deferred annuity. As of September 30, 2001, there were 1,256 active and senior judges, 203 retired judges, and 260 survivor annuitants covered under JSAS compared to 1,284 active and senior judges, 136 retired judges, and 241 survivor annuitants as of September 30, 1998.

Defining Cost for JSAS

JSAS is financed by judges’ contributions and direct appropriations in an amount estimated to be sufficient to fund the future benefits to current participants. The government’s contribution is approved through an annual appropriation and is not based on a rate or percentage of judges’ salaries. An enrolled actuary engaged by the Administrative Office of the United States Courts (AOUSC) calculates the annual amount of funding needed based on the difference between the present value of the expected future benefit payments to participants and the present value of net assets in the plan. Appendix II provides more details on the formulas used to determine the participants’ and the government’s contributions and lump sum payments. The cost of a retirement or survivor benefit plan is typically not measured by annual expenditures for benefits. Such expenditures are not an indicator of the overall long-term cost of a plan.
The more complete and acceptable calculation of a plan’s cost is the projected future outlays to retirees or survivors, based on the current pool of participants, with such costs allocated annually. This annual cost allocation is referred to as the normal cost. Normal cost calculations, prepared by an enrolled actuary, are estimates and require that many actuarial assumptions be made about the future, including mortality rates, turnover rates, return on investments, salary increases, and COLA increases over the life spans of current and future participants. The plan’s actuary determines the plan’s normal cost using the plan’s funding method, in this case the aggregate cost method. Under the aggregate cost method, the normal cost is the level percentage of future salaries that will be sufficient, along with investment earnings and the plan’s assets, to pay the plan’s benefits. There are many acceptable actuarial methods for calculating normal cost. Regardless of which cost method is chosen, the expected total long-term cost of the plan should be the same; however, year-to-year costs may differ, depending on the cost method used. Our objectives were to determine whether participating judges’ contributions for the 3 years ending in fiscal year 2001 accounted for 50 percent of the JSAS costs and, if not, what adjustments in the contribution rates would be needed to achieve the 50 percent figure. To satisfy our objectives, we examined the normal costs reported in the JSAS annual report submitted by AOUSC to the Comptroller General for plan years 1999 through 2001. We also examined participants’ contributions and other relevant information in the annual report. An independent accounting firm hired by AOUSC audited the JSAS financial and actuarial information included in the JSAS annual report, with input from an enrolled actuary regarding relevant data such as the actuarial present value of accumulated plan benefits.
An enrolled actuary certified those amounts that are included in the JSAS annual report. We also discussed the contents of the JSAS reports with officials from AOUSC for the 3 fiscal years (1999 through 2001). We did not independently audit the JSAS annual report or the actuarially calculated cost figures. We performed our review in Washington, D.C., from August 2001 through May 2002, in accordance with generally accepted government auditing standards. We requested comments on a draft of this report from the Director of AOUSC or his designee. On June 10, 2002, the Deputy Associate Director of AOUSC provided technical comments, which we incorporated into the report where appropriate. For JSAS plan years 1999 through 2001 under the Federal Courts Administration Act of 1992, the participating judges paid more than 50 percent of the JSAS normal costs in the first year and less than 50 percent in the remaining 2 years. In fiscal year 1999, the participating judges contributed approximately 61 percent of JSAS normal costs, and in fiscal years 2000 and 2001, the participating judges contributed approximately 48 percent of the JSAS normal costs. On the basis of data from plan years 1999, 2000, and 2001, the participating judges contributed, on average, approximately 52 percent of JSAS normal costs; the government’s share amounted to, on average, approximately 48 percent. Table 1 shows the judges’ and government’s contribution rates and shares of JSAS normal costs (using the aggregate cost method, which is discussed in appendix II) for the period covered in our study. The judges’ and the government’s contribution rates for each of the 3 years, shown in table 1, were based on the actuarial valuation that occurred at the end of the prior year. For example, the judges’ contribution of 2.36 percent and the government’s contribution of 2.60 percent in fiscal year 2001 were based on the September 30, 2000, valuation contained in the fiscal year 2001 JSAS report.
The judges’ share of JSAS normal costs in the above table decreased from approximately 61 percent in fiscal year 1999 to approximately 48 percent in fiscal years 2000 and 2001, while the government’s share of JSAS normal costs increased from approximately 39 percent to approximately 52 percent. During those same years, the judges’ contribution rates remained constant, while the government’s contribution rate increased from 1.5 percent of salaries in fiscal year 1999, based on the September 30, 1998, valuation, to 2.6 percent of salaries in 2000 and 2001. The increase in the government’s contribution was primarily a result of the increase in total normal costs determined by the actuary from 3.86 percent of salaries in fiscal year 1999, based on the September 30, 1998, valuation, to 4.97 percent of salaries in fiscal year 2000. The increase in normal costs resulted from a decline in the market value of assets held in JSAS, as well as an increase in plan benefits being paid out over the period. Specifically, the total plan assets decreased from $366.7 million in fiscal year 1998 to $363.6 million in fiscal year 1999. At the same time, the accumulated plan benefit obligations increased from $311.9 million in fiscal year 1998 to $329.3 million in fiscal year 1999. The increase in the JSAS normal costs reflects the combined effect of the decrease in the value of plan assets and increase in the estimates of plan benefit obligations. Although the judges’ contribution rate remained fairly constant, the judges’ share of normal costs decreased to approximately 48 percent in fiscal years 2000 and 2001 because the total normal costs increased. In fiscal year 2001, the normal costs covered by the judges’ and government’s contributions remained constant because the percentage change in asset value was approximately 6.5 percent, which was in line with the 7.0 percent rate of return on investments that was assumed by the plan actuary. 
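The shares discussed above follow directly from the contribution rates: each party's share of the normal cost is its rate divided by the combined rate. A quick check (the code is ours; the rates are taken from the report):

```python
def shares(judges_rate, government_rate):
    """Split the total normal cost (expressed as a percentage of
    salaries) into the judges' and the government's shares."""
    total = judges_rate + government_rate
    return judges_rate / total, government_rate / total

# FY 1999: total normal cost was 3.86% of salaries, government rate 1.5%,
# leaving 2.36% for the judges.
j, g = shares(3.86 - 1.5, 1.5)
print(round(j, 2), round(g, 2))  # 0.61 0.39 -- the ~61%/39% split

# FY 2001: judges 2.36%, government 2.60% (total normal cost 4.96%).
j, g = shares(2.36, 2.60)
print(round(j, 2), round(g, 2))  # 0.48 0.52 -- the ~48%/52% split
```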
On the basis of the information contained in the JSAS actuarial report as of September 30, 2001, we determined that the participating judges’ future contributions would have to increase a total of 0.1 percentage point above the current 2.2 percent of salaries for active and senior judges and 3.5 percent of retirement salaries for retired judges in order to cover 50 percent of JSAS costs. If the increase were distributed equally among the judges, those contributing 2.2 percent would have to increase to 2.3 percent, and those contributing 3.5 percent would have to increase to 3.6 percent. A potential effect of increasing the contribution rates could be a decline in the participation rate for JSAS, which would run counter to the goal of increasing participation—a major reason for the changes made to JSAS in 1992. However, this potential impact appears less likely than it did 3 years ago, when we reported that an increase of 0.3 percentage points would have been needed to achieve the 50 percent contribution goal. The number of participating judges increased from 1,420 in fiscal year 1998 to 1,459 as of September 30, 2001. However, increasing the contribution rates now could affect the judges’ decision to participate in JSAS. Even if contribution rates are adjusted to the levels currently estimated to cover 50 percent of future normal costs, the future normal costs are estimates that could change in any given plan year. During the course of any year, certain events, such as the number of survivors or judges who have died, the number of new judges electing to participate, or the number of judges who decide to retire, as well as the value of and the rates of return on assets in the plan, could create normal statistical variances that would affect the annual normal costs of the plan.
Since the plan only has 1,459 participants—both active and retired judges—and 260 survivor annuitants, such variances can have a significant effect on the expected normal costs and lead to short-term variability. Therefore, the long-term view is important when evaluating the expected judges’ contributions of 50 percent of the normal costs. As shown in table 2, the average of the annual share of judges’ contributions since enactment of the 1992 Federal Courts Administration Act has been approximately 47 percent. We requested comments on a draft of this report from the Director of AOUSC or his designee. On June 10, 2002, the Deputy Associate Director of AOUSC provided technical comments, which we incorporated into the report as appropriate. We are sending copies of this report to AOUSC. Copies of this report will be made available to others upon request. This report will also be available at no charge on the GAO Web site at www.gao.gov. Should you or your staff have any questions concerning our review, please contact me at (202) 512-9406 or Hodge Herry, Assistant Director, at (202) 512-9469. You can also reach us by e-mail at [email protected] or [email protected]. Key contributors to this report were Joseph Applebaum, Jacquelyn Hamilton, Meg Mills, Charles Ego, and Deborah Silk.

AOUSC administers three retirement plans for judges in the federal judiciary. The Judicial Retirement System automatically covers United States Supreme Court Justices, federal circuit and district court judges, and territorial district court judges and is available, at their option, to the Administrative Assistant to the Chief Justice, the Director of AOUSC, and the Director of the Federal Judicial Center. The Judicial Officers’ Retirement Fund is available to bankruptcy and full-time magistrate judges. The United States Court of Federal Claims Judges’ Retirement System is available to the United States Court of Federal Claims judges.
Also, except for judges who are automatically covered under the Judicial Retirement System, judges and judicial officials may opt to participate in FERS or elect to participate in the Judicial Retirement System for Bankruptcy Judges, Magistrate Judges, or United States Court of Federal Claims Judges. Judges who retire under any of the three judicial retirement plans generally continue to receive the full salary amounts that were paid immediately before retirement, assuming the judges met the age and service requirements. Retired territorial district court judges generally receive the same COLA that CSRS retirees receive, except that their annuities cannot exceed 95 percent of an active district court judge’s salary. United States Court of Federal Claims judge retirees continue to receive the same salary payable to active United States Court of Federal Claims judges. Those in the Judicial Retirement System and the United States Court of Federal Claims Judges’ Retirement System are eligible to retire when the number of years of service and the judge’s age total at least 80, with a minimum retirement age of 65, and service ranging from 10 to 15 years. Those in the Judicial Officers’ Retirement Fund are eligible to retire at age 65 with at least 14 years of service or may retire at age 65 with 8 years of service, on a less than full salary retirement. Participants in all three judicial retirement plans are required to contribute to and receive Social Security benefits.

Aggregate Funding Method. This method, as used by the JSAS plan, defines the normal cost as the level percentage of future salaries that will be sufficient, along with investment earnings and the plan’s assets, to pay the plan’s benefits. The formula is as follows: the present value of future normal costs (PVFNC) equals the present value of future benefits (PVFB) less the net asset value. The present value of future normal costs is the amount that remains to be financed by the judges and the government.
Normal cost percentage (NC percent) equals PVFNC divided by the present value of future salaries (PVFS).

Government Contribution. The government’s contribution is the portion of the normal cost not covered by the participants’ contributions.

Lump Sum Pay Out. This may occur upon the dissolution of marriage, either through divorce or the death of the spouse. Payroll contributions cease, but previous contributions remain in JSAS. Also, if there is no eligible surviving spouse or child upon the death of the judicial official, the lump sum pay out to the judicial official’s designated beneficiaries is computed as follows: the lump sum pay out equals the total amount paid into the plan by the judge, plus 3 percent annual interest accrued, less 2.2 percent of salaries for each participating year (the forfeited amount). In effect, the interest plus any amount contributed in excess of 2.2 percent of judges’ salaries will be refunded.
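Under the aggregate cost method described above, the normal cost percentage and the government's residual contribution can be sketched as follows (our illustration; the dollar amounts are invented, not taken from the plan's actuarial report):

```python
def normal_cost_percent(pvfb, net_assets, pvfs):
    """Aggregate cost method: PVFNC = PVFB - net asset value, and the
    normal cost percentage is PVFNC divided by the present value of
    future salaries (PVFS)."""
    pvfnc = pvfb - net_assets
    return pvfnc / pvfs

def government_contribution(payroll, nc_percent, judges_contributions):
    """The government funds the portion of the normal cost that the
    judges' contributions do not cover."""
    return max(0.0, nc_percent * payroll - judges_contributions)

# Hypothetical plan: $500M present value of future benefits, $360M in
# net assets, $2.8B present value of future salaries.
nc = normal_cost_percent(pvfb=500e6, net_assets=360e6, pvfs=2.8e9)
print(round(100 * nc, 1))  # 5.0 -- normal cost as a percent of salaries

# With $200M of current payroll and $4.8M contributed by the judges:
print(round(government_contribution(200e6, nc, 4.8e6)))  # 5200000
```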
|
The Federal Courts Administration Act of 1992 requires GAO to review certain aspects of the Judicial Survivors' Annuities System (JSAS), one of several survivor benefit plans applicable to federal employees. JSAS provides annuities to surviving spouses and dependent children of deceased Supreme Court Justices, judges of the United States, and other participating judicial officials. For the 3 years covered by GAO's review, the judges' contributions represented more than 50 percent of the JSAS normal costs for fiscal year 1999, but less than 50 percent for fiscal years 2000 and 2001. To cover 50 percent of JSAS estimated future normal costs, the judges' contributions would need to increase by 0.1 percentage point above the 2.2 percent of salaries paid by active and senior judges and the 3.5 percent of retirement salaries paid by retired judges. However, increasing required contributions could reduce the judges' rate of participation even though increasing participation was one of the main reasons for enhancing JSAS benefits and reducing judges' contributions in 1992.
|
Identified in 1937 and named after the Ugandan province where it was discovered, West Nile virus has a widespread distribution in Africa, West Asia, and the Middle East and occasionally causes epidemics in Europe. Many people infected with the virus do not become ill or show symptoms, and even when they do, symptoms may be limited to a headache, sore throat, backache, or fatigue. Because no effective antiviral drugs have been discovered, treatment for those who do become seriously ill can only attempt to address symptoms such as swelling of the brain (encephalitis) and other complications such as bacterial pneumonia. Fatality rates—the percentage of people with confirmed infections who have died—have ranged from 3 to 15 percent for West Nile and are highest in the elderly. The virus to which the New York outbreak was originally misattributed is St. Louis encephalitis virus. Both West Nile and St. Louis encephalitis viruses are in a group called “flaviviruses” and can be spread when mosquitoes bite birds (often a natural host for the virus), acquire the virus, and then pass it on to humans (see fig. 1). St. Louis encephalitis virus is found in nature through much of the lower 48 states and is the most common mosquito-borne virus causing outbreaks of human disease in the United States. About 30 confirmed cases occur on average each year during non-outbreak years. St. Louis encephalitis is also similar to West Nile in that most people infected with it show no symptoms, but for those who become seriously ill, no effective antiviral drugs are available. St. Louis encephalitis has a slightly higher fatality rate than West Nile, ranging from 3 to 30 percent of confirmed cases. Rapid and accurate diagnosis of disease outbreaks is essential for many reasons. It can help contain an outbreak quickly by allowing health officials to implement appropriate control or prevention measures and provide the most effective treatment for those who are affected.
Rapid and accurate diagnosis is essential not only for the public at large, but also for health care workers and others who work with patients and laboratory samples. Accurate diagnosis is also important in providing information that could help determine whether the outbreak could have been deliberate—an act of bioterrorism. Public health officials use the term “surveillance” to denote the ongoing effort to collect, analyze, and interpret health-related data so that public health actions can be planned, implemented, and evaluated. Local health personnel are likely to be the front line of response. Local and state health departments might be the first to recognize unusual patterns of illnesses. For example, an epidemiologist (a health official trained to investigate diseases of unknown origin) in a city health department might receive phone calls from nurses, doctors, or emergency room personnel about increasing numbers of patients with similar symptoms. If the problem is thought to be widespread or suspicious in origin, the local health department is likely to involve the state health department, which is responsible for statewide surveillance and investigations involving multiple locations. The local and state response may also involve emergency management personnel. Current protocols recommend that law enforcement officials be notified if a case or series of cases have a suspicious origin. Local, state, and federal laboratories also play a vital role. Initially, this role may be to determine whether the unusual cases have the same pathogen (the specific causative agent for the disease), and if so, to identify it. Once an outbreak is established, laboratories may be called upon to test samples such as blood or spinal fluid from persons with similar symptoms, to determine who has the illness and the extent of the problem. 
At the federal level, CDC, an agency of the Department of Health and Human Services (HHS), is available upon request to help state and local officials investigate the nature and origin of disease outbreaks. For example, CDC maintains several laboratories that identify unusual or exotic viruses and other pathogens when other laboratories are unable to do so. One such laboratory, at the Division of Vector-Borne Infectious Diseases in Fort Collins, Colorado, deals with viral and bacterial diseases transmitted by vectors such as mosquitoes and ticks. It is part of CDC’s National Center for Infectious Diseases. Besides providing laboratory services, this division also develops ways to diagnose vector-borne pathogens more quickly and helps develop and evaluate approaches to preventing and controlling outbreaks. CDC is also the lead agency in HHS for bioterrorism preparedness. In recent years, the President and Congress have been increasingly concerned about the threat of terrorists using weapons of mass destruction, including biological agents. Part of CDC’s National Center for Infectious Diseases, the Bioterrorism Preparedness and Response Program is responsible for public health preparedness for potential acts of bioterrorism. In fiscal year 2000, HHS received $278 million of the $10.2 billion in counterterrorism monies allocated to federal agencies. Of the HHS funding, CDC received approximately $155 million for bioterrorism preparedness programs in fiscal year 2000, approximately $40 million of which is to be awarded to state and local health departments for surveillance, epidemiology, laboratory, and communications. During the first recognized outbreak of West Nile virus in the United States, infection of animals preceded the first human cases by at least 1 to 2 months. Large numbers of dying birds and an unusual cluster of human cases were at first viewed as separate events.
Gradually, as an increasing number of laboratories became involved to conduct further testing on human, animal, and mosquito samples, the linkages became clear, resulting in the identification of the West Nile virus (see fig. 2). The scale of these efforts was substantial, involving participants around the country. Since the end of the outbreak, various local, state, and federal agencies have taken actions to address the potential ongoing consequences of the virus’s introduction into North America. The identification of a newly emerging infectious disease within a few months was due to the combined, considerable efforts of scores of individuals and several agencies in the animal and the human public health fields and in academia. Here is an overview of the key events that occurred. Appendix II contains a more detailed chronology. No one is sure exactly when or how birds became infected. By late June a veterinarian at an animal health clinic in the New York City borough of Queens had examined and treated several birds that appeared to have nervous system disorders, releasing those that survived. Reports of dead birds increased through July and into August. By mid-August, dead birds were being sent to the wildlife pathologist at the New York State Department of Environmental Conservation. The wildlife pathologist was able to determine that the birds were not dying from any of several common problems, but he could not identify a clear cause. By late August, veterinarians at the Bronx and Queens zoos had joined the effort to identify the disease, after several wild and caged birds died on zoo property. Meanwhile, near the end of August, a specialist in infectious diseases in a community hospital in Queens noticed that the hospital had an abnormally large number of suspected cases of encephalitis or meningitis (diseases involving inflammation of the brain or spinal cord) and that several of the patients had developed an unusual pattern of muscle weakness.
When the hospital’s doctors were unable to find a clear cause or an effective treatment, the specialist called the Bureau of Communicable Disease within the New York City Department of Health. After a quick but careful investigation, city health officials contacted the state health department and CDC for additional help. Blood and spinal fluid specimens from hospital patients were rapidly tested at state and CDC laboratories. On September 3, CDC announced that the test results were positive for St. Louis encephalitis, a virus known in the United States but never before known to occur in New York City. That same day, the city, assisted by the state and CDC, launched a massive campaign to prevent people from being bitten by mosquitoes and to determine the extent of the St. Louis encephalitis outbreak. Within the next week, however, the State Department of Health obtained what appeared to be conflicting test results for St. Louis encephalitis, raising doubts among some health officials about whether the exact cause of the outbreak in humans had been determined. In addition, CDC officials were questioned by city and state health workers and the public as to whether the deaths of large numbers of birds and the human encephalitis cases might be connected. Because St. Louis encephalitis had not been known to kill its bird hosts, CDC officials said they considered the two outbreaks to be unrelated. The cause of the outbreak in birds remained unidentified, and, to help identify it, the zoo veterinarians and the state wildlife pathologist enlisted the help of federal veterinary laboratories at the U.S. Department of Agriculture (USDA) and the U.S. Geological Survey (USGS). By mid-September, both laboratories concluded that the bird disease was caused by a virus, that it did not appear to be any strain of St. Louis encephalitis or other avian virus they had previously tested, and that they had insufficient laboratory capabilities to identify it more specifically.
The USDA veterinary laboratory sent its virus samples to the CDC laboratory for further analysis. The test results in birds, along with repeated negative test results in human samples in the state health department laboratory, increased the doubts of some state health officials about whether the human disease agent had been correctly identified as St. Louis encephalitis. On September 15, they invited a visiting academic researcher from California to try out some new testing methods on tissue specimens from human patients. The following week, a Connecticut agricultural laboratory involved in that state’s routine mosquito surveillance reported isolating St. Louis encephalitis virus from both a dead bird and mosquitoes collected near the outbreak area. This finding was significant in implying that, if the virus was St. Louis encephalitis, it was killing birds and possibly could be connected to the human outbreak. At about the same time, CDC had begun testing and retesting mosquito, bird, and human specimens against a wider variety of flaviviruses in order to rule out the possibility of another closely related virus. Independently, the head pathologist at the Bronx Zoo gained agreement from the U.S. Army Medical Research Institute of Infectious Diseases to attempt to identify the virus in birds. Beginning on September 23, the academic researcher and CDC came to the same general conclusion: the virus causing the outbreak was not St. Louis encephalitis but, rather, a virus that had never before appeared in the United States. By the week of September 27, CDC had confirmed that a “West Nile-like” virus was responsible for both the animal and the human outbreaks. The effort involved in addressing these outbreaks and identifying the cause was concentrated and considerable. Hundreds of reported potential human cases were investigated to determine whether West Nile was the infecting virus. 
By the end of the investigation, health officials confirmed 62 cases of West Nile virus, including 7 people who died. Thousands of bird deaths were similarly investigated by several state and federal laboratories and agencies, to determine how far the virus had spread. In addition to the laboratory investigations, state and local emergency management teams were mobilized to respond to public health concerns. They managed the coordination of conference calls and other communications, the establishment of hotlines to address the general public’s concerns, and the procurement, distribution, and application of pesticides. The New York City and State Departments of Health also developed fact sheets for the public on each of the pesticides in 1999, and in 2000 they implemented a surveillance system for health effects from pesticides. Table 1 shows some specific examples of the case surveillance and laboratory workload experienced by some of the involved agencies during and since the outbreak. Not all of the agencies involved have developed cost estimates for their efforts. As one indication of the cost, however, New York State officials estimated that the state, city, and four counties in the area spent more than $14 million on protective measures such as mosquito control from late August through October. While the first frost of the season signaled the end to the initial outbreak in 1999, activities at the national, state, and local levels have continued. In the first week of October 1999 the New York City Department of Health and CDC conducted a random survey of Queens residents to assess the overall infection rate associated with the outbreak. The results of this serosurvey (in which a blood test for West Nile antibodies is performed) revealed that between 1.2 and 4.1 percent of the population in the area surveyed had been infected with West Nile virus. The change in diagnosis from St. 
Louis encephalitis to West Nile also caused public health agencies to evaluate whether aspects of their intervention response should be changed. While the West Nile and St. Louis encephalitis viruses are closely related and mosquito-borne, the change in diagnosis had some implications for the intervention approach. For example, past research had shown that different types of mosquitoes might carry the viruses. Both West Nile and St. Louis encephalitis are carried by a certain species of mosquito, Culex pipiens. However, West Nile is also carried by other species, including Aedes vexans and Anopheles. Some of these species have different habitat and activity patterns. For example, Culex pipiens breeds in polluted water and is active at night, while A. vexans has been found in natural areas and is active during the day. Once the distinction between the viruses was made, the public health interventions were changed accordingly to reflect the other types of mosquitoes potentially carrying the West Nile virus. For example, local public health notices stated that the public should also avoid contact with mosquitoes active during the day. While these differences are not considered significant since the public health recommendations for mosquito control are appropriate in either case, they illustrate the potential significance of an accurate diagnosis in that even closely related viruses might require different responses. Some of the activity since the initial outbreak has involved learning more about where the virus came from and when it arrived. Research into the origins of the virus found that it is most closely related to a strain isolated in a goose found in Israel in 1998. Testing of previously stored bird tissue samples at the Bronx Zoo was negative for West Nile virus, suggesting the virus was introduced in 1999. Much of the ongoing effort has been applied to determining whether West Nile will be an ongoing threat to animal and human health. 
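The serosurvey range reported above (1.2 to 4.1 percent antibody-positive) can be turned into a rough estimate of how many infections the outbreak actually caused. The sketch below is illustrative only: the population figure for the surveyed area is a hypothetical assumption (the report does not state it), and the 62 confirmed cases were counted citywide rather than only in the surveyed area, so the final ratio is directional at best.

```python
# Back-of-the-envelope estimate of infections implied by the serosurvey.
# NOTE: surveyed_population is a hypothetical, illustrative figure; the
# report does not give the population of the area surveyed.
surveyed_population = 500_000
sero_low, sero_high = 0.012, 0.041   # 1.2% to 4.1% antibody-positive
confirmed_cases = 62                 # confirmed cases (citywide, per the report)

infections_low = surveyed_population * sero_low
infections_high = surveyed_population * sero_high

print(f"Implied infections in surveyed area: "
      f"{infections_low:,.0f} to {infections_high:,.0f}")
print(f"Confirmed cases as a share of implied infections: "
      f"{confirmed_cases / infections_high:.2%} to "
      f"{confirmed_cases / infections_low:.2%}")
```

Even with a conservative population assumption, the implied infections outnumber confirmed cases many times over, which is why serosurveys are used to gauge the true extent of an outbreak rather than relying on clinical case counts alone.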
The West Nile outbreak represents a potential problem stretching well beyond New York City, because the virus can spread through bird migrations. In fiscal year 2000, HHS and CDC will provide approximately $10 million for West Nile virus activities. This amount includes grants totaling $4.5 million available to 19 state and local health departments along the Atlantic flyway of migratory birds for West Nile virus surveillance in humans, mosquitoes, and birds. An additional $2.7 million of the $10 million has been made available to 31 other state health departments to expand surveillance capabilities. As of August 2000, communities in at least seven eastern states had undertaken active mosquito control programs, such as spraying, as well as public education campaigns and surveillance activities. Surveillance activities have already produced evidence that West Nile has spread to other areas. In October 1999 a dead crow carrying the virus was found in Baltimore, Maryland. In 2000, as of August, West Nile virus had been detected in birds in nearly all New York counties as well as in Massachusetts, Connecticut, Rhode Island, and New Jersey and in mosquito pools in several states. If West Nile is carried further south along bird migratory routes (see fig. 3 for examples), it could become permanently established in the Western Hemisphere. The spread of the virus by birds and mosquitoes has significant implications for animal health as well. Animal health officials are concerned about the potential effects on wildlife and other animals, particularly those birds that are susceptible to fatal illness from the virus. The USGS, which conducts surveillance of wildlife health, has helped develop and maintain national maps showing the current wildlife surveillance data now submitted by states. Economic concerns also have been raised. While wild birds were the primary carrier of West Nile in last year’s outbreak, the disease was also detected in domestic livestock. 
Twenty-five cases were identified in horses on Long Island, nine of which died or were euthanized. Although there is no evidence that the virus can spread from infected horses to uninfected horses or other animals, countries from Argentina to the United Arab Emirates placed import restrictions on horses from affected areas. In addition, the role of commercial poultry in maintaining or transmitting the virus is not thoroughly understood. CDC research has found that chickens can develop a short-lived infection without clinical signs. Several organizations, including CDC, USDA, the Wildlife Conservation Society, and Flushing Hospital, have organized conferences and workshops to review the West Nile virus outbreak. In December 1999, CDC issued guidelines for West Nile virus surveillance, prevention, and control. In the spring of 2000, HHS and USDA appointed West Nile coordinators to oversee efforts against the virus. See appendix III for a list of some key publications about or related to the virus outbreak. While many officials and experts we contacted believe aspects of the outbreak investigation went quickly and well, nearly all of them also believe there were lessons to be learned. These lessons may be especially relevant for acts of bioterrorism, where the outbreak of cases may be much more rapid and law enforcement agencies may need to be involved to prevent terrorists from releasing additional biological agents. The time available for decision-making and response may be compressed from days or weeks to a matter of hours. The lessons we identified related primarily to addressing possible needs in five areas: local surveillance and response capabilities, communication among public health agencies, coordination between public health and animal health efforts, capabilities of laboratories, and efforts to distinguish between natural and unnatural events. The West Nile outbreak provided a number of lessons about surveillance. 
We learned that many aspects of the surveillance network worked well, speeding the response to the outbreak. These positive lessons can serve as models for other communities that may have less substantial surveillance networks. However, while several of the lessons are positive, the outbreak also exposed some weaknesses. The human outbreak of West Nile began with a few unusual cases. The potential that one or two persons’ medical conditions could be an indication of some larger concern, such as an emerging infectious disease, may not be readily apparent to the health professionals involved. In many cases, such events might not be noticed until a number of physicians have reported the cases and the local health department identifies a cluster, or a number of victims seek care for similar conditions at the same location. Alert responses by the doctors and nurses who first see such victims are particularly crucial in alerting the public health community to the possibility of a wider problem. In the West Nile outbreak, several actions were particularly important in providing this early alert, as well as in providing valuable evidence for the investigation. Among these actions are the following: The physician who encountered the first human cases at the local hospital in Queens reported the unusual cluster of illnesses to local public health officials. Such occurrences could easily go unreported, if, for example, the physician does not consider the circumstances to be unusual enough to report or does not recognize a rare disease. Epidemiologists and staff at the New York City Department of Health took a number of actions that were essential to containing the outbreak. They quickly investigated and recognized the potential significance of the initial case reports. Their interviews with patients and families identified common features in how the patients were exposed out of doors, suggesting that a mosquito-borne disease might be involved. 
They canvassed all New York City area hospitals to identify potential cases, and throughout their investigation, they remained in daily touch with the many local, state, and federal officials who had quickly become involved. These staff members said previous planning for bioterrorism response in place at the city health department was key to the success of the Department’s response. Autopsies were performed on the victims. The New York City Department of Health and Office of Chief Medical Examiner worked together to ensure that autopsies were performed on any fatal case of encephalitis. Autopsies were performed on over 25 fatal cases initially suspected of having viral encephalitis, including all 4 fatal cases of West Nile encephalitis that occurred among city residents. According to one assessment of the response, information obtained from the autopsies pointed to a flavivirus as the cause and helped guide subsequent laboratory testing. Autopsy rates nationally have been decreasing, at a time when public health officials believe they should increase to help detect infectious diseases. The decline has been influenced by such factors as costs and jurisdictional and authorization uncertainties. While the West Nile outbreak was identified more quickly than otherwise might have been expected because an astute physician reported two unusual cases, it still provides evidence that the reporting system could be improved. The virus might have been identified earlier—perhaps by a week according to an involved official—if case reporting had been better and if good baseline data showing past trends of encephalitis and related diseases had been available. Similarly, a physician we interviewed who had treated West Nile patients said clinicians often do not know whom to call when a cluster of patients with a disease of unknown origin is noticed. 
Wildlife and zoo officials also indicated that within their fields there is a need for better information and guidance about whom to contact in the public health community when an outbreak is suspected. These problems have been noted in other instances besides the West Nile outbreak. For example, a November 1998 workshop on public health systems and emerging infections sponsored by the Institute of Medicine—an organization chartered by the National Academy of Sciences to examine public health policy matters—reported that physicians are not sure when or where to report suspicious cases of infection. The workshop also reported that physicians are unaware of the need to collect and forward clinical specimens for laboratory analysis and may not be educated regarding the criteria used to launch a public health investigation. Unlike the case in New York City, where the health department had been actively communicating with physicians, the workshop found that there is often a lack of communication between public health agencies and community physicians. A 1999 assessment by the Institute found that disease surveillance systems in place at local, state, and federal levels rely on systems of disease reporting from health providers that are notorious for their poor sensitivity, lack of timeliness, and minimal coverage. Because an effective medical response to a bioterrorist event would depend in part on the ability of individual clinicians to identify, accurately diagnose, and effectively treat diseases (including many that may be uncommon), the Institute reported that education about the threat posed by bioterrorism and about the diagnosis and treatment of various agents deserves priority. Although this outbreak was relatively small in terms of the number of human cases, it taxed the resources of one of the nation’s largest local health departments. 
The strain on resources is particularly noteworthy because local health departments in the United States have initiated nearly all the investigations that led to the recognition of infectious disease outbreaks. At the time of the West Nile outbreak, the New York City Department of Health had a unit of about eight people responsible for surveillance and case investigations related to over 50 reportable infectious diseases. Officials told us that having even this small number of trained staff available was critical to the quick response to the initial outbreak. Once the outbreak was identified, these and other staff assigned to help from other agencies and departments worked long hours, seven days a week. We reported in 1999 that surveillance for important emerging infectious diseases is not comprehensive in all states, leaving gaps in the nation’s surveillance network. Many state epidemiologists reported inadequate staffing for generating and using laboratory data—often considered more reliable for case investigation purposes than physician-reported data—for performing infectious disease surveillance. The Institute of Medicine workshop reported that, in general, epidemiological investigations and surveillance efforts are challenged by a variety of factors. These include changes in the health care system and the continuing use of paper-based disease-reporting systems in many locations, where surveillance is consequently sporadic and inadequate. Experts consider rapid and reliable communication among public health agencies to be essential to bioterrorism preparedness and coordination. Timely dissemination of information allows public health officials to make decisions with the most current information available. 
During the West Nile outbreak, however, officials indicated that the lack of leadership in the initial stages of the outbreak and the lack of sufficient and secure channels for communication among the large number of agencies involved prevented them from sharing information efficiently. Many officials interviewed pointed to the lack of clear reporting guidelines as one source of confusion. Knowing who was in charge or could act as an agency spokesperson, and which agency was responsible for what, would have allowed each agency to operate more effectively. Some officials suggested that each agency should have one “point person” overseeing operations and the flow of information. During the outbreak, local, state, and federal officials held daily conference calls coordinated by the City or State Department of Health, or CDC. During these calls, officials received up-to-date information on such topics as the human and animal surveillance systems, test results from each laboratory, and schedules for mosquito spraying. While these calls were considered necessary to ensure that all parties heard the same information, they sometimes involved over 100 people and lasted 2 hours or more. As a result, key officials had less time to investigate the outbreak in the laboratory and in the field. Additionally, veterinary health officials were concerned because they were not always included in these calls. While a secure electronic communication network was in place at the time of the initial outbreak, not all involved agencies and officials were using it at the time. For example, because CDC’s laboratory was not linked to the New York State network, the New York State Department of Health had to act as an intermediary in sharing CDC’s laboratory test results with local health departments. CDC and the New York State Department of Health laboratory databases were not linked to the database in New York City, and laboratory results consequently had to be manually entered there. 
Physicians, local health departments, and laboratory officials indicated that during the outbreak, it was sometimes difficult to determine the status of patients’ samples and of the laboratory results. During and since the outbreak, however, officials indicated that the use and utility of the network have improved for West Nile surveillance and information sharing. Using the network, the state has put together an interactive surveillance system for mosquito, bird, and human disease reports. Since the fall of 1999, access to the network has been provided to more health officials, including animal health agencies, for tracking West Nile in animals and humans. The communication limitations during the outbreak, the resulting changes to the electronic network capabilities, and the increased reliance on the network for sharing information have increased awareness of the need for established electronic data-sharing mechanisms. New York State officials told us that the state has invested heavily in its communication infrastructure and has created an advanced information system, but at a national level some local health departments still do not have access to modern communication technologies. A 1999 survey by the National Association of County and City Health Officials found that one-third of health departments serving fewer than 25,000 people did not have access to the Internet or electronic mail. Similarly, more than half the agencies surveyed had neither continuous, high-speed access to the Internet nor broadcast facsimile transmission capabilities. The West Nile events illustrate the value of communication between public and animal health communities, the latter including those dealing with domestic animals, wildlife, and other animals such as zoo animals. Many infectious diseases, including West Nile, are zoonotic, that is, capable of infecting both animals and people. 
According to recent research, approximately three of every four emerging infectious diseases reach humans through animals. Of over 1,700 known pathogens affecting humans, including viruses and bacteria, 49 percent are zoonotic. Of the 156 pathogens associated with emerging diseases, 73 percent are zoonotic. Many of the viruses or other pathogens considered most likely by CDC to be used in a bioterrorist incident are zoonotic, such as anthrax, plague, brucellosis, tularemia, and the equine encephalitic viruses. An official of the USGS National Wildlife Center noted that many zoonotic pathogens become established in wildlife before they are transmitted to humans and domestic animals. The November 1998 Institute of Medicine workshop reported that, because of their familiarity with a number of these biological agents, the veterinary medicine community should not be overlooked in surveillance efforts. Moreover, veterinarians and veterinary laboratory workers are likely to have been vaccinated against many zoonotic diseases and are used to working with zoonotic pathogens. The West Nile outbreak shows how domestic, wild, and zoo animals can be considered “sentinels,” providing an early warning device for diseases that can harm people. Even for a deliberate biological attack, animals may be the first victims, unintentionally or as part of an effort to avoid discovery, according to the Institute of Medicine and National Research Council. In the case of the West Nile outbreak, USDA and USGS National Wildlife Center laboratories were involved in early or mid-September in testing bird samples prior to the identification of the West Nile virus. However, because these laboratories lacked reagents for the virus, they were unable at the time to specifically identify it. Experience with the West Nile outbreak also illustrates how links between the animal and public health communities were missing. 
For example, some key public health officials, such as the city health department’s Director of the Bureau of Communicable Disease, indicated that they were not aware of the similarities in the clinical symptoms occurring in the birds and humans until many days or weeks after the human outbreak began. Officials said they believe that communication was hindered even further because, even within the animal health community, there is fragmentation at the state and federal level in what agencies are responsible for different types of animals. For example, domestic animals, such as cats and dogs, are usually the responsibility of state and local health departments. Livestock, such as cattle and swine, are often the responsibility of state agricultural agencies. Wildlife, such as birds, are under the state environmental or wildlife agencies. When wildlife health officials approached the state public health laboratory to test the bird samples, they were told their samples should be tested at another laboratory, because the state laboratory did not have the reagents to perform animal (bird) testing. According to a New York State animal health official, not having adequate capacity within the state laboratory to test animal samples can create administrative and cost barriers to getting samples tested. For example, many veterinary laboratories will test samples only on a fee basis and not for public health purposes. In some areas of the country, such as the Southwest, where zoonotic diseases such as hantavirus are endemic in the animal population, integration of the animal and public health communities is considered to be better. Several persons involved in the outbreak commented that the zoo community is currently left out of the animal and public health paradigm, even though zoo animals may be useful sentinels. 
Zoo animals generally receive close attention from veterinarians, and in some cases pathologists track health care and disease causes, creating detailed health records and storing tissue samples for future analysis. Officials indicated that because zoo animals are not considered to be wildlife or domestic animals, they do not fall within the jurisdiction of animal health agencies such as the USGS, which tracks wildlife issues, or the USDA, which tracks concerns related to domestic animals. The Bronx Zoo pathologist tried many different channels in order to find laboratories willing to prioritize performing additional tests on the bird samples and to provide advice on needed safety precautions for zoo laboratory personnel working with the bird samples. Many officials provided other examples of where communication between public and animal health communities had not worked well and indicated that the West Nile events pointed to a need for better partnership between these communities. This opinion was voiced even by those who at first disregarded animal health officials’ views and questions about the potential links between the animal and the human outbreaks. For example, in its own internal assessment of the West Nile events, CDC concluded that the relationships between public health agencies at the federal, state, and local levels and their counterparts in public and private agencies that monitor veterinary health should be strengthened. There are indications that some of this greater collaboration has begun. Since the outbreak, archived blood samples from zoo animals drawn in past years have been analyzed as part of the ongoing investigation to determine when and how West Nile was introduced. Another frequently cited lesson was the need for improved laboratory infrastructure and technologies for responding to outbreaks and newly emerging viruses. 
While the concerns were wide-ranging, three common themes emerged: broadening laboratory capabilities, ensuring adequate staffing and expertise, and improving the ability to deal with work surges in testing needs. Since the extent to which public health and other laboratories across the country are capable of safely testing dangerous pathogens is unknown, a first step in addressing these concerns may be to complete assessments of inventory and core capacity needs. At the same time, lessons from the West Nile outbreak point to the need to improve current linkages among laboratories. The need for enhanced laboratory capabilities was frequently mentioned by officials involved in the West Nile outbreak, as well as in various assessments. Officials pointed out the need for more laboratory capacity for identifying and handling infectious agents of high concern to human health, particularly emerging or exotic ones. For example, they said that at the time of the outbreak, only two or three laboratories in the country had the reagents necessary to identify the West Nile virus. One of these was CDC’s laboratory in Fort Collins, which did not initially use this reagent since the first test it had performed was consistent with the related St. Louis encephalitis virus. Because New York State’s laboratory was considered less equipped to perform the diagnostic testing on the human samples once the outbreak was identified, CDC performed the bulk of these tests. In this regard, the need to “expect the unexpected,” a phrase frequently quoted in outbreak assessments, expresses the importance of developing a broader awareness within the laboratories of the potential for new agents to appear and, concurrent with such awareness, developing broader testing capacity. 
One federal laboratory official suggested, for example, that federal policy should consider a broader dissemination of methods for identifying more exotic pathogens—perhaps those pathogens that are more likely to be introduced to the country through international travel or otherwise. Some bioterrorism and public health officials noted that, while expansion of laboratory capacity is vital to preparedness, efforts to identify more exotic agents may be beyond the scope of all but the largest health departments and therefore should be a regional or state-based activity. Consequently, some experts have suggested research into determining the utility of developing a network of regional laboratories capable of rapid diagnostic testing. Determining current capacity will be a key first step in assessing the need for a regional network. Currently, the number of public health and other laboratories that can handle those viruses considered most harmful (those classified as requiring Biosafety Level-3 or Biosafety Level-4 equipment, trained staff, and safety procedures in place) is unknown. CDC information indicates that most states lack the public health laboratory capacity to handle many of those viruses that CDC has classified as dangerous and identified as high priority because of risk to national security and public health. Specifically, in fiscal year 1999, less than half of the over 40 states and localities receiving funding for laboratory capacity through CDC’s bioterrorism preparedness grant program reported having advanced capacity for rapid testing for at least four critical biologic agents. Within the veterinary community, a USDA official told us that probably fewer than 20 veterinary laboratories across the country have the capacity to test for Biosafety Level-3 pathogens, and no veterinary laboratories have Biosafety Level-4 capacity. 
Several officials commented on the declining capacity and expertise within the federal and state public health laboratory infrastructure, particularly as it relates to zoonotic and vector-borne diseases. At the time of the outbreak the Fort Collins laboratory capacity was considered to be low, and many needed specialist positions had been eliminated or left vacant as experienced staff had left. Similarly, CDC reported that only a few states and even fewer local health departments have trained personnel or the resources to adequately address vector-borne diseases. According to CDC and other officials, the infrastructure of laboratories with the capacity to handle such diseases has deteriorated in recent decades. The number of laboratories and extent of capacity have dropped, and the staffing, physical plant, and financial support of many remaining laboratories have also been affected. New York State, prior to the outbreak, lacked the capacity to address vector-borne diseases. A New York State laboratory official indicated that at one time the state had 5 or 6 staff to perform mosquito surveillance to track viruses. In recent years the laboratory’s staff had been cut back as funding was diverted to other public health priorities. By contrast, Connecticut officials indicated that they had—after a similar encounter with eastern equine encephalitis, another mosquito-borne virus—instituted mosquito surveillance in 1997, at a cost of about $200,000. Because of its ongoing surveillance program, the state was able to quickly respond to the outbreak, placing mosquito-monitoring devices in potentially infected areas and identifying the appropriate places to spray. According to a program official, having baseline data—for example, data on where most mosquitoes of concern resided in previous years—allowed the state to make informed decisions about where to spray. 
Testing for West Nile taxed those parts of the laboratory system that were dealing with the outbreak—and in some ways, affected what some of these laboratories were normally expected to do. The New York laboratory that was testing samples for St. Louis encephalitis was also dealing with an outbreak of Escherichia coli O157:H7 at the same time. Both the New York State and CDC Division of Vector-Borne Infectious Disease laboratories were quickly inundated with requests for tests, and because of the limited capacity at the New York laboratories, the CDC laboratory handled the bulk of the testing. CDC officials reported that nearly all the Fort Collins Arbovirus Disease Branch laboratory staff at one point was working on the response to the virus. Normally, the CDC laboratory functions as a reference laboratory for arboviruses, maintaining the technology and capability to accurately diagnose viruses of this type. In this case, it was acting largely as a diagnostic laboratory, testing patient samples to determine who had the virus and who did not. Officials indicated that the CDC laboratory would have been unable to respond to another outbreak, had one occurred at the same time. Some officials also described what were considered to be unfortunate aspects of CDC’s taking on the role of the diagnostic laboratory. Typically, the CDC laboratory’s role would be to confirm test results rather than to perform diagnostic testing. In this case, in assisting the state in performing the diagnostic testing, CDC focused on determining whether individual patients had St. Louis encephalitis (and then West Nile) rather than identifying other possible causes of illness. This was considered by some to be unfortunate from the standpoint of the individual patients, whose diagnoses could therefore be delayed. Testing at the state laboratory of samples from 95 patients with suspected viral infections found 16, or about 17 percent, of the patients positive for viruses other than West Nile. 
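The state-laboratory figure just cited is easy to verify. A minimal check, using only the two numbers given in the report, confirms that 16 of 95 patients corresponds to the "about 17 percent" stated above:

```python
# Of 95 patients with suspected viral infections tested at the state
# laboratory, 16 were positive for viruses other than West Nile.
patients_tested = 95
other_virus_positives = 16
share = other_virus_positives / patients_tested
print(f"{share:.1%} of tested patients had a virus other than West Nile")
# prints: 16.8% of tested patients had a virus other than West Nile
```

That roughly one in six patients carried a different virus underscores the diagnostic-delay concern: a laboratory focused solely on confirming the suspected outbreak agent can leave a substantial share of patients without a diagnosis.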
Improving the laboratory network is key to improving laboratory capacity to respond to surges in workload and to provide the new technologies, staff, and expertise needed to respond to outbreaks. Networks or linkages among federal, state, academic, and possibly private sector laboratories may also be needed, in part to clarify the responsibilities of involved laboratories for providing surge capacity, diagnostic testing, and other critical roles in emergency situations. CDC's internal investigation concluded that the agency should enlist help from academic laboratories. The California researcher who conducted some of the diagnostic laboratory work on the West Nile outbreak was brought in by officials at New York State's Department of Health because they learned of the innovative research his laboratory was developing to quickly and accurately identify the viral causes of unexplained deaths from encephalitis. Some involved officials indicated that the California laboratory's involvement was fortuitous in allowing a laboratory not consumed with diagnostic testing for the outbreak to focus on performing the types of tests required to eventually identify the virus. On the other hand, some officials also indicated that this laboratory's unplanned involvement contributed to confusion about which laboratories were performing tests and the types of tests being performed. Those involved in responding to the West Nile outbreak have concluded that with a more formal network and clearer roles, the tests necessary to accurately identify the virus could have been started sooner, and the resulting confusion about which federal and other laboratories were involved in the process and the tests each laboratory was performing could have been avoided or minimized. However, while many agree that more should be done to develop the laboratory network, the plans for such a network are still being developed.
CDC’s planned laboratory response network for bioterrorism—linking public health laboratories at the local, state, and federal levels—is still under development. Private, veterinary, and USDA laboratories are not yet part of the network. Assessments of the public health infrastructure by public health experts, as well as CDC's strategic plan for preventing emerging infectious diseases, also point out the need for defining and building the laboratory network. The Institute of Medicine workshop that assessed the capabilities of the public and private sectors for identifying emerging infections reported that surge capacity in response to an outbreak is an area in which the public health laboratory should define its core capability and standards, including the unique and complementary roles of the public and private sector laboratories. CDC's strategic plan has a goal to strengthen the public health infrastructure in part by strengthening CDC's capacity to serve as the national and international reference laboratory for the diagnosis of infectious diseases. Finally, the outbreak and surrounding events illustrate the challenges inherent in distinguishing a bioterrorist event from a natural outbreak. In October 1999, a media report suggested that the outbreak could have had an unnatural origin. The Central Intelligence Agency examined the allegations and concluded that there was no evidence indicating that the outbreak was caused intentionally. The report of the possibility of a bioterrorist event, and the difficulties in correctly identifying the virus and its source, highlight how hard it can be to determine whether an outbreak has an unnatural origin. While the actual response to the West Nile virus outbreak might not have been significantly different had it been considered a potential bioterrorist act, such an event would require the involvement of additional organizations to carry out a criminal investigation.
CDC’s current recommended protocols are to notify the Federal Bureau of Investigation and law enforcement officials, who would also seek to determine whether terrorists had targeted additional locations for release of the pathogen. The need to involve these agencies may not be evident at the start. An HHS Office of Emergency Preparedness official indicated that an investigation of a real bioterrorism attack may start as an emerging infectious disease outbreak investigation that finds that the cause was terrorism. It is difficult to establish specific criteria for reporting an outbreak as suspicious, but officials indicated that improved reporting criteria may be needed. The West Nile investigation is not the only incident that has illustrated the difficulties of determining whether an outbreak was intentionally caused. According to the Federal Bureau of Investigation, there has been only one act of terrorism in the United States in which a biological agent was used, and in this case the deliberate cause was not known until long after the outbreak had passed. The event occurred in September 1984, when 751 persons in Oregon became ill with gastroenteritis, an inflammation of the stomach and intestines. The local health department, with assistance from CDC, discovered that food at salad bars had been contaminated with Salmonella typhimurium. More than a year later, the Federal Bureau of Investigation learned through a former member of a religious cult that the cult had used the Salmonella to contaminate the food. The West Nile outbreak may also illustrate the importance of improving our understanding of the causes of unexplained deaths of previously healthy people. Currently, much is unknown about the specific pathogens that cause the deaths of Americans from suspected infectious diseases. Most of the specific causes of encephalitis are undiagnosed.
From the point of view of improving surveillance for acts of bioterrorism, a key may be in improving the ability to identify the causative agent in any case where the disease is serious and unusual. One effort toward this end is CDC's unexplained death project. The project—the focus of the Albany conference at which the academic laboratory at the University of California at Irvine was asked to use innovative techniques to test the human samples—aims to improve CDC's capacity to rapidly identify the cause of unexplained deaths or critical illness, and to improve understanding of the causes of specific infectious disease syndromes for which a cause is often not found. Finally, the outbreak and surrounding events support public health officials' views that bioterrorism preparedness rests in large part on the soundness and preparedness of the public health infrastructure for detecting any disease and the causes of disease outbreaks. An important public health responsibility in any disease outbreak is to identify the agent and source of the disease as part of the process to determine how to prevent it from spreading further. From the public health standpoint, whether an outbreak is natural or artificial may be of little significance, although the political or legal ramifications may be large. Bioterrorism preparedness officials aware of the West Nile outbreak and investigation indicated that because the local public health officials were taking appropriate steps to identify the spread and source of the disease, the proper steps were under way for determining whether the source or origin should be considered suspicious. Appendix III contains a bibliography of selected assessments and reports that relate to the public health infrastructure and bioterrorism preparedness.
The sudden appearance of West Nile virus in this hemisphere is a clear illustration of the often-repeated need to “expect the unexpected.” Much of the initial response was based on typical steps for identifying and responding to diseases that occasionally break out in the United States. From that standpoint, the correct public health agencies were involved and the response was timely and appropriate. In this case, however, critical information and clues pointing to a newly emerging virus were discounted early on, only to reemerge later. Persistence, coupled with the significant contributions of additional laboratories, investigators, and researchers, produced the additional evidence leading to the final identification of West Nile as the cause of the outbreak. However, as more agencies became involved, coordination with those already involved in the investigation was not always effective, and communication became more difficult. How can this incident be translated into increasing the likelihood that the public health network can detect similar threats and then identify and contain them more effectively in the future? The public health community is doing a great deal to respond—both to this particular outbreak, which continues to unfold, and to the larger set of concerns it raised. The lessons we identified are, to some extent, already part of that ongoing effort. These lessons support the view of many that “an outbreak is an outbreak is an outbreak”—that is, whether an outbreak is intentional or natural, the public health response of determining the causes and containing its spread will be the same. Thus, policies and actions that improve the capabilities of the public health infrastructure—including those that improve the animal health infrastructure—do more than help the nation better prepare for a potential bioterrorist event.
These same improvements will also increase our ability to detect and contain the more likely sort of outbreak that starts with a global traveler, a wayward mosquito, or a migrating bird. We provided a copy of the draft report to CDC, USDA, and New York City and State Department of Health officials for comment. CDC and the New York City Department of Health provided written comments, which appear in Appendices IV and V. USDA and the New York State Department of Health also provided comments, which are summarized below. Generally, officials agreed with the lessons and conclusions drawn from experience that are presented in the report. The commenting agencies also offered several observations on various aspects of the report draft. CDC said that its strategic plans for emerging infectious diseases and bioterrorism should be mentioned in the report, and we have done so. CDC expressed a concern with the emphasis in the draft on those aspects that did not go as well as others. Because this report was designed to analyze the events of the fall of 1999 and identify lessons learned for the nation's preparedness, it necessarily focused on those things that were perceived as problems at that time. CDC also expressed a concern that the report overemphasized the role of the convergence of the human and animal investigations, because laboratory tests conducted by the California researcher and others on the human side were also showing, at the same time as the animal tests, that the virus was not the one initially identified as the cause. We agree that these contributions were significant, and we made clarifications to the text to recognize them. Nonetheless, we continue to believe that the information from the animal investigations was critical to the timing of the final accurate diagnosis.
New York City Department of Health officials highlighted as important the points in the draft discussing the importance of effective disease surveillance, the need for better communication among public health agencies, and particularly the need for better communication within and among the public and animal health communities. USDA officials indicated that increased emphasis is warranted on the importance not only of public health preparedness, but also of animal health preparedness. Several New York State Department of Health officials and all of the agencies mentioned above provided technical comments, which were incorporated where appropriate. We also provided relevant excerpts of the draft report to officials from the Bronx Zoo, State of Connecticut, New York State Department of Environmental Conservation, USGS, and University of California at Irvine for technical review, and their comments were incorporated in the draft where appropriate. We are sending copies of this letter to Donna E. Shalala, Secretary of Health and Human Services; Daniel R. Glickman, Secretary of Agriculture; Jeffrey Koplan, M.D., Director of CDC; and other interested officials. This work was performed under the direction of Marcia Crosse, Assistant Director. Other major contributors are Rob Ball, Katherine Iritani, Anita Kay, Deborah Miller, and Stan Stenerson. Please contact me at (202) 512-7119 if you or your staff have any questions. We interviewed officials in the public and private sectors at the national, state, and local levels, and, to the extent it was made available to us, we obtained relevant documentation from them. With this information, we developed a chronology and compiled a list of lessons learned from the West Nile virus outbreak. To some extent, the chronology was based on officials' recollections of the specific events occurring on particular dates.
When information provided by agencies or officials was inconsistent, we assessed its relevance to our reporting objectives, sought any needed corroboration from other involved officials, and incorporated the information accordingly. Officials and agencies contacted included the following:

U.S. Department of Health and Human Services, Office of Emergency Preparedness
Centers for Disease Control and Prevention, National Center for Infectious Diseases: Division of Vector-Borne Infectious Diseases, Division of Viral and Rickettsial Diseases, Division of Bacterial and Mycotic Diseases
Central Intelligence Agency
U.S. Department of Agriculture, Animal and Plant Health Inspection Service
U.S. Geological Survey, National Wildlife Health Center
U.S. Army Medical Research Institute of Infectious Diseases
New York State Department of Health
New York State Department of Environmental Conservation
New York City Department of Health
New York City Commissioner's Office of Emergency Management
Wildlife Conservation Society/Bronx Zoo
Flushing Hospital Medical Center
Connecticut Agricultural Experiment Station
University of California at Irvine
Association of Public Health Laboratories
National Association of County and City Health Officials
A ProMED moderator active during the initial outbreak

To gather background information and relevant literature on the West Nile outbreak, West Nile virus, and surveillance activities put in place since the outbreak, we searched academic journals and news media and performed an extensive review of publications related to the virus. We performed a similar review to identify reports and literature related to the preparedness of the public health infrastructure for a bioterrorist event. We also reviewed assessments of the response to the West Nile outbreak prepared by various agencies. These assessments both describe the views of these agencies on lessons learned and outline the steps they have taken and policies they have implemented since the initial outbreak.
Time is of the essence in responding to an outbreak of an infectious disease. When the cause of an outbreak is unknown, it is much more difficult to respond quickly and effectively. As can be seen in the following chronological table of events, the key to rapidly identifying and responding to the West Nile virus outbreak lay in merging efforts and information from separate investigations of outbreaks in animals and humans. At the same time, as the number of participants increased, so did the complexity and difficulty of communication and coordination. Looking back on the outbreak of the fall of 1999 provides an opportunity not only to review the significant investigative and laboratory work of a myriad of participants and the contributions of each toward the final diagnosis of the virus, but also to analyze the communications and actions of the responding government agencies in order to improve the nation's preparedness for future outbreaks, including ones not due to natural causes. Table 2 provides a detailed chronology of significant actions and events.

Centers for Disease Control and Prevention. Expecting the Unexpected: Lessons from the 1999 West Nile Encephalitis Outbreak. Atlanta, Ga.: Centers for Disease Control and Prevention, July 2000.

-----. Epidemic/Epizootic West Nile Virus in the United States: Guidelines for Surveillance, Prevention, and Control. Atlanta, Ga.: Centers for Disease Control and Prevention, March 2000.

New York State Department of Health. New York State West Nile Virus Response Plan. Albany, N.Y.: New York State Department of Health, May 2000.

Wildlife Conservation Society. Proceedings of the West Nile Virus Action Workshop. New York, N.Y.: Wildlife Conservation Society, Jan. 19-21, 2000.

Anderson, J.F., T.G. Andreadis, C.R. Vossbrinck, and others. "Isolation of West Nile Virus From Mosquitoes, Crows, and a Cooper's Hawk in Connecticut." Science, Vol. 286, No. 5448 (Dec. 17, 1999), p. 2331.

Nolen, R.S.
“Veterinarians Key to Discovering Outbreak of Exotic Encephalitis.” Journal of the American Veterinary Medical Association, http://www.avma.org/onlnews/javma/nov99/s111599a.asp (cited Nov. 15, 1999).

Steele, K.E., M.J. Linn, R.J. Schoepp, N. Komar, T.W. Geisbert, R.M. Manduca, P.P. Calle, B.L. Raphael, T.L. Clippinger, T. Larsen, J. Smith, R.S. Lanciotti, N.A. Panella, and T.S. McNamara. "Pathology of Fatal West Nile Virus Infections in Native and Exotic Birds During the 1999 Outbreak in New York City, New York." Veterinary Pathology, Vol. 37 (May 3, 2000), pp. 208-24.

Asnis, D., R. Conetta, A. Teixeira, and others. "The West Nile Virus Outbreak of 1999 in New York: The Flushing Hospital Experience." Clinical Infectious Diseases, Vol. 30 (Feb. 29, 2000), pp. 413-18.

Centers for Disease Control and Prevention. "Outbreak of West Nile-Like Viral Encephalitis—New York, 1999." Morbidity and Mortality Weekly Report, Vol. 48, No. 38 (Oct. 1, 1999), pp. 845-49.

-----. "Update: West Nile-like Viral Encephalitis—New York, 1999." Morbidity and Mortality Weekly Report, Vol. 48, No. 39 (Oct. 8, 1999), pp. 890-92.

Cheng, G.S. "West Nile Virus: Physician Reports Will Be Crucial." Family Practice News, Vol. 30, No. 1 (Jan. 1, 2000), p. 12.

"Exotic Diseases Close to Home." Editorial, The Lancet, Vol. 354, No. 9186 (Oct. 9, 1999), p. 1221.

Briese, T., J. Xi-Yu, C. Huang, L.J. Grady, and I.W. Lipkin. "Identification of a Kunjin/West Nile-like Flavivirus in Brains of Patients With New York Encephalitis" (Letter). The Lancet, Vol. 354, No. 9186 (Oct. 9, 1999), pp. 1261-62.

Enserink, M. "Groups Race to Sequence and Identify New York Virus." Science, Vol. 286, No. 5438 (Oct. 8, 1999), p. 206.

-----. "New York's Lethal Virus Comes From Middle East, DNA Suggests." Science, Vol. 286, No. 5444 (Nov. 19, 1999), p. 1450.

Lanciotti, R.S., J.T. Roehrig, V. Deubel, and others. "Origin of the West Nile Virus Responsible for an Outbreak of Encephalitis in the Northeastern United States." Science, Vol.
286, No. 5448 (Dec. 17, 1999), p. 2333.

Shieh, W.J., J. Guarner, M. Layton, A. Fine, J. Miller, D. Nash, G.L. Campbell, J.T. Roehrig, D.J. Gubler, and S.R. Zaki. "The Role of Pathology in an Investigation of an Outbreak of West Nile Encephalitis in New York, 1999." Emerging Infectious Diseases, Vol. 6, No. 4 (May-June 2000), pp. 370-72.

Smithburn, K.C., T.P. Hughes, A.W. Burke, and J.H. Paul. "A Neurotropic Virus Isolated From the Blood of a Native of Uganda." American Journal of Tropical Medicine and Hygiene, Vol. 20 (1940), p. 471.

Tsai, T.F., F. Popovici, G.L. Cernescu, and N.I. Nedelcu. "West Nile Encephalitis Epidemic in Southeastern Romania." The Lancet, Vol. 352 (Sep. 5, 1998), pp. 767-71.

"West Nile Virus Similar to Israel '98 Virus." Family Practice News, Vol. 30, No. 1 (Jan. 1, 2000), p. 12.

Holloway, M. "Outbreak Not Contained." Scientific American, Vol. 282 (April 2000), pp. 20-22.

Moran, M. "West Nile Outbreak Sends Wake-up Call for Surveillance." American Medical News, Vol. 43, No. 3 (Jan. 24, 2000), p. 1.

Centers for Disease Control and Prevention. Preventing Emerging Infectious Diseases: A Strategy for the 21st Century. Atlanta, Ga.: U.S. Department of Health and Human Services, 1998.

Centers for Disease Control and Prevention, National Center for Infectious Diseases, Division of Vector-Borne Infectious Diseases. Guidelines for Arbovirus Surveillance Programs in the United States. Atlanta, Ga.: Centers for Disease Control and Prevention, April 1993.

Public Health Service. Addressing Emerging Infectious Disease Threats: A Prevention Strategy for the United States. Atlanta, Ga.: U.S. Department of Health and Human Services, 1994.

Schoch-Spana, M. "A West Nile Virus Post-Mortem." Biodefense Quarterly, Vol. 1, No. 3, www.hopkins-biodefense.org/pages/news/quarter1_3.html (cited Dec. 1999).

Advisory Panel to Assess Domestic Response Capabilities for Terrorism Involving Weapons of Mass Destruction.
“Excerpts, First Annual Report to the President and the Congress.” www.rand.org/organization/nsrd/terrpanel/terror.pdf (cited Dec. 15, 1999).

Centers for Disease Control and Prevention. "Biological and Chemical Terrorism: Strategic Plan for Preparedness and Response: Recommendations of the CDC Strategic Planning Workgroup." Morbidity and Mortality Weekly Report, Vol. 49, No. RR-4 (April 21, 2000).

Ember, L. "News Focus/Bioterrorism: Combating the Threat." Chemical and Engineering News, Vol. 77, No. 27 (July 5, 1999), pp. 8-17.

Institute of Medicine. Chemical and Biological Terrorism: Research and Development to Improve Civilian and Medical Response. Washington, D.C.: National Academy Press, 1999.

Lasker Charitable Trust. "Bioterrorism/Domestic Preparedness Suffers From Neglect of Public Health Infrastructure." www.laskerfoundation.org/fundingfirst (cited Sept. 16, 1999).

McDade, J.E. "Addressing the Potential Threat of Bioterrorism—Value Added to an Improved Public Health Infrastructure." Emerging Infectious Diseases, Vol. 5, No. 4 (July-Aug. 2000), pp. 591-92.

National Intelligence Council. "The Global Infectious Disease Threat and Its Implications for the United States." National Intelligence Estimate 99-17D (Jan. 2000).

Novick, L. (ed.). Journal of Public Health and Management Practice (July 2000).
Pursuant to a congressional request, GAO provided information on the West Nile virus outbreak, focusing on: (1) establishing a thorough chronological account of the significant events and communications that occurred, from doctors and others who first saw the symptoms and from the officials mounting a response; and (2) identifying lessons learned for public health and bioterrorism preparedness. GAO noted that: (1) the analysis of the West Nile virus outbreak began as two separate investigations--one of sick people, the other of dying birds; (2) on the human side, the investigation began quickly after a physician at a local hospital reported the first cases, and the original diagnosis, while incorrect, led to prompt mosquito control actions by New York City officials; (3) the ongoing investigation involved the combined efforts of many people in public health agencies and research laboratories at all levels of government; (4) a consensus that the bird and human outbreaks were linked, which was a key to identifying the correct source, took time to develop and was initially dismissed by many involved in the investigation; (5) when the bird and human investigations converged several weeks after initial diagnosis, and after laboratory research was launched independently by several of the participants to explore other possible causes, the link was made and the virus was correctly identified; (6) there are several key lessons that emerged from the investigation and response to this outbreak; (7) the local disease surveillance and response system is critical; (8) in this outbreak, many aspects of the local surveillance system worked well, in that the outbreak was quickly spotted and immediately investigated; (9) assessments of the infrastructure for responding to outbreaks suggest that surveillance networks in many other locations may not be as well prepared; (10) better communication is needed among public health agencies; (11) as the investigation grew, lines of 
communication and decision-making were often unclear, and efforts to keep everyone informed were awkward; (12) links between public and animal health agencies are becoming more important; (13) the length of time it took to connect the bird and human outbreaks of the West Nile virus signals a need for better coordination among public and animal health agencies; (14) ensuring adequate laboratory capabilities is essential; (15) even though this was a relatively small outbreak, it strained resources for several months; (16) because a bioterrorist event could look like a natural outbreak, bioterrorism preparedness rests in large part on public health preparedness; and (17) the ensuing investigation and post-outbreak assessments illustrate the challenges in identifying the source of an outbreak, supporting public health officials' views that public health preparedness is a key element of bioterrorism preparedness.
Over 600 organizations develop, market, or sell offsets in the United States, and the market involves a wide range of participants, prices, transaction types, and projects. While the exact scope of the U.S. voluntary market is uncertain because of a lack of complete data, available information shows that the supply of offsets generated in the United States has increased by about 66 percent over the last 3 years, from about 6.2 million tons in 2004 to about 10.2 million tons in 2007. The federal government plays a small role in the U.S. market. While no single regulatory body oversees the market, FTC and EPA, among others, have undertaken some consumer protection and technical assistance efforts. In addition, certain federal entities participate in the market as providers and consumers. For example, the Forest Service works with a nonprofit partner that solicits donations to support forestry projects. A wide range of participants are involved in the U.S. voluntary market, including providers of different types of offsets, developers of quality assurance mechanisms, third party verifiers, and consumers who purchase offsets from domestic or international providers. According to available data, more than 600 entities are involved in the supply of offsets in the United States, including companies, governments, colleges and universities, and other organizations. Offset providers include project developers and intermediaries. We identified 210 offset providers of various types, including 87 U.S.-based providers. Project developers implement individual projects and may sell offsets directly to consumers or to intermediaries. Intermediaries are further subdivided into retailers, aggregators, and brokers, among other categories. Retailers generally sell smaller quantities of offsets to individuals or organizations. Aggregators, also known as wholesalers, sell in bulk and often own a portfolio of offsets. Brokers facilitate transactions between sellers and buyers. 
Providers obtain the rights to the offsets they sell in a number of ways, including developing their own projects or purchasing directly from project developers, sometimes through brokers. Other providers purchase and retire offsets through CCX on behalf of customers. Providers may also play multiple roles in the offset market. For example, a single company may develop projects, aggregate offsets from other projects or providers for resale, and sell offsets directly to consumers. In addition, other entities, including investment banks and other financial institutions, support the development of projects through financing. Quality assurance providers include those involved in activities such as verification and monitoring of offset projects, and the development of quality assurance mechanisms such as accounting standards for calculating offsets. Project developers may use a third party verifier to confirm that offsets generated by a project were accurately calculated. Once verified, the offset might then be recorded by another independent party in a registry to track its sale and ownership. Multiple registries operate in the United States to help market participants track the ownership and retirement of offsets, although not all offsets are listed on registries. A wide variety of consumers buy offsets, including individuals, businesses, nonprofits, governments, research institutions, universities, religious congregations, utilities, and other organizations. Consumers’ motivations for purchasing offsets may include corporate responsibility and public relations, among others. Consumers may purchase offsets to compensate for emissions that result from a variety of activities including flying, driving, and purchasing consumer products. Offsets sell on the market at a wide range of prices. In 2007, prices on the global voluntary market ranged from $1.83 per ton to about $306 per ton, with an average of about $6 per ton, according to one recent market study. 
We purchased offsets from 33 retail providers, both domestic and international, and prices ranged from about $5 per ton to about $31 per ton. CCX prices were at their lowest in 2004, at $0.79 per ton, but recently peaked at $7.40 per ton in June 2008. There are also different types of carbon offset transactions, including direct purchase and payment or donation in support of a service. The difference between these transactions is whether the offsets are sold as a commodity. In a direct purchase, consumers pay for the delivery of offsets as a commoditized economic good. Direct purchases may allow the consumer to evaluate the parameters of the offset project, including how verification and monitoring methodologies were employed to create the offset. When the transaction does not involve the exchange of a commodity, consumers pay or donate money to a provider to support the retirement of offsets or the development of new offset projects, but the consumer does not own an asset after the transaction has been completed. In this case, the payment or donation amounts to a promise by the provider to supply the service of purchasing offsets or supporting offset projects. Donations may be tax deductible, effectively reducing the cost of the carbon offset. Another key distinction involves the timing of an offset's creation. In cases where offsets are sold before they are produced, the quantity of offsets generated from projects can be calculated using what is known as ex-ante (or future value) accounting. On the other hand, when offsets are sold after they are produced, the quantity of offsets can be calculated using ex-post accounting. Using future value accounting, consumers may purchase an offset today, but it may take several years before the offset is generated. In addition to a range of participants, project developers generate offsets from different types of projects by either reducing emissions at the source or through sequestration.
Emission reduction projects involve either fossil fuel projects based on changes in energy production and use practices— such as energy efficiency, fuel switching, power plant upgrades, and certain renewable energy projects—or greenhouse gas destruction projects, including projects that capture and destroy methane from coal mines, landfills, and agricultural operations. Sequestration projects include biological sequestration projects that pull carbon dioxide out of the air by, for example, planting trees or enhancing the management of agricultural soils, and geological sequestration projects that capture and store carbon dioxide in underground formations. See figure 3 for a diagram of common types of carbon offset projects, and see appendix II for descriptions of offset project types. The U.S. voluntary market is part of an expanding global market, with an estimated 65 million tons sold in 2007, valued at approximately $337.3 million. Complete data on the volume of offsets traded in the United States are not available, and the market’s transparency is limited. Efforts to quantify and report on the voluntary carbon market have focused on the global market and include limited information focused solely on the United States. It is also difficult to separate out the U.S. portion of the global market because U.S. market participants buy and sell across domestic and international boundaries and transactions are private. However, according to one study, an estimated 23 percent of the volume sold in 2007 on the global market came from U.S. providers. While the exact scope of the U.S. voluntary carbon offset market is uncertain because of a lack of complete data, available information shows that the supply of offsets based in the United States is growing rapidly. In the last 3 years, the supply of offsets from projects based in the United States increased approximately 66 percent, from about 6.2 million tons in 2004 to about 10.2 million tons in 2007. 
By comparison, EPA data show that U.S. greenhouse gas emissions have averaged about 7 billion tons annually since 2000. In addition, in 2007, at least 211 projects produced offsets in the United States, as compared to 93 projects in 2004, an increase of about 125 percent. See figure 4 for data on the U.S. supply of offsets. Of the total U.S. offset supply in 2007, about 85 percent was generated from three categories of projects: methane, carbon capture and geological storage (CCGS), and biological sequestration. About 49 percent of U.S. supply was produced from projects that capture and destroy methane from coal mines, agricultural operations, or landfills. An additional 19 percent was produced from CCGS projects that capture emissions from industrial and energy-related emissions sources and then store these emissions in geologic formations. Also, 17 percent was produced from biological sequestration projects, including agricultural soil projects such as no-till farming and forestry projects. Figure 5 illustrates U.S. offset supply by project type in 2007. One factor influencing the quantity of offsets generated from a particular project is the type of greenhouse gas involved. This is because most greenhouse gases, including methane, have greater heat-trapping ability relative to carbon dioxide. Thus, the global warming potential of these greenhouse gases influences the volume of offsets generated. For example, reducing one ton of methane emissions has the same effect as decreasing 25 tons of carbon dioxide. Accordingly, projects that decrease gases with high global warming potential may be attractive from a developer’s perspective. Available data show that in 2007, 93 of the 211 projects that produced offsets in the United States were methane projects. Of these, 5 coal mine projects—2 percent of the total—accounted for 24 percent of the total volume generated in 2007. 
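The global warming potential arithmetic described above can be illustrated with a short calculation (the factor of 25 for methane is the figure cited in the text; the project tonnages are hypothetical):

```python
# Converting greenhouse gas reductions to carbon-dioxide-equivalent
# (CO2e) offset tons using global warming potential (GWP) factors.
GWP_CO2 = 1
GWP_METHANE = 25   # 1 ton of methane has the warming effect of 25 tons of CO2

def co2e_offsets(tons_reduced: float, gwp: float) -> float:
    """Offset tons (CO2e) generated by reducing `tons_reduced` tons of a gas."""
    return tons_reduced * gwp

# A hypothetical coal mine project destroying 1,000 tons of methane
# generates 25 times the offsets of a project avoiding 1,000 tons of CO2:
print(co2e_offsets(1_000, GWP_METHANE))  # 25000
print(co2e_offsets(1_000, GWP_CO2))      # 1000
```

This scale factor helps explain why a small number of methane projects can account for a large share of total offset volume.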
On the other hand, 62 biological sequestration projects (about 29 percent of the total) produced 17 percent of the supply. This includes 52 forestry projects that produced about 7 percent of total supply from U.S.-based projects. Table 1 presents U.S. project types by number, volume, and percentage of total supply in 2007. In the United States, projects are located in 40 states, but 34 percent of the supply in 2007 was produced by 14 projects in Texas and Virginia. Projects in these states include high-yielding projects such as coal mine methane projects. While California had the greatest number of projects in 2007, these 31 projects accounted for about 4 percent of the total supply. Figure 6 presents the volume and number of offset projects by state, and detailed data are provided in appendix III. While no single regulatory body has oversight of the U.S. voluntary carbon offset market as a whole, offset transactions are subject to applicable state fraud and consumer protection laws, which are generally enforced by each state’s attorney general. Certain federal entities provide some consumer protection and technical assistance efforts and also participate in the market as providers and consumers. The mission of the Commodity Futures Trading Commission is to protect market users and the public from fraud, manipulation, and abusive practices related to the sale of commodity and financial futures and options, and to foster open, competitive, and financially sound futures and option markets. The CFTC exercises limited oversight over the Chicago Climate Exchange due to its status as an Exempt Commercial Market (ECM), a category established under the Commodity Futures Modernization Act of 2000. Participants in such markets, in general terms, must be large, sophisticated traders. Moreover, ECMs are allowed to trade only exempt commodities. ECMs must abide by certain notification requirements and affirm annually that they continue to operate under the same parameters. 
The 2008 Farm Bill increases the CFTC’s oversight of ECM contracts that serve a significant price discovery function. The CFTC confirmed that CCX is eligible to operate as an ECM, but at this time, CCX’s contracts have not been determined by the CFTC to serve a significant price discovery function. In cases where contracts serve a significant price discovery function, ECMs must adhere to a number of core principles, including monitoring of trading and the submission of certain data to the CFTC. Generally, CCX operates with less oversight because participants in the market are experienced. However, if the CFTC receives complaints, it can take appropriate action.

Department of Agriculture, Forest Service

The mission of the Forest Service, an agency within the Department of Agriculture (USDA), is to sustain the health, diversity, and productivity of the nation’s forests and grasslands to meet the needs of present and future generations. The Forest Service works with a congressionally chartered nonprofit partner, the National Forest Foundation (NFF), to solicit donations to the Carbon Capital Fund, which provides financial support for carbon sequestration projects on lands managed by the Forest Service. The Carbon Capital Fund donations are invested in Forest Service reforestation projects to sequester carbon. According to the Forest Service, donations to the Carbon Capital Fund will be used to replant areas on national forests that have been damaged by wildfire and other natural disturbances and to demonstrate the role of forest carbon sequestration in addressing climate change. The Forest Service manages the reforestation projects and also selects the project sites by using a forest vegetation simulation model to estimate the amount of carbon that will be sequestered by prospective projects. NFF operates the fund and uses a private contractor to measure and verify offsets.
The first demonstration project was planned for the summer of 2008 on the Custer National Forest in Montana and South Dakota, and projects tentatively scheduled for the summer of 2009 will take place on the Plumas and San Bernardino National Forests in California. According to Forest Service and NFF officials, they offer no guarantees about the performance of Carbon Capital Fund projects. Donations to the fund do not transfer rights or ownership of offsets. USDA officials said that contributions to the fund are donations and do not create tradable offsets. These officials also said that donations to the Carbon Capital Fund would enable the Forest Service to plant trees, which would, in the long term, lead to carbon reductions. NFF said that it would notify donors if the forestry projects fail and that it plans to send documentation to donors, including pictures, when projects are complete. As of January 2008, a total of about $55,000 had been donated to the fund. Ten percent of the donations were set aside for third party verification and monitoring, according to NFF. USDA also encourages the use of consistent forestry and agriculture offset methodologies by working with market participants such as the Chicago Climate Exchange and state and regional programs. For example, USDA’s Natural Resources Conservation Service provided a $750,000 grant to the Chicago Climate Exchange to promote the inclusion of agriculture projects in the offset market by lowering costs and developing methodologies for calculating reductions from no-till farming and establishing a pool of project verifiers.

Department of Energy, Energy Information Administration

The mission of the Energy Information Administration (EIA) is to provide policy-neutral data, forecasts, and analyses to promote sound policy making, efficient markets, and public understanding regarding energy and its interaction with the economy and the environment.
EIA’s Voluntary Reporting of Greenhouse Gases Program, established under section 1605(b) of the Energy Policy Act of 1992, provides a means for organizations and individuals who have reduced their emissions to record their accomplishments in a registry. In 2006 and 2007, EIA revised the program to allow participants to report on offsets in certain circumstances. The revised guidelines have not yet been implemented.

Department of the Interior, U.S. Fish and Wildlife Service

The mission of the U.S. Fish and Wildlife Service (FWS), a bureau within the Department of the Interior, is to conserve, protect, and enhance fish, wildlife, and plants and their habitats for the continuing benefit of the American people. FWS partners with companies and nonprofits to develop carbon sequestration projects on national wildlife refuges in the southeastern United States. FWS enters into these partnerships to obtain funds to restore and enhance native forest and wildlife habitat on national wildlife refuges. FWS identifies refuge lands that are important for its overall conservation goals and manages sequestration projects on these lands, but does not play a role in the calculation, verification, or monitoring of carbon offsets. Carbon sequestration projects must support the purposes of each national wildlife refuge and be consistent with refuge forest management plans. FWS negotiates additional funding commitments with partners to meet long-term operations and maintenance needs as well. In return for funding carbon sequestration activities related to FWS conservation goals, partners retain rights to any carbon credits that may result from the restoration projects. The partners may in turn provide their clients or donors with the opportunity to offset their carbon emissions by contributing funds to these projects.
Companies involved in partnership agreements with FWS may restore or reforest refuges, or buy land identified by FWS and then gift the land back to FWS and underwrite the restoration of that land. Partners include energy companies and nonprofit land trusts. According to FWS, these partnerships have led to the addition of 40,000 acres of land to the refuge system and restored a total of 80,000 acres of wildlife habitat with more than 22 million trees. The Solicitor’s Office of the Department of the Interior determined that FWS may accept donations of this kind as long as it complies with the Department of the Interior’s guidelines for accepting donations and applicable laws and regulations.

Environmental Protection Agency

The mission of the Environmental Protection Agency is to protect human health and the environment. EPA Climate Leaders, a voluntary emissions reduction program, provides technical assistance to companies on calculating and tracking greenhouse gas emissions over time, calculating emissions reductions from offsets, and incorporating offsets into emission reduction strategies. In the Climate Leaders program, partner companies commit to reduce their impact on the environment by completing a greenhouse gas emissions inventory, setting reduction goals, and annually reporting progress to EPA. EPA also provides guidance to partners on calculating emissions reductions from offsets. For offsets to be credible, according to EPA, they must meet four key accounting principles: the offsets must be real, additional, permanent, and verifiable. Partners may choose to develop their own offset projects or purchase offsets. Offset projects must meet Climate Leaders requirements for use toward meeting a greenhouse gas reduction goal, including the use of a performance standard-based approach to quantifying emissions reductions.
EPA has developed accounting methodologies for certain offset project types, including landfill gas, manure management, afforestation, transportation, and boiler replacement projects. EPA is also developing protocols for additional project types, such as coal bed methane.

Federal Trade Commission

The mission of the Federal Trade Commission is to protect consumers, strengthen free and open markets, and promote informed consumer choice. The Federal Trade Commission Act prohibits unfair or deceptive trade practices, including deceptive advertising. Among other things, the FTC enforces a wide variety of consumer protection laws and is evaluating the treatment of carbon offsets in its Green Guides, a publication designed to help advertisers avoid making false or misleading environmental marketing claims. The FTC announced in November 2007 that it would conduct a regulatory review of the Green Guides, which were last updated in 1998 and do not currently address carbon offsets. According to the FTC, carbon offset marketing claims may present a heightened potential for deception because it is difficult, if not impossible, for consumers to verify the accuracy of the seller’s claims. The FTC held a public workshop in January 2008 about carbon offsets to obtain input on consumer protection issues and to determine whether more direct guidance is needed. The workshop examined the emerging market for greenhouse gas emission reduction products and related advertising claims, among other issues. The FTC is reviewing the public comments obtained through the workshop but has not issued proposed changes to the guides and has not decided whether to issue guidance specifically regarding offsets.

U.S. House of Representatives, Office of the Chief Administrative Officer

The Office of the Chief Administrative Officer provides operations infrastructure and support services for the community of about 10,000 House Members, officers, and staff.
The Chief Administrative Officer (CAO) purchased 30,000 metric tons of offsets through the Chicago Climate Exchange as part of the Green the Capitol Initiative, an effort to reduce the greenhouse gas emissions from House operations. Among other measures, the Green the Capitol Initiative outlines three strategies: (1) purchasing electricity generated from renewable sources; (2) meeting the House’s heating and cooling needs by switching from using coal, oil, and natural gas at the Capitol power plant to natural gas only; and (3) purchasing offsets to compensate for any remaining carbon emissions. See appendix IV for more information about the purchase of offsets by the CAO.

Multiple quality assurance mechanisms are available and used to ensure the credibility of carbon offsets available for purchase on the U.S. voluntary offset market, but a lack of centralized information makes it difficult to estimate the extent of their use. Participants in the offset market face several challenges to ensuring the credibility of offsets, including problems determining additionality and the existence of many different verification and monitoring methods. Our purchases of offsets showed that the information supplied by a nonprobability sample of retailers provides limited assurance of credibility. A wide range of quality assurance mechanisms, commonly described collectively as “standards,” are available to ensure the credibility of carbon offsets. Market participants and third parties apply these standards at different stages of the carbon offset supply chain for a variety of purposes. For example, accounting and reporting methods define how to measure emissions reductions from specific types of projects. In addition, verification and monitoring standards are used to confirm that offsets are calculated correctly and that a project was indeed implemented, and to monitor progress over time.
End use product standards, applied later in the supply chain, can be used to certify product marketing claims. Certain mechanisms cover multiple aspects of quality assurance and specify the use of registries to track the ownership and disposition of offsets, while others focus on one aspect, such as ensuring that emissions reductions are calculated correctly. Figure 7 illustrates how quality assurance mechanisms relate to the various components of a simplified offset supply chain, and appendix VII describes selected offset standards used in the voluntary market. Our review of the available literature and discussions with stakeholders identified widely varying estimates of the extent to which market participants use quality assurance mechanisms. Available information suggests that many carbon offsets in the voluntary market were subject to a quality assurance mechanism, but the fragmented nature of the market and limited data preclude exact estimates of the use of such mechanisms. One study estimated that more than 85 percent of the offsets purchased on the retail market in 2007 were verified by third parties, but this estimate did not include data on verification for many transactions. In contrast, another study stated that the majority of voluntary offsets are currently not certified against a third party standard. The available information suggests that fewer providers use registries to track the ownership and disposition of offsets than use third party verification or other quality assurance mechanisms. For example, one study estimated that more than 50 percent of the offsets available on the retail market were not listed in a registry, but this estimate did not include data for many transactions. Because of incomplete and conflicting data on the use of quality assurance mechanisms, including registries, we cannot accurately gauge the extent of their use. In addition, these data limitations detract from the market’s transparency. 
Our interviews with stakeholders identified additionality and the presence of many different verification and monitoring methods as the two greatest challenges facing participants in the market. This is important because stakeholders and the available literature identify additionality and verification and monitoring as among the most important characteristics for establishing the credibility of offsets. (See app. V for more information about stakeholders’ ratings of characteristics of offset credibility and market challenges.) According to most stakeholders and key studies, additionality is fundamental to the credibility of offsets because only offsets that are additional to business-as-usual activities result in new environmental benefits. However, certain stakeholders said that additionality is not a critical factor at this early stage in the development of carbon markets and that the key goal should be to keep transaction costs and barriers to entry low to create financial incentives for reducing emissions. Several stakeholders said that there is no correct technique for determining additionality because it requires comparison of expected reductions against a projected business-as-usual emissions baseline (also referred to as a counterfactual scenario). Determining additionality is inherently uncertain because it may not be possible to know what would have happened in the future had the projects not been undertaken. Stakeholders offered different definitions for additionality and preferred different methods for determining whether projects are additional. For example, some stakeholders said that additionality should be evaluated through a case-by-case examination of the unique circumstances of each project, while other stakeholders preferred evaluating projects against efficiency standards for a technology or sector, known as a performance benchmark approach.
There are many other ways to determine whether projects are additional, and many stakeholders said that applying a single test is too simplistic because every project is different from others and operates under different circumstances. See table 2 for descriptions of selected additionality tests. Stakeholders also identified the existence of many different verification and monitoring methods as a key challenge to ensuring the credibility of offsets. There are many standards for measuring, verifying, monitoring, and tracking the distribution of carbon offsets but few standards, if any, that cover the entire supply chain. The proliferation of standards has caused confusion in the market, and the existence of multiple quality assurance mechanisms with different requirements raises questions about the quality of offsets available on the voluntary market, according to many stakeholders. The lack of standardization in the U.S. market may also make it difficult for consumers to determine whether offsets are fully fungible—interchangeable and of comparable quality—a characteristic of an efficient commodity market. The term “carbon offset” implies a uniform commodity, but offsets may originate from a wide variety of project types based on different quantification and quality assurance mechanisms. Because offsets are not all the same, it may be difficult for consumers to understand what they purchase. In addition, several stakeholders said that a standardized offset registration process would foster transparency and prevent double-counting. Because there is no single registry and because of a lack of communication among existing registries, it is difficult for consumers to determine the quality of the offsets they purchase. Certain stakeholders said that a single standard would bring greater credibility to the voluntary carbon offset market and result in projects that meet more stringent protocols.
However, some stakeholders said that they did not expect that a single standard would emerge because of the wide variety and complexity of offset projects. Further, several stakeholders said that a single standard may not be desirable because it could stifle innovation and limit access to the market. Certain stakeholders said that the flexibility offered by multiple standards encourages the testing of new methodologies and emissions reduction technologies. While the concept of carbon offsets rests on the notion that a ton of carbon reduced, avoided, or sequestered is the same regardless of the activity that generated the offset, some stakeholders believe that certain types of projects are more credible than others. Specifically, the stakeholders identified methane capture and fuel-switching projects as the most credible, and renewable energy certificates (RECs) and agricultural and rangeland soil carbon sequestration as less credible. Some stakeholders also pointed out that projects that use future value accounting practices to calculate offsets may be less credible. However, certain stakeholders said that this does not mean such projects should be categorically excluded from the offset market, only that they may require more rigorous quality assurance. Approximately one-third of the respondents said that credibility varies depending upon circumstances specific to the project. See table 5 in appendix V for more details about stakeholders’ rating of the credibility of different types of carbon offset projects. The stakeholders’ views on the credibility of different project types may stem from the fact that methane and fuel-switching projects are relatively simple to measure and verify, while RECs, forestry, and agricultural and rangeland soil carbon projects face challenges related to additionality, measurement, and permanence.
According to several stakeholders, RECs and carbon offsets are not comparable environmental commodities and differ in their objectives, the actions they represent, and the standards by which they are defined. RECs certify that a certain quantity of electricity has been generated from a qualifying type of renewable generation technology, whereas carbon offsets represent an amount of carbon reduced in comparison with a projected business-as-usual emissions baseline. RECs may be bought and sold to satisfy state-level requirements to produce electricity from renewable sources—known as renewable portfolio standards—and also in the voluntary carbon offset market. The carbon benefits of RECs may be double-counted if sold in both markets, according to some stakeholders. With respect to agricultural and rangeland sequestration and forestry, certain stakeholders said it is difficult to accurately measure emissions reductions from these types of projects. In addition, forestry offset projects may not be permanent because disturbances such as insect outbreaks and fire can return stored carbon to the atmosphere. Projects using future value accounting practices to calculate offsets may also be less credible than those that do not, according to some stakeholders. Ensuring the credibility of offsets purchased before they are produced inherently involves a higher degree of uncertainty than purchasing an offset that has already been generated. Some stakeholders told us that future value accounting practices expose consumers to more risk that the offsets will not materialize because it is more difficult to verify and monitor such projects over time. Other stakeholders said that future value accounting is an important way to fund certain types of offset projects that might otherwise not be possible. 
The information provided to consumers about offset projects and quality assurance mechanisms offers limited assurance of credibility, according to certain stakeholders and analysis of documents obtained through the purchase of offsets. Several studies and stakeholders said that it is difficult for consumers to make educated choices about offset purchases because the information they need may not be provided by retail offset providers. However, one stakeholder said that the strengths and weaknesses of offsets could be determined with a reasonable amount of due diligence, which is important to any buyer of a commodity in an emerging market. To better understand the perspective of consumers, we purchased offsets from 33 retail providers and found that the information provided about the offsets varied considerably and offered limited assurance of credibility. We retrospectively analyzed information provided to us by the retailers directly as a result of the transaction as well as information provided on their Web sites. We expected that the information provided by retailers as a result of the transaction would yield detailed project-specific information related to credibility, and our review of Web sites was intended to supplement the information received directly from providers as a result of transactions. We found that retailers provided limited information about important characteristics for establishing the credibility of offsets, including additionality, verification, and the use of a registry to track offsets. We also found that few retailers identified specific projects associated with our transactions, and that the information provided on Web sites—in some cases general information about the retailers’ quality assurance approaches—could not be linked to particular transactions. As a result, we found it difficult, in many cases, to determine exactly what we had purchased, and consumers in the offset market may face similar challenges. 
With respect to information provided directly as a result of a transaction, 3 of 33 retailers said that their offsets were additional, but only 2 explained how they defined additionality. The remaining 30 retailers did not provide information on additionality. With regard to verification, less than one-third of retailers (9 of 33) specified that their offsets were verified by a third party. The remaining 24 retailers did not provide information on verification. In addition, 5 of 33 retailers specified that the offsets were tracked in a registry and included the name of the registry, and 4 of these provided associated tracking numbers. The remaining 28 retailers did not provide information about the use of a registry. Further, as a direct result of the transaction, less than half of the retailers (13 of 33) provided information about whether the transaction resulted in the exchange of a good or the provision of a service. We also found that retailers provided limited information about the offset projects associated with our transactions. Less than half (13 of 33) provided information about the location of their projects, but the majority of retailers (24 of 33) provided information on the type of project, and 9 of these retailers identified multiple project types. In addition, 8 retailers provided information related to the timing of the project, specifically, when the project started or is scheduled to begin or when the offsets would occur. However, many provided more information on their Web sites that was not directly related to our transactions. We found that almost all of the retailers (30 of 33) provided some information related to verification on their Web sites. This information varied considerably among the retailers, with all 30 stating that the offsets were verified and 6 providing detailed information such as verification reports.
With regard to additionality, 22 retailers provided information on their Web sites, including some explanation of how they define additionality. Finally, less than half of the retailers (12 of 33) said that their offsets are tracked in a registry, including 10 retailers that identified a specific registry, and 2 that operate their own. Increased government oversight of the voluntary market could address some concerns about the credibility of offsets by standardizing quality assurance mechanisms and registries, and this could encourage new projects and help protect consumers. However, more oversight could reduce flexibility and increase the administrative burden for government agencies and providers, which could raise costs and stifle innovation. Using offsets in a mandatory emissions reduction program would involve similar trade-offs. Offsets could lower the cost of compliance, encourage investment and innovation in sectors not required to reduce emissions, and provide time for regulated entities to change existing technologies. However, if the offsets used for compliance are not credible, the environmental integrity of a compliance system may be compromised. Increased oversight could address some concerns about the credibility of offsets by standardizing the use of quality assurance mechanisms and registries. Some stakeholders said that the voluntary offset market cannot operate efficiently without standardized mechanisms for ensuring the credibility of offsets. More government oversight could also help increase the fungibility and commoditization of offsets and improve the market’s transparency. Other benefits of oversight and standardization could include encouraging the development of new projects, improving consumer protection and awareness, and addressing concerns about weaknesses of the voluntary market spilling over into a future compliance market. 
Certain stakeholders said that enhanced oversight of the voluntary carbon market would provide it with increased legitimacy that would help to spur new offset projects and increase the size of the market. On the other hand, increased oversight would likely increase the cost of providing offsets in the voluntary market by introducing complex quality assurance requirements, which reduce flexibility and increase transaction costs. Oversight could also stifle innovation, according to some stakeholders, by requiring complex procedures with greater administrative costs, and by excluding some types of offset projects from the market. The federal government could also incur costs associated with increased oversight activities. Stakeholders held different opinions about whether the government should play a larger role in the U.S. voluntary market. Several said that organizations have already invested time, money, and expertise in developing standards and that increased oversight should rely on and build on these investments. Other stakeholders thought that standardized quality assurance methods and registries would evolve naturally over time as the result of market forces. Several stakeholders said that government should focus on creating a mandatory greenhouse gas reduction program instead of improving the voluntary market and that a future compliance market will largely drive the standards for the voluntary market. Certain stakeholders and available studies illustrated several policy options for enhancing oversight of the market. One option would involve requiring participants in the market to adopt standardized quality assurance mechanisms and use a specific registry. A second option would involve the federal government providing incentives or developing voluntary programs to encourage participants to take certain actions. 
Other options include prohibiting certain types of projects that are considered less credible and applying discounts or imposing insurance requirements on certain types of offsets with greater uncertainty or potential for failure. As an example of government oversight in the voluntary offset market, several stakeholders mentioned the United Kingdom Department for Environment, Food and Rural Affairs (DEFRA) framework for the Code of Best Practice for Carbon Offsetting. The code is designed to increase consumer confidence in the integrity of carbon offsets available for purchase in the United Kingdom. Offset products meeting the requirements of the code will be assigned a certification mark that providers may use for marketing purposes. The code initially covers only Certified Emissions Reductions that are compliant with the Kyoto Protocol, but voluntary emissions reductions could be included in the code in the future. Allowing offsets in a future compliance scheme could decrease the overall compliance costs because it could provide regulated entities with a wider variety of compliance options. In many cases, regulated entities may find it economically advantageous to buy offsets instead of reducing emissions themselves. Recent EPA analyses state that the cost of compliance with mitigation policies under consideration by the Congress decreases substantially as the use of offsets increases. Specifically, the agency’s recent analysis of the Climate Security Act of 2008 (S. 2191) reported that if the use of domestic and international offsets is unlimited, then compliance costs fall by an estimated 71 percent compared to the bill as written. Alternatively, the price increases by an estimated 93 percent compared to the bill as written if no offsets are allowed. A 2007 EPA study analyzing the economic impacts of the Climate Stewardship and Innovation Act of 2007 (S. 280) found similar results. 
Other quantitative studies by economists also show that the use of offsets will decrease the cost of achieving emissions reductions. In general, the carbon price is lower in quantitative models of a U.S. compliance system when domestic and international offsets are widely available and their use is unrestricted. Using offsets in a compliance scheme could also increase the administrative costs of the scheme because of increased government oversight of quality assurance mechanisms used to ensure the credibility of offsets. A lower carbon price due to the availability of offsets as a compliance tool may have several effects, according to available economic literature. In the short term, lower prices make compliance with a policy to reduce emissions less expensive. Lower prices may also facilitate agreements to limit emissions and enhance their environmental integrity by reducing the incentive for regulated sources to either cheat on the agreement or shift production to areas where carbon emissions are not regulated. Including offsets in compliance schemes could also encourage investment and innovation in unregulated sectors of the economy, possibly at the expense of investment and innovation in regulated sectors. According to several stakeholders and available economic literature, a market for offsets may support climate-related innovation in sectors that supply offsets. For example, unregulated facilities may devise new ways to limit greenhouse gas emissions because they could sell offsets in the compliance market. The availability of offsets in a compliance scheme could also provide time for regulated facilities to develop new technologies and processes. Some stakeholders said that access to offsets provides more flexibility in meeting short-term requirements, leaving more time to implement long-term plans for internal emissions reductions and technology development.
Further, according to certain stakeholders, offsets may allow regulated sources to continue using assets such as power plants until the end of their useful lives, thereby reducing their premature retirement and the cost of emissions reductions overall. In addition, multiple stakeholders said that offsets may allow covered sources to avoid investing in long-lived assets that achieve only marginal improvements, instead focusing on more effective assets that take longer to develop. On the other hand, allowing the use of offsets could compromise the environmental integrity of a compliance system if nonadditional offsets are used as compliance tools. Certain stakeholders said that because offset programs increase the total quantity of compliance instruments available to regulated sources, the integrity of the system can be maintained only if offsets are additional. If a significant number of nonadditional offsets enter the market, emissions may rise beyond levels intended by the scheme, according to some stakeholders. Nonadditional offsets could thus increase uncertainty about achieving emissions reduction goals. This concern underscores the importance of using quality assurance mechanisms to ensure the credibility of any offsets allowed into a compliance scheme. In addition, these concerns could be minimized by limiting the use of offsets or including policy options for enhancing oversight of the market such as applying discounts or imposing insurance requirements on offsets with greater uncertainty or potential for failure. The available economic literature supports some of the environmental integrity concerns raised by stakeholders. Economic analyses of offsets acknowledge difficulties with their use, including baseline determination, additionality, permanence, double-counting, and verification and monitoring. 
If these criteria are more likely to be satisfied by internal reductions from regulated sources than by offsets, the use of offsets may result in greater emissions, according to these sources. Economists have also identified “leakage” as a potential problem for offsets, especially those created on a project-by-project basis. Leakage occurs when economic activity is shifted as a result of emission control regulation. Consequently, emissions abatement achieved in one location that is subject to emission control regulation is diminished by increased emissions in unregulated locations. For an offset project, leakage occurs when economic activity is shifted from the site of the offset project to another location or sector where emissions are not controlled. For example, an offset project that restricts timber harvesting at a specific site may boost logging at an alternative location, thus reducing the effectiveness of the offset project. Forestry projects are thought to be particularly vulnerable to these challenges, as are credits originating in developing countries, even though these offsets have been identified as sources of significant cost savings to compliance regimes in developed countries. Multiple stakeholders also said that including offsets in a compliance scheme could slow investment in certain emissions reduction technologies in regulated sectors and lessen the motivation of market participants to reduce their own emissions. According to some stakeholders, if more cost-effective offsets are available as compliance tools, regulated sources may delay making investments to reduce emissions internally, an outcome that could ultimately slow the development of, and transition to, a less carbon-intensive economy. For example, a senior representative of the Council on Environmental Quality said that there is a trade-off between short-term focus on the marginal cost of reductions and long-term investment in technology.
This representative said that offsets may be a cheaper way to reduce emissions today, but that investment in technology, not offsets, builds emissions reductions into the economy for the long term. Other stakeholders and the available economic literature raise similar concerns. According to the literature, a market for offsets may support innovation in sectors that supply offsets at the expense of investment in technology to reduce emissions from regulated sources. Furthermore, certain stakeholders said that it may be more difficult for regulators to mandate the amount and timing of emissions reductions in specific economic sectors if offsets are part of a compliance scheme. Certain stakeholders suggested imposing limits on the use of offsets in a compliance scheme to address some of these challenges, but stakeholders held different opinions about the potential effectiveness of this approach. Some said it may be necessary to place restrictions on the use of offsets in order to achieve internal emissions reductions from regulated sources. If all the effort to reduce emissions is in the form of offsets, then the compliance system may not provide the price signals necessary for long-term investment in technology at domestic industrial facilities and power plants, according to multiple stakeholders. They said that domestic abatement is central to achieving the long-term goal of any emissions reduction system. However, other stakeholders said that incorporating offsets into a compliance scheme will enable greater overall climate benefits to be achieved at a lower cost, as long as offsets are additional and are not double-counted. Existing international programs to limit greenhouse gas emissions that allow the use of offsets for compliance may provide insights into trade-offs between cost and credibility. For example, the European Union’s program to limit greenhouse gas emissions enables regulated entities to use certain types of offsets for compliance.
GAO is reviewing the European Union’s program, including the role of offsets, in a report that we will issue later in 2008. The voluntary market for carbon offsets provides a potentially low-cost way for purchasers of offsets to compensate for their emissions of greenhouse gases by paying others to undertake activities that avoid, reduce, or sequester greenhouse gas emissions. However, several factors contribute to challenges in understanding the market. First, while most markets involve tangible goods or services, the carbon market involves a product that represents the absence of something—in this case, an offset equals the absence of one ton of carbon dioxide emissions. Second, ensuring the credibility of carbon offsets poses challenges because of the inherent uncertainty in measuring emissions reductions or sequestration relative to a projected business-as-usual scenario. Any measurement involving projections is inherently uncertain. These challenges are compounded by the fact that project developers produce offsets from a variety of activities—such as sequestration in agricultural soil, forestry projects, and methane capture—and do not use a single set of commonly accepted quality assurance mechanisms. Third, many transactions do not involve a central trading platform, exchange, or registry system. These factors limit the market’s transparency and pose challenges for market participants, especially consumers. Additional oversight of the voluntary market could address some of these challenges, but would also impose costs on government oversight bodies and increase costs for market participants.
Some options for increased oversight include requiring the use of standard quality assurance mechanisms, mandating the use of a common registry, establishing product disclosure requirements that help consumers evaluate an offset’s quality, establishing best practices, developing a government certification system, providing incentives or developing voluntary programs to encourage participants to take certain actions, and limiting the allowable types of activities that can generate offsets. Consideration of these approaches involves trade-offs among cost, quality assurance, and consumer protection. The Federal Trade Commission’s efforts to update its Green Guides for environmental marketing claims may also enhance the existing oversight framework, which consists primarily of laws affecting contractual agreements and fraud. The options for enhanced oversight identified above may increase in importance in the context of a compliance market associated with any future policies that place binding limits on greenhouse gas emissions. While allowing carbon offsets for compliance with mandated reductions in emissions can decrease overall compliance costs for regulated entities, challenges with the credibility of offsets could compromise the integrity of a compliance scheme. In addition to the oversight options identified above, the government could consider further steps to address uncertainties with offsets such as limiting the extent of their use for compliance, discounting a percentage of all offsets, and imposing insurance requirements for offset providers and purchasers. GAO is not recommending executive actions. 
However, as the Congress considers legislation intended to limit greenhouse gas emissions that allows the use of carbon offsets for compliance, it may wish to incorporate provisions that would direct the relevant federal agency (or agencies) to establish (1) clear rules about the types of offset projects that regulated entities can use, as well as standardized quality assurance mechanisms for these allowable project types; (2) procedures to account and compensate for the inherent uncertainty associated with offset projects, such as discounting or overall limits on the use of offsets for compliance; (3) a standardized registry for tracking the creation and ownership of offsets; and (4) procedures for amending the offset rules, quality assurance mechanisms, and registry, as necessary, based on experience and the availability of new information over time. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to others who are interested and make copies available to others who request them. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VIII. This report examines (1) the scope of the U.S. voluntary carbon offset market, including the role of the federal government; (2) the extent to which mechanisms for ensuring the credibility of voluntary carbon offsets are available and used, and what, if any, related information is shared with consumers; and (3) the trade-offs associated with increasing the oversight of the U.S. 
voluntary carbon offset market and incorporating offsets into broader climate change mitigation policies. In conducting our work, we reviewed available government and trade literature related to carbon offset markets and conducted structured and open-ended interview questions with nonprobability samples of 34 stakeholders, including 12 providers, 3 third party verifiers, 7 developers of standards, and 12 other knowledgeable stakeholders. We selected nonprobability samples of relevant stakeholders based on analysis of existing market literature, referrals from other stakeholders, and other criteria, such as participation in carbon offset trade conferences. In general, we selected stakeholders that were frequently cited in available studies of the offset market or participated in related conferences and meetings, and preferentially selected stakeholders based in the United States. We also conducted scoping interviews with several trade groups and other knowledgeable stakeholders. To describe the scope of the U.S. voluntary carbon offset market, including the role of the federal government, we interviewed officials responsible for offset-related programs at the Department of Agriculture (Forest Service), the Department of Energy (Energy Information Administration), the Department of the Interior (U.S. Fish and Wildlife Service), and the Environmental Protection Agency, and officials at the Federal Trade Commission and the Commodity Futures Trading Commission. To obtain an official administration position on carbon offsets, we met with the Council on Environmental Quality. We attended public meetings and congressional briefings and attended several conferences focused on the voluntary carbon offset market. 
We met with officials responsible for managing state and regional greenhouse gas mitigation programs, including California’s recently passed legislation to regulate greenhouse gases (Assembly Bill 32), the Regional Greenhouse Gas Initiative (RGGI), and the Western Climate Initiative. We met with representatives of the Chicago Climate Exchange, the Chief Administrative Officer of the House of Representatives, and other officials involved in the purchase of carbon offsets for the House of Representatives. To obtain perspectives on the role of the voluntary offset market in comparison and as a complement to compliance markets, we interviewed officials at the United Kingdom (UK) Department for Environment, Food and Rural Affairs (DEFRA). We also met with the UK National Audit Office, and a variety of other offset market participants and stakeholders in the UK. To obtain specific information about the supply of offsets in the United States, including the number and type of offset projects and the quantity of offsets by state, we analyzed data purchased from Point Carbon, a provider of independent news, analysis, and consulting services for European and global power, gas, and carbon markets. Data presented in this report on the supply of offsets refer specifically to offsets generated from projects located in the United States. Point Carbon estimates that its database accounts for approximately 80 percent of the offsets generated from projects located in the United States based on its analysis of domestic and global carbon markets. As such, our analysis may not have included all projects that are operating in the United States; however, we believe these data represent the best information available. To assess the reliability of the Point Carbon data, we (1) performed electronic testing of required data elements, (2) reviewed existing information about the data and the system that produced them, and (3) interviewed Point Carbon staff who are knowledgeable about the data. 
We determined that the data were sufficiently reliable for the purposes of this report. To analyze the extent to which mechanisms for ensuring the credibility of voluntary carbon offsets are available and used, and what, if any, related information is shared with consumers, we obtained about $100 worth of offsets from each of a nonprobability sample of 33 retail providers for a total expenditure of approximately $3,300. The information we obtained from the nonprobability sample of purchases does not address how the market may evolve over time or how consumers interpret the information they receive from providers. To select the sample of retailers from whom offsets would be obtained, we developed a list of providers based on primary sources, including reports, studies, surveys, and lists from membership organizations. We used information from providers’ Web sites to identify whether providers sold or accepted donations for offsets online and selected retailers that did and were identified in two or more primary sources. We conducted online transactions because they cater directly to individual consumers, a portion of the U.S. voluntary carbon offset market that is not well characterized in available studies. We analyzed the documentation directly related to each transaction, including (1) transaction documents—information provided while conducting the online transaction, (2) e-mail documents—any information received through e-mail after conducting the transaction, and (3) mail documents—any information received through the mail after conducting the transaction. We analyzed this documentation, if provided, to determine whether it contained information related to volume, price, project type and location, standards, registry, verification, monitoring, additionality, timing, and ownership.
We also reviewed information presented on the retailer’s Web site to determine whether information was provided about the retailers’ offsets related to price, project type and location, standards, registry, verification, monitoring, additionality, timing, and ownership. To assess the trade-offs associated with increasing the oversight of the U.S. voluntary carbon offset market and incorporating offsets into broader climate change mitigation policies, we reviewed available economic literature and information collected through stakeholder responses to structured and open-ended interview questions. We conducted our work from July 2007 to August 2008.

Appendix II: Description of Offset Project Types

- Projects that capture and combust or contain methane produced from agricultural operations. This involves the installation of complete-mix or plug-flow digesters or lagoon covers that collect aggregated waste from dairy, avian, and/or hog sources.
- Projects that sequester carbon in soil through the adoption of conservation tillage and activities such as planting grass or adopting certain tilling practices.
- Projects that capture and sequester greenhouse gases using biological techniques such as algae lagoons.
- Projects that separate CO2 emissions from industrial and energy-related emissions sources, transport the CO2 to a suitable storage site, and then isolate the CO2 by injecting it into an underground geologic formation such as active and abandoned oil and gas reservoirs, saline aquifers, or unminable coal seams.
- Projects that capture and burn or contain methane emitted by coal mines.
- Projects that reduce CO2 emissions by reducing on-site combustion of natural gas, oil, or propane for end use by improving the energy efficiency of fuel usage and/or the energy efficient delivery of energy services.
- Projects that occur on land managed in accordance with sustainable forestry practices and promote the restoration of native forests by using mainly native species and avoiding the introduction of invasive nonnative species.
- Projects that capture and burn or contain methane produced by landfills.
- Projects that involve the adoption of certain sustainable grazing practices on rangeland that include moderate livestock density and rotational and seasonal grazing techniques.
- Projects that reduce emissions by generating energy from renewable sources including but not limited to hydro, wind, and solar power. RECs are tradable certificates that represent the environmental attributes that result from one megawatt hour of electricity generated by a renewable source, such as wind power.

Twelve projects occur across multiple states. The data for these projects are included under the category of multiple states and not included in the volume or number of projects for the individual states involved in these projects.

On March 1, 2007, the Speaker and Majority Leader of the U.S. House of Representatives and Chairwoman of the Committee on House Administration directed the House Chief Administrative Officer (CAO) to develop a Green the Capitol Initiative to provide an environmentally responsible and healthy working environment for House employees. Among other measures, the CAO’s June 21, 2007, report recommended that the House operate in a carbon neutral manner by the end of the 110th Congress and identified three strategies to achieve this goal, including (1) purchasing electricity generated from renewable sources; (2) meeting the House’s heating and cooling needs by switching from using coal, oil, and natural gas at the Capitol power plant to natural gas only; and (3) purchasing offsets to compensate for any remaining carbon emissions.
According to the CAO, using strategies one and two, the House would need to offset 24,000 short tons of carbon dioxide emissions to operate in a carbon neutral manner. The CAO recommended purchasing carbon offsets through the Chicago Climate Exchange (CCX), a voluntary greenhouse gas reduction and trading system through which members make commitments to decrease their emissions. If CCX members reduce emissions beyond their reduction goals, they may sell the extra reductions to other members of the exchange. In addition to emitting members, the CCX platform is also available to offset providers, who may register tons on CCX that represent greenhouse gas mitigation projects. To meet their commitments, CCX members may trade emissions reductions or offsets known as Carbon Financial Instruments (CFI). According to CCX, to verify the validity of offsets offered for sale on the exchange, and ensure that the underlying offset projects conform to CCX rules, all tons registered for sale on the CCX platform from offset projects must have been verified by CCX-approved outside verifier firms that are specialized in particular fields. The outside verification firms are to ensure that the projects are in accordance with CCX eligibility rules and methodologies, verify that projects have been implemented, conduct on-site inspections, and send verification reports to CCX. CCX staff and, in certain cases, the CCX Offsets Committee, review the verification reports and request corrective actions, if necessary. After completion of any corrective actions, CCX sends the verification reports to the Financial Industry Regulatory Authority (FINRA) for a final review to ensure project verification documentation is complete. Uniquely serialized Carbon Financial Instruments based on these offsets are then issued to the project owner’s CCX registry account, and may then be sold in the CCX market.
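The serialized-issuance and registry-tracking process described above can be illustrated with a minimal model. This is a simplified sketch with hypothetical names, not CCX’s actual system: it shows only the core idea of issuing uniquely serialized instruments to an account, transferring them, and retiring them.

```python
import itertools

class OffsetRegistry:
    """Toy registry: issues uniquely serialized offset instruments
    and tracks their ownership and retirement."""

    def __init__(self):
        self._serial = itertools.count(1)  # unique serial numbers
        self.owner = {}                    # serial number -> account name
        self.retired = set()               # serials removed from circulation

    def issue(self, account, tons):
        """Issue one serialized instrument per ton to an account."""
        serials = [next(self._serial) for _ in range(tons)]
        for s in serials:
            self.owner[s] = account
        return serials

    def transfer(self, serials, buyer):
        """Move instruments to a new owner; retired serials cannot move."""
        for s in serials:
            if s in self.retired:
                raise ValueError(f"instrument {s} already retired")
            self.owner[s] = buyer

    def retire(self, serials):
        """Retire instruments so they cannot be resold or reused."""
        self.retired.update(serials)

registry = OffsetRegistry()
serials = registry.issue("project-owner", 5)
registry.transfer(serials[:3], "house-cao")
registry.retire(serials[:3])
```

Retiring serial numbers rather than deleting them is what prevents double-counting in a design like this: a retired instrument remains on the books for audit purposes but can never be transferred or claimed again.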
The registry accounts that CCX maintains for market participants help them track purchases and sales of offsets acquired or sold on the exchange and can be used to identify specific information about the offset projects, including verification documents. According to CCX, all participants have the option of buying CFIs anonymously and all transaction prices must be reported so that CCX can post prices on its trading platform. The House Appropriations Committee, in its June 19, 2007, report on the 2008 Legislative Branch Appropriations Bill, stated: “The Committee believes it is important to offset greenhouse gases generated by the House. In that regard, the Committee requests the Chief Administrative Officer purchase Carbon Financial Instruments to offset carbon produced by all House operations. These offsets should be fully transparent, verified, American, project-based offset credits.” The CAO requested and received approval from the Committee on House Administration on August 29, 2007, to purchase offsets and submit an application to CCX with the necessary fee. According to CAO officials, CCX was the best option for the House because it is well established relative to the rest of the industry, has clear verification and monitoring standards, and allows for the anonymous purchase of offsets. The CAO requested that CCX conduct a blind auction because the CAO did not want to decide or know which projects were selected. According to the CAO, this approach was adopted to eliminate any opportunity for House funds to be used to benefit one geographical region or congressional district over another. For example, the CAO decided not to purchase offsets on the retail market from domestic nonprofit groups because a decision to select specific vendors or offset projects in one location instead of another could be construed as a political act.
On October 23, 2007, CCX made a public announcement to potential sell-side market participants that it would hold the reverse auction on behalf of the House of Representatives and stipulated that the projects sought had to be verified and approved CCX projects undertaken in the United States. The auction closed on November 1, resulting in the purchase of 30,000 metric tons for a total of $90,550 including transaction fees. Results of the auction were announced at a public ceremony on November 5, 2007. The CAO bought offsets before implementing the emissions reduction strategies specified in the Green the Capitol Initiative. Based on calculations performed for the Green the Capitol Initiative report by the Department of Energy and the Lawrence Berkeley National Laboratory, the carbon footprint of the House is approximately 91,000 short tons. According to the CAO, until the Architect of the Capitol’s metering program is complete, in March 2009, House emissions data are based on historical estimates. To reach the goal of carbon neutrality, the Green the Capitol Initiative called for two emissions reduction strategies and the purchase of carbon offsets to compensate for whatever emissions remained. Purchasing electricity generated from renewable sources would decrease emissions to 34,000 short tons. Switching from burning coal, oil, and natural gas at the Capitol power plant to burning only natural gas would further decrease emissions to 24,000 short tons. The third strategy to reach the goal of carbon neutrality was to purchase offsets for the remaining carbon emissions—24,000 short tons. However, the first two strategies had not been completed when the CAO purchased offsets through CCX in November 2007. 
Concerning the first two strategies, the Architect of the Capitol purchased renewable energy in June 2008, and the CAO, in written comments, told us that the Architect of the Capitol had purchased natural gas to account for the House’s portion of energy used at the Capitol Power Plant. According to the CAO, there was no benefit to waiting to purchase offsets. To identify the amount of offsets it would purchase to reach its goal of carbon neutrality by the end of 2008, the CAO used data from 2006 that GAO developed as part of a broader characterization of greenhouse gas emissions from legislative branch agencies and that Lawrence Berkeley National Laboratory later analyzed. The CAO stated that it does not have current emissions data and that the Architect of the Capitol does not have meters that enable it to directly monitor its energy use or emissions in real time. According to the CAO, emissions data projected from a 2006 baseline provide a reasonable estimate of current emissions. In November 2007, the CAO purchased 30,000 metric tons of offsets through CCX, which is more than the 24,000 short tons identified in the Green the Capitol Initiative report and in a memorandum approving the CAO’s Chicago Climate Exchange application, which the Committee on House Administration signed in August 2007. The CAO thus purchased approximately 9,075 short tons (about 8,231 metric tons) more than the amount identified in the Green the Capitol Initiative, an excess valued at about $24,447 based on the weighted average purchase price of $2.97 per metric ton paid by the CAO. According to the House CAO and CCX, the purchase of additional tons was an administrative error that resulted from the difference between short and metric tons and reference to the draft report rather than the final report. An April 2007 draft of the Green the Capitol Initiative report identified the need to purchase 34,000 tons, but the June 2007 final report identified the need to purchase 24,000 short tons.
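The short-ton/metric-ton discrepancies described above can be reproduced with a quick calculation. The sketch below uses the standard conversion (1 short ton = 0.90718474 metric tons) and the CAO’s weighted average price of $2.97 per metric ton; the results differ from the report’s figures by a few tons and a few dollars, presumably because of rounding in the underlying data.

```python
SHORT_TO_METRIC = 0.90718474  # 1 short ton = 0.90718474 metric tons

def short_to_metric(short_tons):
    return short_tons * SHORT_TO_METRIC

def metric_to_short(metric_tons):
    return metric_tons / SHORT_TO_METRIC

# The Green the Capitol report called for offsetting 24,000 short tons,
# but the CAO purchased 30,000 metric tons.
target_metric = short_to_metric(24_000)        # ~21,772 metric tons
excess_metric = 30_000 - target_metric         # ~8,228 metric tons
excess_short = metric_to_short(excess_metric)  # ~9,070 short tons
excess_value = excess_metric * 2.97            # ~$24,400 at $2.97/metric ton

# Retiring 24,000 metric tons instead of 24,000 short tons likewise
# retires more than the report identified.
extra_retired_metric = 24_000 - target_metric               # ~2,228 metric tons
extra_retired_short = metric_to_short(extra_retired_metric) # ~2,456 short tons
extra_retired_value = extra_retired_metric * 2.97           # ~$6,600

print(round(excess_short), round(excess_value))
print(round(extra_retired_short), round(extra_retired_value))
```

The same arithmetic underlies the later retirement figures as well: because a metric ton is about 10 percent heavier than a short ton, any confusion between the two units produces roughly a 10 percent error in either direction.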
On March 27, 2008, the CAO requested that CCX retire 24,000 of the 30,000 metric tons. Currently, 6,000 metric tons remain in the CAO’s registry account, which, according to the CAO, may be used to offset additional emissions generated by the operation of the House. The CAO said that the initial purchase of carbon offsets was an approximation and plans to reconcile the purchase in fiscal year 2009. Because it retired 24,000 metric tons instead of short tons, the CAO retired about 2,460 short tons (about 2,231 metric tons) more than identified in the Green the Capitol Initiative report. These extra tons are valued at about $6,626 based on the CAO’s purchase price. According to the CAO, the retirement of extra tons may address uncertainties in the emissions calculations used to determine the amount of offsets to purchase. Following the auction, the CAO received information from CCX about the number and types of projects underlying its purchase. No other information was provided by CCX or requested by the CAO. The offsets purchased by the CAO came from a variety of project types, including agricultural methane, agricultural soil sequestration, coal mine methane, landfill methane, and renewable energy. The CCX auction notice required that offsets submitted to the auction originate from U.S.-based projects, and CCX officials said that they screened the registry accounts of auction participants to confirm that the sellers’ offsets were from U.S.-based projects. Registry accounts maintained by CCX for market participants track the type of information necessary to satisfy the criteria directed by the appropriations committee report. Thus, the CAO could verify that the offsets met the criteria, if necessary. The CAO can also request that CCX provide additional quality assurance documentation, including detailed verification reports. On September 27, 2006, the California Global Warming Solutions Act was signed into law. 
The act requires the California Air Resources Board (ARB) to establish a program to reduce the state’s emissions to 1990 levels by 2020. On June 26, 2008, ARB released a draft scoping plan for public comment that contains the strategies California will use to reduce emissions of greenhouse gases. The draft includes a discussion of the potential role of offsets in implementing the act, known as Assembly Bill 32 (AB 32). Specific commitments on the role of offsets in AB 32 will be available in a revised scoping plan that ARB will publish in early October 2008 for comment. This version of the plan will be presented to the Air Resources Board in November 2008 for possible adoption by the board. AB 32 requires the board to adopt a scoping plan by January 1, 2009. Regulations based on the final scoping plan must be adopted by January 1, 2011, and are to become effective on January 1, 2012. More information about implementation of the California Global Warming Solutions Act is available at http://www.arb.ca.gov/cc/cc.htm. The governors of Illinois, Indiana, Iowa, Kansas, Michigan, Minnesota, Ohio, South Dakota, and Wisconsin, and the premiers of the Canadian provinces of Manitoba and Ontario participate in or observe the Midwestern Greenhouse Gas Reduction Accord, an agreement to establish greenhouse gas reduction targets and time frames and to develop market-based mechanisms to reach these targets. The accord was established in November 2007. An offsets subgroup is expected to make recommendations about the role of offsets in a regional emissions reduction program by September 2008, according to the subgroup’s work plan. More information about the Midwestern Greenhouse Gas Reduction Accord is available at http://www.midwesternaccord.org/. The Regional Greenhouse Gas Initiative (RGGI) is a cooperative effort by Northeast and Mid-Atlantic states to design a regional cap-and-trade program initially covering carbon dioxide emissions from power plants in the region.
Connecticut, Delaware, Maine, Maryland, Massachusetts, New Hampshire, New Jersey, New York, Rhode Island, and Vermont are participating in the RGGI effort. The District of Columbia, Pennsylvania, Ontario, Quebec, the Eastern Canadian Provinces, and New Brunswick are observers in the process. On August 15, 2006, the participating states issued a model rule that details the proposed RGGI program. Offset projects included in the program are initially limited to five types, including landfill methane capture and sequestration, because these types occur within the borders of the RGGI states, among other factors. The model rule specifies offset project requirements, including criteria for additionality, quantification and verification of emissions reductions, independent verification, and accreditation standards for independent verifiers. Each source required to reduce emissions would generally be able to use offsets to comply with up to 3.3 percent of its obligation in a single compliance period. If the compliance price reaches certain levels, the use of offsets may increase to 5 or 10 percent of required reductions. The first 3-year compliance period will begin January 1, 2009. More information about RGGI is available at http://www.rggi.org/index.htm. On February 19, 2008, the United Kingdom (UK) Department for Environment, Food and Rural Affairs (DEFRA) announced the framework for the Code of Best Practice for Carbon Offsetting to provide UK consumers with guidance on carbon offsets. The code is designed to increase consumers’ understanding of offsetting and its role in addressing climate change, increase consumer confidence in the integrity and value for money of the offset products available to them, and provide signals to the UK offset sector on the quality and verification standards to which they should aspire.
Offset products meeting the specifications of the code will be assigned a certification mark, which providers may use on their Web sites and other materials. The code is voluntary, and offset providers can choose whether to seek accreditation for all, or some, of their offsetting products. The code initially covers only Certified Emission Reductions (CER), which are compliant with the Kyoto Protocol, because there is currently no definition or fully established common standard for voluntary offsets. DEFRA has asked the voluntary offset industry to jointly develop a standard that could be included in the code in the future. For more information about the DEFRA Code of Best Practice for Carbon Offsetting, see http://www.defra.gov.uk/environment/climatechange/uk/carbonoffset/index.htm. The Western Climate Initiative (WCI) was launched in February 2007 by the governors of Arizona, California, New Mexico, Oregon, and Washington to develop regional strategies to address climate change. Partners in the initiative also include Montana, Utah, and the Canadian provinces of British Columbia, Ontario, Quebec, and Manitoba. Other U.S. and Mexican states have joined as observers. The WCI regional greenhouse gas emission reduction goal is an aggregate reduction of 15 percent below 2005 levels by 2020. On May 16, 2008, the WCI released recommendations about how to structure the region’s cap-and-trade emissions reduction program, including a series of recommendations about how to incorporate offsets into such a program. A more detailed version of the draft offset recommendations was released in July 2008, and WCI is striving to reach a final agreement on overall program design in August 2008. More information about the WCI draft design recommendations on offsets is available at http://www.westernclimateinitiative.org/.
Appendix VII: Selected Carbon Offset Standards

The California Registry serves as a voluntary greenhouse gas (GHG) registry to protect and promote early actions to reduce GHG emissions. The California Registry develops reporting standards and tools for organizations to measure, monitor, verify through third parties, and reduce their GHG emissions consistently across industry sectors and geographical borders. For more information about the California Registry, see http://www.climateregistry.org/. The CarbonNeutral Protocol, a proprietary standard developed by The CarbonNeutral Company, describes the requirements for achieving “CarbonNeutral” status and the controls employed by The CarbonNeutral Company to ensure the correct use of CarbonNeutral logos. The protocol sets out the quality requirements for projects and schemes that produce offset credits that may be applied to make activities or entities CarbonNeutral under this program. For more information about the CarbonNeutral Protocol, see http://www.carbonneutral.com/pages/cnprotocol.asp. CCX is a voluntary greenhouse gas reduction and trading system through which members make commitments to decrease their emissions. CCX participants may trade offsets generated from qualifying emissions reduction projects. CCX employs a central registry for recording emissions as well as holdings and transfers of its serialized emission units, Carbon Financial Instruments (CFI). The registry is linked with the CCX electronic trading platform. For more information about CCX, see http://www.chicagoclimatex.com/index.jsf. The Clean Development Mechanism (CDM) is part of the Kyoto Protocol to the United Nations Framework Convention on Climate Change (UNFCCC). CDM enables industrialized countries to achieve emissions reductions by paying developing countries for certified emission reduction credits. CDM projects must qualify through a registration and issuance process.
The mechanism is overseen by the CDM Executive Board, answerable ultimately to the countries that have ratified the Kyoto Protocol. For more information about CDM, see http://cdm.unfccc.int/index.html. The Climate, Community, and Biodiversity Alliance (CCBA) is a partnership among companies, nongovernmental organizations, and research institutes seeking to promote integrated solutions to land management around the world. CCB standards are project design standards for evaluating land-based carbon mitigation projects in the early stages of development. For more information about the CCB standards, see http://www.climate-standards.org/. Climate Leaders is an EPA industry-government partnership that works with companies to develop climate change strategies. EPA Climate Leaders, a voluntary emissions reduction program, provides technical assistance to companies on how to calculate and track greenhouse gas emissions over time, calculate emissions reductions from offsets, and incorporate offsets into emission reduction strategies. EPA has developed accounting methodologies for certain offset project types, including landfill gas, manure management, afforestation, transportation, and boiler replacement projects. Project protocols are being developed for additional project types, including coal-bed methane, methane end use from landfill and manure management projects, and forest management. For more information about Climate Leaders offset methodologies, see http://www.epa.gov/climateleaders/resources/optional-module.html. The Climate Neutral Network is an alliance of companies and organizations committed to developing products, services, and enterprises that have a net-zero impact on global warming; it certifies companies whose products, services, or enterprises meet that standard. The Climate Neutral Network is closing as a nonprofit and transferring its certification program to another nonprofit.
For more information about the Climate Neutral Network, see http://climateneutralnetwork.org/. The Gold Standard offers a quality label to voluntary offset projects for renewable energy and energy efficiency projects with sustainable development benefits for the local community. Gold Standard projects are tested for environmental quality by third parties and the Gold Standard carbon credit label is granted after third party validation and verification of the offset project. For more information about the Gold Standard VER, see http://www.cdmgoldstandard.org/index.php. On February 19, 2008, the United Kingdom Department for Environment, Food and Rural Affairs (DEFRA) announced the framework for the Code of Best Practice for Carbon Offsetting to provide consumers with guidance on carbon offsets. Offset products meeting the requirements of the code will be assigned a certification mark that providers may use on their Web sites and other materials. The code is voluntary, and offset providers can choose whether to seek accreditation for all, or some, of their offsetting products. For more information about the DEFRA Code of Best Practice for Carbon Offsetting see http://www.defra.gov.uk/environment/climatechange/uk/carbonoffset/index.htm. Green-e Climate is a certification program for carbon offsets sold to consumers on the retail market. Green-e Climate sets consumer protection and environmental integrity standards and employs a three-step verification and certification service that ensures supply equals sales, offsets are independently certified, and consumer disclosures are accurate and follow program guidelines. For more information about Green-e Climate, see http://www.green-e.org/getcert_ghg.shtml. Greenhouse Friendly is an Australian government initiative aimed at providing businesses and consumers with the opportunity to sell and purchase greenhouse neutral products and services. 
For more information about Greenhouse Friendly, see http://www.greenhouse.gov.au/greenhousefriendly/index.html. ISO 14064 is a three-part international standard that provides guidance on developing organization-level emissions inventories; quantifying, monitoring, and reporting greenhouse gas emissions reductions at the project level; and validating and verifying greenhouse gas emissions reduction projects. More information about ISO 14064 standards is available at http://www.iso.org/iso/home.htm. Plan Vivo is a system for managing the supply of verifiable emission reductions from rural communities in a way that promotes sustainable livelihoods. Companies, individuals, or institutions wishing to offset greenhouse gas emissions can purchase voluntary emission reductions via a project trust fund in the form of Plan Vivo Certificates. Projects use the Plan Vivo management system to register and monitor carbon sequestration activities implemented by farmers. For more information about Plan Vivo, see http://www.planvivo.org/. Social Carbon aims to guarantee that projects developed to reduce greenhouse gas emissions contribute significantly to sustainable development, incorporating transparent methods for assessing and measuring the benefits returned to the parties involved and to the environment. The aim of the Social Carbon methodology is to provide offsets that also deliver clear social and environmental benefits in the areas where projects operate. For more information about the Social Carbon methodology, see http://www.socialcarbon.com/. The VER+ Standard provides a global standard for voluntary greenhouse gas emission reduction projects. The criteria of the VER+ Standard are aligned with those of CDM, including the requirements of project additionality and corresponding tests that prove the project is not a business-as-usual scenario.
For more information about the VER+ standard, see https://www.netinform.de/KE/Beratung/Service_Ver.aspx. The Voluntary Carbon Standard (VCS) was initiated by The Climate Group, the International Emissions Trading Association, and the World Economic Forum in late 2005 to standardize and provide transparency and credibility to the voluntary offset market, among other objectives. To recognize credible work that has gone into developing greenhouse gas programs around the world, the VCS Program has a process for recognizing programs that meet VCS criteria. For more information about the VCS, see http://www.v-c-s.org/index.html. The International Carbon Investors and Services (INCIS) Voluntary Offset Standard (VOS) can be used as a minimum standard when purchasing verified emission reduction credits on behalf of organizations or individuals offsetting their greenhouse gas emissions. The Voluntary Offset Standard is intended to support the development of emerging carbon markets around the world and to support international policy convergence with a view to long-term carbon market integration. For more information about the VOS, see http://www.carboninvestors.org/documents. The Greenhouse Gas Protocol, a partnership between the World Resources Institute and the World Business Council for Sustainable Development, provides an accounting framework for greenhouse gas standards, programs, and inventories around the world. For more information about the Greenhouse Gas Protocol, see http://www.ghgprotocol.org/. This appendix summarizes and introduces the variety of standards available in the voluntary offset market. It is not an exhaustive list of standards, nor is it intended to provide precise descriptions. We do not summarize or compare the criteria of these standards because they exist for different purposes and apply to different portions of the carbon offset supply chain.
For more specific information, please see the standards’ documentation at the referenced Web sites, where available. In addition to the contact named above, Michael Hix, Assistant Director; Janice Ceperich; Nancy Crothers; Cindy Gilbert; Richard Johnson; Ben Shouse; Ardith A. Spence; and Joseph Thompson made major contributions to this report. Richard Burkard, Terrell G. Dorn, Steve Gaty, Jim McDermott, Andy O’Connell, Dan Packa, Kate Robertson, Ray Rodriguez, Jena Sinkfield, and Sara Vermillion also made important contributions.
|
Carbon offsets--reductions of greenhouse gas emissions from an activity in one place to compensate for emissions elsewhere--are a way to address climate change by paying someone else to reduce emissions. To be credible, an offset must be additional--it must reduce emissions below the quantity emitted in a business-as-usual scenario--among other criteria. Assessing credibility is inherently challenging because it is difficult to make business-as-usual projections. Outside the U.S., offsets may be purchased on compliance markets to meet requirements to reduce emissions. In the U.S., there are no federal requirements and offsets may be purchased in the voluntary market. GAO was asked to examine (1) the scope of the U.S. voluntary carbon offset market, including the role of the federal government; (2) the extent to which mechanisms for ensuring the credibility of offsets are available and used and what, if any, related information is shared with consumers; and (3) trade-offs associated with increased oversight of the U.S. market and including offsets in climate change mitigation policies. This report is based on analysis of literature and data, interviews with stakeholders, and GAO's purchase of offsets. The scope of the U.S. voluntary carbon offset market is uncertain because of limited data, but available information indicates that the supply of offsets generated from projects based in the United States is growing rapidly. Data obtained from a firm that analyzes the carbon market show that the supply of offsets increased from about 6.2 million tons in 2004 to about 10.2 million tons in 2007. Over 600 organizations develop, market, or sell offsets in the United States, and the market involves a wide range of participants, prices, transaction types, and projects. The federal government plays a small role in the voluntary market by providing limited consumer protection and technical assistance, and no single regulatory body has oversight responsibilities. 
A variety of quality assurance mechanisms, including standards for verification and monitoring, are available and used to evaluate offsets, but data are not sufficient to determine the extent of their use. Information shared with consumers on credibility is also limited. Participants in the offset market face challenges in ensuring the credibility of offsets, including problems in determining additionality and the proliferation of quality assurance mechanisms. GAO, through its purchase of offsets, found that the information provided to consumers by retailers offered limited assurance of credibility. Increased federal oversight of the U.S. voluntary market could enhance the market's transparency and improve consumer protection, but may also reduce flexibility, increase administrative costs, and stifle innovation, according to certain stakeholders. Including offsets in regulatory programs to limit greenhouse gas emissions could also lower the cost of compliance, according to recent EPA analyses and economic literature. However, some stakeholders said that concerns about the credibility of offsets could compromise the environmental integrity of a compliance system.
|
VA policy specifies how VAMCs can purchase expendable medical supplies and RME. VAMCs can purchase expendable medical supplies and RME through their acquisition departments or through purchase card holders, who have been granted the authority to make such purchases. Purchase cards are issued to certain VAMC staff, including staff from clinical departments, to acquire a range of goods and services, including those used to provide care to veterans. According to VA, as of the third quarter of 2010, there were about 27,000 purchase cards in use across VA’s health care system. VA has two inventory management systems, which VAMCs use to track the type and quantity of supplies and equipment in the facilities. Each VAMC is responsible for maintaining its own systems and for entering information about certain expendable medical supplies and certain RME in the facilities into the appropriate system. Specifically, the Generic Inventory Package (GIP) is used to track information about expendable medical supplies that are ordered on a recurring basis. The Automated Engineering Management System/Medical Equipment Reporting System (AEMS/MERS) is used to track information about RME that is valued at $5,000 or more and has a useful life of 2 years or more. VAMC officials told us they use information about the items in their facilities for a variety of purposes, for example, to readily determine whether they have expendable medical supplies or RME that are the subject of a manufacturer or FDA recall or a patient safety alert. VA’s purchasing and tracking policies include the following three requirements for VAMCs: 1. A designated VAMC committee must review and approve proposed purchases of any expendable medical supplies or RME that have not been previously purchased by the VAMC. 
The committee, which typically includes administrative staff and clinicians from various departments, reviews the proposed purchases to evaluate the cost of the purchase as well as its likely impact on veterans’ care. For example, the committee that reviews and approves proposed RME purchases often includes a representative from the department responsible for reprocessing RME, in order to determine whether the VAMC has the capability to reprocess—clean and disinfect or sterilize—the item correctly and whether staff are appropriately trained to do so. Proper reprocessing of RME is important to ensure that RME is safe to use and that veterans are not exposed to infectious diseases, such as Human Immunodeficiency Virus (HIV), during treatment. 2. All approvals for purchases of expendable medical supplies or RME must be signed by two officials: the official placing the order and the official responsible for approving the purchase. 3. VAMCs must enter information on all expendable medical supplies that are ordered on a recurring basis and all RME that is valued at $5,000 or more and has a useful life of 2 years or more into the appropriate inventory management system, either GIP or AEMS/MERS. VA does not require information about RME that is valued at less than $5,000 to be entered into AEMS/MERS. At the five VAMCs we visited, our preliminary work identified examples of inconsistent compliance with the three purchasing and tracking requirements we selected for our review. In some cases, noncompliance with these requirements created potential risks to veterans’ safety. We are continuing to conduct this work. VAMC committee review and approval. Officials at two of the five VAMCs we visited stated that VAMC committees reviewed and approved all of the expendable medical supplies the VAMCs purchased for the first time. However, at the remaining three VAMCs, officials told us that VAMC committees did not conduct these reviews in all cases.
Officials from these three VAMCs told us that certain expendable medical supplies—for example, new specialty supplies—were purchased without VAMC committee review and approval. Specialty supplies, such as those used in conjunction with dialysis machines, are expendable medical supplies that are only used in a limited number of clinical departments. Because the VAMCs did not obtain that review and approval, they purchased these supplies without evaluating their cost effectiveness or likely impact on veterans’ care. At one VAMC we visited, officials told us that clinical department staff were permitted to purchase certain RME—surgical and dental instruments—using purchase cards and that these purchases were not reviewed and approved by a committee. Therefore, the VAMC had no assurance that RME purchased by clinical department staff using purchase cards had been reviewed and approved by a committee before it was purchased for the first time. As a result, these purchases may have been made without assurance that they were cost effective and safe for use on veterans and that the VAMC had the capability and trained staff to reprocess these items correctly. Signatures of purchasing and approving officials. At one of the five VAMCs we visited, VAMC officials discovered that one staff member working in a dialysis department purchased specialty supplies without obtaining the required signature of an appropriate approving official. That staff member was responsible for ordering, for use in 17 dialysis machines, an item that was impermeable to blood and would thus prevent blood from entering the dialysis machine. However, the staff member ordered an incorrect item, which was permeable to blood, allowing blood to pass into the machine. After the item was purchased, the incorrect item was used for 83 veterans, resulting in potential cross-contamination of these veterans’ blood, which may have exposed them to infectious diseases, such as HIV, Hepatitis B, and Hepatitis C.
Entry of information about items into VA’s inventory management systems. At the time of our site visits, officials from one of the five VAMCs we visited told us that information about expendable medical supplies that were ordered on a recurring basis was entered into GIP, as required. In contrast, officials at the remaining four VAMCs told us that information about certain expendable supplies that were ordered on a recurring basis, such as specialty supplies, was not always entered into GIP. Since our visit, one of the four VAMCs has reported that it has begun to enter all expendable medical supplies that are ordered on a recurring basis, including specialty supplies, into GIP. By not following VA’s policy governing GIP, VAMCs have an incomplete record of the expendable medical supplies in use at their facilities. This lack of information can pose a potential risk to veterans’ safety. For example, VAMCs may have difficulty ensuring that expired supplies are removed from patient care areas. In addition, in the event of a manufacturer or FDA recall or patient safety alert related to a specialty supply, VAMCs may have difficulty determining whether they possess the targeted expendable medical supply. Officials at one VAMC we visited told us about an issue related to tracking RME in AEMS/MERS that contributed to a patient safety incident, even though the VAMC was not out of compliance with VA’s requirement for entering information on RME into AEMS/MERS. Specifically, because VA policy does not require RME valued under $5,000 to be entered into AEMS/MERS, an auxiliary water tube, a type of RME valued under $5,000 that is used with a colonoscope, was not listed in AEMS/MERS. 
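The tracking threshold at issue here can be expressed as a simple rule. In this minimal sketch, the function name and example items are illustrative assumptions; only the $5,000 and 2-year thresholds come from the VA policy described above:

```python
def requires_aems_mers_entry(value_dollars: float, useful_life_years: float) -> bool:
    """VA policy: RME valued at $5,000 or more with a useful life of
    2 years or more must be entered into AEMS/MERS."""
    return value_dollars >= 5_000 and useful_life_years >= 2

# A hypothetical $12,000 scope with a 5-year life must be tracked.
assert requires_aems_mers_entry(12_000, 5)

# A hypothetical sub-$5,000 auxiliary water tube falls outside the
# requirement, so it can be absent from AEMS/MERS even though it is
# reprocessed between patients.
assert not requires_aems_mers_entry(1_500, 3)
```

The gap the sketch illustrates is the one at issue in the auxiliary water tube incident: an item can be reusable and safety-critical yet fall below the tracking threshold.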
According to VAMC officials and the VA Office of the Inspector General, in response to a patient safety alert that was issued on the auxiliary water tube in December 2008, officials from the VAMC checked their inventory management systems and concluded—incorrectly—that the tube was not used at the facility. However, in March 2009, the VAMC discovered that the tube was in use and was not being reprocessed correctly, potentially exposing 2,526 veterans to infectious diseases, such as HIV, Hepatitis B, and Hepatitis C. In addition, officials from VA headquarters told us that when information about certain RME is entered into AEMS/MERS, it is sometimes done inconsistently. The officials explained that this is because AEMS/MERS allows users to enter different names for the same type of RME. As a result, in the case of a manufacturer or FDA recall or patient safety alert related to a specific type of RME, VAMCs may have difficulty determining whether they have that specific type of RME. During our preliminary work, we discussed with VA headquarters officials examples of steps VA plans to take to improve its oversight of VAMCs’ purchasing and tracking of expendable medical supplies and RME. For example, VA plans to change its oversight of the use of purchase cards. Specifically, VA headquarters officials told us that designated VAMC staff are currently responsible for reviewing purchase card transactions to ensure that purchases are appropriate. However, one VA headquarters official stated that these reviews are currently conducted inconsistently, with some being more rigorous than others. VA headquarters officials stated that VA plans to shift greater responsibility for these reviews from the VAMCs to the VISNs, effective October 1, 2010. In addition, VA plans to standardize the reviews by, for example, adding a checklist for reviewers. 
Because this change has not yet been implemented across VA, we cannot evaluate the extent to which it will address the appropriateness of purchases using purchase cards. Our preliminary work also shows that VA plans to create a new inventory management system. VA headquarters officials told us that they are developing a new inventory management system—Strategic Asset Management (SAM)—which will replace GIP and AEMS/MERS and will include standardized names for expendable medical supplies and RME. According to these officials, SAM will help address inconsistencies in how information about these items is entered into the inventory management systems. VA headquarters officials stated that SAM will help improve VA’s ability to monitor information about expendable medical supplies and RME across VAMCs. VA provided us with an implementation plan for SAM, which stated that this new system would be operational in March 2011. At this time, we have not done work to determine whether this date is realistic or what challenges VA will face in implementing it. Mr. Chairman, this concludes my statement. I would be pleased to respond to any questions you or other members of the committee may have. For further information about this statement, please contact Debra A. Draper at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. Key contributors to this statement were Randall B. Williamson, Director; Mary Ann Curran, Assistant Director; David Barish; Alana Burke; Krister Friday; Melanie Krause; Lisa Motley; and Michael Zose. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO.
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
|
VA clinicians use expendable medical supplies--disposable items that are generally used one time--and reusable medical equipment (RME), which is designed to be reused for multiple patients. VA has policies that VA medical centers (VAMC) must follow when purchasing such supplies and equipment and tracking--that is, accounting for--these items at VAMCs. GAO was asked to evaluate VA's purchasing and tracking of expendable medical supplies and RME and their potential impact on veterans' safety.

This testimony is based on GAO's ongoing work and provides preliminary observations on (1) the extent of compliance with VA's requirements for purchasing and tracking of expendable medical supplies and RME and (2) steps VA plans to take to improve its oversight of VAMCs' purchasing and tracking of expendable medical supplies and RME. GAO reviewed VA policies and selected three requirements that GAO determined to be relevant to patient safety. At each of the five VAMCs GAO visited, GAO reviewed documents used to identify issues related to the three requirements and interviewed officials to gather further information on these issues. The VAMCs GAO visited represent different surgical complexity groups, sizes of veteran populations served, and geographic regions. GAO also interviewed VA headquarters officials and obtained and reviewed documents regarding VA headquarters' oversight. GAO shared the information in this statement with VA officials.

During its preliminary work at the five selected VAMCs, GAO found inconsistent compliance with the three VA purchasing and tracking requirements selected for review. Noncompliance with these requirements created potential risks to veterans' safety.

(1) Requirement for VAMC committee review and approval. At two of the VAMCs, officials stated that the required designated committee review and approval occurred for all of the expendable medical supplies and RME that the VAMCs had not previously purchased. These reviews are designed to evaluate the cost of the purchase as well as its likely impact on veterans' care. However, at the remaining three VAMCs, officials stated that the required committee review and approval of the expendable medical supplies, such as those used in conjunction with dialysis machines, did not always occur. As a result, these purchases were made without evaluating the likely impact on veterans' care.

(2) Requirement for signatures of purchasing and approving officials. At one of the VAMCs, VAMC officials discovered that a staff member in a dialysis department ordered an expendable medical supply item for use in dialysis machines, without obtaining the required signature of an approving official. That staff member ordered an incorrect item, the use of which presented a risk of exposing veterans to infectious diseases, such as Human Immunodeficiency Virus.

(3) Requirement for entering information in VA's inventory management systems. Officials from one of the five VAMCs told GAO that information about expendable medical supplies that were ordered on a recurring basis was entered into the appropriate inventory management system, as required. At the remaining four VAMCs, officials told GAO that information about certain expendable medical supplies--those used in a limited number of clinical departments such as dialysis departments--was not always entered into the system. This lack of information can pose a potential risk to veterans' safety; in the event of a recall of these items, these VAMCs may have difficulty determining whether they possess the targeted item.

VA reports that it plans to improve its oversight of VAMCs' purchasing and tracking of expendable medical supplies and RME. For example, VA headquarters officials stated that, effective October 1, 2010, VA plans to shift greater responsibility for reviews of purchase card transactions from the VAMCs to the Veterans Integrated Service Networks, which are responsible for overseeing VAMCs. VA headquarters officials also told GAO that VA is developing a new inventory management system, which it expects will help improve VA's ability to track information about expendable medical supplies and RME across VAMCs. VA expects this new system to be operational in March 2011.
|
RBS operates loan programs that are intended to assist in the business development of the nation’s rural areas and the employment of rural residents. Within USDA, RBS is located in the Rural Development (RD) mission area. The agency’s national office in Washington, D.C., provides policy direction and guidance on the loan making and servicing aspects of the programs, and reviews and approves certain loans. Many of the loan making and servicing functions are performed by RD mission area staff who are physically located in field offices throughout the country. RBS operates the following loan programs: the business and industry (B&I) program, the intermediary relending program (IRP), and the rural economic development (RED) program. The following is a general description of each program. B&I loans. A B&I loan can be either a direct government-funded loan or a loan made by another lender on which RBS guarantees repayment in the event of a loss. These loans are made to finance almost any business project that creates or retains jobs in rural areas and to finance projects in all segments of the economy, such as mining, manufacturing, and wholesale and retail sales. There are only a few activities for which B&I loans cannot be used, such as funding gambling facilities, race tracks, and golf courses. Additionally, RBS’ regulations, which, according to the agency’s officials, are being revised, provide that direct B&I loans cannot be used for constructing hotels and motels, and tourism and recreational facilities. However, guaranteed B&I loans can be used for those purposes. The interest rate on a direct loan is based on the prime rate that was in effect in the quarter of a year prior to the quarter in which the loan is made. The interest rate on a guaranteed loan is the rate agreed to by the lender making the loan and the borrower. 
According to RBS officials, this rate is generally the lender’s prime rate—the rate a lender charges its best customers—plus 1 to 1.5 additional percentage points. IRP loans. IRP loans are direct government-funded loans made for relending, mostly to nonprofit community development organizations, and, to a lesser extent, to other borrowers, such as for-profit and nonprofit cooperatives. Specifically, the IRP loan funds are deposited into a revolving fund that an RBS borrower—an intermediary—has established. The intermediary relends the money to its borrowers—which may be individuals, public or private organizations, or any other legal entity—for financing business or community development projects in rural areas. IRP loan funds are not allowed for certain purposes, including funding gambling facilities, race tracks, and golf courses. RBS’ approval is required for the intermediary’s relending of the IRP loan funds. RBS charges its borrowers a 1-percent interest rate on IRP loans. The interest rate on a loan from the revolving fund is the rate agreed to by the intermediary and its borrower. RBS does not specify what this rate should be. RED loans. RED loans are also direct loans made for relending. The loans are made only to borrowers that have outstanding electricity or telecommunications loans from USDA’s Rural Utilities Service (RUS) and to former RUS borrowers that repaid their electricity loans early at a discount. Unlike IRP loans, a RED loan, when approved, is targeted to a specific project. The RED loan funds are deposited into a fund that the RUS borrower has established. The RUS borrower relends the money to other borrowers, which may be any public or private organization or other legal entity, for an economic development and job creation project. These projects include new business creation, existing business expansion, community improvements, and infrastructure development. 
RED loan funds cannot be used for certain purposes, including the RUS borrowers’ electricity or telecommunications operations or a community’s television system or facility, unless tied to an educational or medical project. RED loans are interest free, and RBS requires that loan funds be relent interest free. (App. I provides more descriptive information on each of RBS’ loan programs.) RBS approved more than 2,900 rural business loans during fiscal year 1993 through the first half of fiscal 1998. The total amount of these loans was more than $3.2 billion, or approximately $1.1 million, on average, per loan. Specifically, RBS approved the following loans during this 5.5-year period:

- 2,299 guaranteed B&I loans totaling almost $2.9 billion and averaging $1.2 million;
- 58 direct B&I loans totaling about $17 million and averaging about $300,000;
- 315 IRP loans totaling about $280 million and averaging about $900,000; and
- 256 RED loans totaling about $70 million and averaging about $275,000.

The total number and value of rural business loans approved by RBS increased during these years. In fiscal year 1993, RBS approved 298 loans totaling about $234 million, while in fiscal 1997, it approved 788 loans totaling $878 million. Most of the increase in loans stemmed from increases in guaranteed B&I loans, which rose by almost 250 percent over this period. At almost $2.9 billion, guaranteed B&I loans constituted the largest category of loans approved by RBS during the period of fiscal year 1993 through March 31, 1998. Furthermore, the level of guaranteed B&I loan activity increased substantially during this 5.5-year period. For example, RBS approved 190 guaranteed B&I loans in fiscal year 1993 with a total value of more than $187 million. As table 1 shows, this compares with 663 loans in fiscal year 1997 and 377 loans in the first half of fiscal 1998, which had total values of more than $816 million and about $540 million, respectively.
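The per-loan averages cited above follow directly from the reported program totals. A minimal sketch, using the rounded figures from the text (so the results are only approximate):

```python
# Recompute the approximate per-loan averages reported in the text from
# the rounded program totals (fiscal year 1993 through March 31, 1998).
programs = {
    "guaranteed B&I": (2_299, 2_900_000_000),  # (loans approved, total dollars)
    "direct B&I": (58, 17_000_000),
    "IRP": (315, 280_000_000),
    "RED": (256, 70_000_000),
}

for name, (count, total) in programs.items():
    average = total / count
    print(f"{name}: {count} loans, average of about ${average:,.0f} per loan")
```

Because the inputs are rounded, the computed averages (about $1.26 million, $293,000, $889,000, and $273,000) match the figures in the text only approximately.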
While guaranteed B&I loans were approved for borrowers in every state, a large number of the approved loans were concentrated in a few states. Specifically, 33 percent of the loans approved during this 5.5-year period were for borrowers in eight states; these loans accounted for 38 percent of the $2.9 billion loan amount. In each of the top three states—California, Florida, and North Carolina—more than 100 loans were approved. In total, 400 loans with a total value of approximately $508 million were approved in these three states. On top of the rapid growth in guaranteed B&I loans, RBS reported, in its appropriation request for fiscal year 1999, having a large amount—about $935 million—of pending guaranteed B&I loan requests as of September 30, 1997. However, our review disclosed that many of these requests were not ready to be approved or funded. Specifically, a large part of the backlog consisted of preapplications for loans; these are cases in which lenders expressed an interest in applying for loans and submitted some documentation but had not submitted formal applications. There were 363 preapplications, which accounted for about 71 percent of the more than 500 requests reported as pending and about 72.5 percent of the total loan amount. These preapplications included 166 cases in which the preapplicants had been told to develop and submit an application; however, in 60 cases, the notifications to submit applications were more than 6 months old, including some that were almost 3 years old. Additionally, in mid-1998, RBS found that 127 loan requests (preapplications and applications) that it had on hand, which totaled $259.6 million, were inactive. 
An inactive loan request is one in which, among other things, additional information that had been requested from the lender and/or the borrower had not been provided; the loan request would not be approved because, for example, the project as proposed was not eligible for loans in the program; or the borrower no longer wanted the loan. RBS’ experience with direct B&I loans is quite different from its experience with guaranteed B&I loans. Specifically, RBS did not approve any direct loans during fiscal years 1993 through 1996 because USDA’s appropriation acts did not authorize it to do so. However, as table 2 shows, in fiscal year 1997 and the first half of fiscal 1998, the agency approved 58 loans valued at $17.2 million. The direct B&I loans that RBS approved during this 1.5-year period were for borrowers located in 24 states, Puerto Rico, and the Western Pacific Islands. Five states and Puerto Rico accounted for 60 percent of the loans and 52 percent of the loan obligations. Missouri was the top state in terms of the number of loans—12—and Puerto Rico had the highest dollar amount of loans—$2.5 million. The other four states were Arkansas, Hawaii, South Carolina and Texas. IRP loans accounted for the second largest category of loans that RBS approved during fiscal year 1993 through the first half of fiscal 1998. Fiscal year 1995 was the peak year for IRP loans, when 89 loans, which totaled more than $85 million, were approved. Since then, as table 3 shows, the total value of IRP loans approved each year has declined. Many of the IRP loans were for borrowers in only a few of the 43 states, Puerto Rico, and the U.S. Virgin Islands, where loans were approved. Specifically, 109 of the 315 loans, or almost 35 percent, were for borrowers in six states. These 109 loans totaled over $106 million, which is over 38 percent of the total value of all IRP loans approved during this 5.5-year period. 
Two states—Minnesota and Oregon—accounted for 51 of these loans and $50.9 million. The other four states, which accounted for 58 loans and $55.6 million, were Arkansas, California, Maine, and Mississippi. RED loans ranked third in terms of the number of loans and value approved during fiscal year 1993 through the first half of fiscal 1998. The number of RED loans approved declined each year during this period, from 65 loans in fiscal year 1993 to 39 in fiscal 1997, and to 19 in the first half of fiscal 1998. However, as table 4 shows, the total dollar value of loans was relatively stable, ranging from more than $12 million to $13.5 million for the full fiscal years during the period. RBS approved a total of 256 RED loans for 192 borrowers during this 5.5-year period; these borrowers were located in 32 states. A majority of the loans—136 loans, or 53.1 percent—were for borrowers in six states: Minnesota, Tennessee, North Dakota, Kansas, Iowa, and Wisconsin. The loans to these borrowers accounted for $34.2 million, or slightly less than half—48.6 percent—of the total loan obligations. Additionally, some borrowers had both RED and IRP loans approved from the start of fiscal year 1993 through March 31, 1998. Specifically, eight borrowers had 13 RED loans, valued at about $4.1 million, approved during this period; these borrowers also had 9 IRP loans, valued at about $7.9 million, approved. This occurred because RED and IRP loans are both available to certain nonprofit cooperatives for relending purposes. RBS’ estimated cost for the business loan programs totaled about $290 million during fiscal year 1993 through fiscal 1997. The cost of operating a federal credit program consists of two components: subsidy costs, which involve the estimates of default costs, interest rate subsidies, fees, and other costs and revenues; and administrative costs, which cover salaries and other expenses. 
About $195 million of RBS’ total costs was the agency’s estimated subsidy costs associated with its loans. In addition, as table 5 shows, RBS incurred an estimated $95 million in administrative costs associated with operating the loan programs. As shown in table 5, IRP loans, which had total costs exceeding $154 million for fiscal year 1993 through fiscal 1997, were the most expensive of the rural business loans that RBS provided. The estimated subsidy costs constituted most of the total costs for these loans, reflecting the high interest subsidy on IRP loans. Specifically, IRP loans are made at a 1-percent interest rate, which is far below the agency’s cost of money. The overall subsidy rate for IRP loans over this 5-year period was 54.8 percent, or about 55 cents per dollar of loan. This was the highest subsidy rate for any of the rural business loan programs. Guaranteed B&I loans were the second most expensive of RBS’ loans during this period. Unlike the costs for IRP loans, the estimated administrative costs accounted for a larger part of the total costs than the estimated subsidy costs. This difference is in part explained by the fact that the overall subsidy rate for guaranteed B&I loans is considerably lower than the subsidy rate for IRP loans. More specifically, over this 5-year period, the overall subsidy rate for these guaranteed loans was 1.3 percent, or more than 1 cent per dollar of loan. The costs associated with RED loans totaled $19.3 million, which included $16.2 million of RBS’ estimated subsidy costs. At 25.5 percent, or about 26 cents per dollar of loan, the overall subsidy rate for RED loans over this 5-year period was second only to IRP loans. The subsidy costs of these loans were funded with appropriated funds; since fiscal year 1997, the subsidy costs of RED loans have been funded from earnings received on advance payments made by RUS’ borrowers on their RUS loans. 
Lastly, RBS’ costs for direct B&I loans were small, reflecting the low level of activity in the program during the 5-year period. Specifically, the estimated costs totaled about $770,000; this total applies to the loans approved in fiscal year 1997—there were no direct B&I loans approved during fiscal year 1993 through fiscal 1996. RBS’ estimated subsidy costs for the fiscal year 1997 loans were over $60,000. The subsidy rate for these loans was 0.5 percent, or less than 1 cent per dollar of loan. The outstanding principal on RBS’ B&I, IRP, and RED loans totaled about $2.2 billion as of March 31, 1998. Borrowers that were delinquent (at least 30 days past due on loan repayment) held about $116 million, or 5.4 percent, of the total outstanding principal. Of the $116 million, about $112 million was held by delinquent borrowers with guaranteed B&I loans, about $2 million was held by delinquent borrowers with direct B&I loans, and another $2 million was held by delinquent borrowers holding IRP loans. More of the outstanding principal on RBS’ loans is at risk, however, because it is held by other borrowers that the agency’s officials have identified as being problem borrowers, which include those likely to default on loan repayment in the future. RBS’ records show that such borrowers owed about $73.8 million on guaranteed B&I loans and over $400,000 on IRP loans as of March 31, 1998. Furthermore, RBS had written off some borrowers’ debts in recent years. Specifically, the agency lost $263.8 million on guaranteed B&I loans during fiscal year 1993 through March 31, 1998. The agency also wrote off about $2 million on IRP loans during this period. The agency did not write off any direct B&I loans or RED loans during this period. According to RBS’ automated files, over $112 million, or 6.1 percent of the more than $1.8 billion in outstanding principal on guaranteed B&I loans as of March 31, 1998, was held by 76 borrowers that were delinquent.
These 76 borrowers made up 5 percent of the 1,534 total borrowers having guaranteed B&I loans. As table 6 shows, there has been a reduction in the amount of principal owed by delinquent borrowers and in the number of delinquent borrowers each year during fiscal year 1993 through the first half of fiscal 1998. Many of the loans held by delinquent borrowers were made in recent years. Specifically, as of March 31, 1998, these borrowers were past due on principal and/or interest payments on 47 loans that were made during the 1990s—17 from fiscal year 1990 through 1993 and 30 from fiscal year 1994 through 1997. A small number of borrowers in a few states accounted for a disproportionate share of the outstanding principal on guaranteed B&I loans held by delinquent borrowers. Specifically, a total of 12 delinquent borrowers in four states—Mississippi, North Dakota, New York, and Louisiana—owed about $55 million of outstanding principal on 17 loans, or almost 50 percent of the amount owed by all delinquent borrowers, as of March 31, 1998. In addition to the delinquent borrowers, 56 other borrowers were identified by the agency’s field office officials as being problem borrowers as of March 31, 1998; these borrowers owed about $73.8 million in outstanding principal on guaranteed B&I loans. Specifically, the field office officials reported that 51 borrowers were not in full compliance with the terms and conditions of their loans or that they expect noncompliance to occur in the future. RBS officials said the agency anticipates that some of these borrowers will likely default on scheduled loan payments. This assessment was made on the basis of information provided by the lenders that made the loans and/or the borrowers. These 51 borrowers owed about $69.3 million as of March 31, 1998. 
Additionally, the field office officials reported that another five borrowers were involved in liquidation and/or bankruptcy proceedings; these borrowers owed about $4.5 million as of March 31, 1998. Borrowers that failed to repay their guaranteed B&I loans caused RBS to incur losses of $263.8 million during fiscal year 1993 through March 31, 1998. Specifically, RBS incurred losses on guaranteed B&I loans for 169 borrowers during this 5.5-year period. Generally, the loans on which these losses were incurred had been made many years ago—as far back as the 1970s. However, the agency has experienced some losses on newer loans. For example, as of July 24, 1998, RBS lost $24.2 million on 53 loans that had closed since the start of fiscal year 1990, including losses of $6.6 million on 15 loans closed since fiscal 1993. The outstanding principal owed by 27 borrowers with direct B&I loans totaled $10.1 million as of March 31, 1998. Of this amount, as table 7 shows, two delinquent borrowers owed principal of $1.8 million, or 17.4 percent. Concerning the two delinquent borrowers, one, located in Kentucky, owed slightly more than $900,000 on two loans that had been made in the early 1980s. The other, located in Oregon, owed about $850,000 on a loan made in mid-1997. According to RBS-provided information, the agency’s field office officials have not identified any nondelinquent borrowers as being problem borrowers as of March 31, 1998. Also, RBS did not write off the debt of any direct B&I loan borrowers during fiscal year 1993 through March 31, 1998. The outstanding principal on IRP loans totaled $268.5 million as of March 31, 1998. As table 8 shows, $1.9 million, or less than 1 percent, was owed by three borrowers that were delinquent. The three borrowers that were delinquent at the end of March 1998 had loans made by the Department of Health and Human Services before the transfer of the IRP loan program and portfolio to USDA. 
Specifically, a delinquent borrower in Michigan owed $700,000 of outstanding principal on a loan made in 1983. Two other delinquent borrowers had outstanding loans that were made from 1980 through 1983—a Louisiana borrower owed about $673,000, and a Washington State borrower owed about $550,000. In addition to the delinquent borrowers, one other borrower had been identified by the agency’s field office officials as being a problem borrower as of March 31, 1998. This borrower, which received a loan in 1989, owed about $416,000 in outstanding principal. RBS experienced losses on two IRP loans during fiscal year 1993 through March 31, 1998. Specifically, in November 1992, the agency wrote off about $1.2 million that was owed by a borrower in Puerto Rico and, in February 1997, about $1 million owed by a borrower in Florida. Both these write-offs involved loans that had been made by the Department of Health and Human Services in the 1980s. The outstanding principal on RED loans totaled $47.6 million as of March 31, 1998. There were no delinquencies on these loans. Table 9 shows the outstanding principal on RED loans at the end of fiscal year 1993 through March 31, 1998. No RED loan borrower had been identified by the agency’s field office officials as being a problem borrower as of March 31, 1998. Also, RBS did not write off the debt of any RED loan borrowers during fiscal year 1993 through March 31, 1998. We provided USDA with a draft of this report for review and comment. USDA made a number of technical comments and suggested several adjustments to the financial information in the report. We incorporated these comments and suggestions as appropriate. USDA’s comments and our response are in appendix III. We performed our review of RBS’ business loan programs from May through October 1998 in accordance with generally accepted government auditing standards. Our scope and methodology are discussed in appendix IV. 
As agreed, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the date of this letter. At that time, we will send copies of this report to the appropriate Senate and House committees; interested Members of Congress; the Secretary of Agriculture; the Administrator of RBS; the Director, Office of Management and Budget; and other interested parties. We will also make copies available to others upon request. Please call me at (202) 512-5138 if you or your staff have any questions about this report. Major contributors to this report are listed in appendix V. The Rural Business-Cooperative Service (RBS), an agency within the U.S. Department of Agriculture’s (USDA) Rural Development mission area, operates loan programs that are intended to assist in the business development of the nation’s rural areas and the employment of rural residents. The agency was established by the Federal Crop Insurance Reform and Department of Agriculture Reorganization Act of 1994 (P.L. 103-354, Oct. 13, 1994). This appendix provides information on RBS’ three loan programs: the business and industry (B&I) program, the intermediary relending program (IRP), and the rural economic development (RED) program. A B&I loan can be either a direct government-funded loan or a loan made by another lender on which RBS guarantees repayment in the event of a loss. These loans are made to finance almost any business project that creates or retains jobs in rural areas and to finance projects in all segments of the economy, such as mining, manufacturing, and wholesale and retail sales. There are only a few activities for which B&I loans cannot be used, such as funding gambling facilities, race tracks, and golf courses. Additionally, RBS’ regulations, which, according to the agency’s officials, are being revised, provide that direct B&I loans cannot be used for constructing hotels and motels, and tourism and recreational facilities. 
However, guaranteed B&I loans can be used for those purposes. Direct B&I loans are made to any legal entity, such as an individual operating a sole proprietorship, a cooperative, or a corporation, including local governmental bodies. The maximum loan currently allowed by RBS is $10 million, which is also the amount of outstanding debt that a direct loan borrower may owe. The interest rate on a direct loan is based on the prime rate that was in effect in the quarter of a year prior to the quarter in which the loan is made. Guarantees are provided on loans made by traditional lenders, such as commercial banks, and, to a lesser extent, on loans made by nontraditional lenders, which are entities using investment capital for lending and which are authorized by state law to engage in lending. The loans are made to most types of legal entities, including for-profit and nonprofit cooperatives, corporations, partnerships, individuals, public bodies, and Indian tribes. The maximum loan currently is $25 million, which is also a borrower’s maximum debt level. In addition, RBS provides the following guarantee percentages: 80 percent on loans of $5 million or less, 70 percent on loans between $5 million and $10 million, and 60 percent on loans of more than $10 million. However, a guarantee of up to 90 percent can be provided on a loan of $10 million or less if RBS’ Administrator approves the higher percentage. The interest rate on a guaranteed loan is the rate agreed to by the lender making the loan and the borrower. According to RBS officials, this rate is generally the lender’s prime rate plus 1 to 1.5 additional percentage points. A business financed with a B&I loan is required to be located in a rural area, which, for this program, is an area with a population of no more than 50,000. Section 310B of the Consolidated Farm and Rural Development Act, as amended (7 U.S.C. 1932), contains the basic authority for the B&I loan program.
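The guarantee-percentage schedule described above can be expressed as a small lookup function. This is an illustrative sketch, not RBS code: the function name is invented, and the handling of loans of exactly $5 million and $10 million follows the text's wording ("$5 million or less" and "more than $10 million"):

```python
def guarantee_percentage(loan_amount, administrator_approved=False):
    """Illustrative guarantee percentage for a guaranteed B&I loan.

    Tiers per the text: 80 percent on loans of $5 million or less,
    70 percent between $5 million and $10 million, 60 percent on loans
    of more than $10 million; up to 90 percent on loans of $10 million
    or less with the Administrator's approval.
    """
    if loan_amount > 25_000_000:
        raise ValueError("exceeds the $25 million maximum guaranteed loan")
    if administrator_approved and loan_amount <= 10_000_000:
        return 90
    if loan_amount <= 5_000_000:
        return 80
    if loan_amount <= 10_000_000:
        return 70
    return 60

print(guarantee_percentage(4_000_000))   # 80
print(guarantee_percentage(8_000_000))   # 70
print(guarantee_percentage(12_000_000))  # 60
```

For example, on a $4 million loan the lender's exposure after an 80 percent guarantee would be only $800,000, which is why the tiered schedule shifts more risk to lenders as loan size grows.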
IRP loans are direct government-funded loans made mostly to nonprofit community development organizations, and, to a lesser extent, to for-profit and nonprofit cooperatives, public bodies, and Indian tribes. For example, some electricity cooperatives that borrow from USDA’s Rural Utilities Service (RUS) have obtained IRP loans. Individuals are not eligible to obtain IRP loans nor are for-profit commercial companies. All IRP loans are made for relending. Specifically, the IRP loan funds are deposited into a revolving fund that the RBS borrower—an intermediary—has established. The intermediary relends the money in the revolving fund to its borrowers, which may be individuals, public or private organizations, or any other legal entity. A recipient can use the funds it obtains from the revolving fund to finance just about any project related to business or community development in rural areas. IRP loan funds are not allowed for certain purposes, including funding gambling facilities, race tracks, and golf courses. RBS’ approval is required for the relending of the IRP loan funds. The maximum loan currently allowed is $2 million for the first loan that an IRP borrower obtains and $1 million per fiscal year for any subsequent loans. The maximum total IRP debt that a borrower can have outstanding is $15 million. RBS charges borrowers a 1-percent interest rate on all IRP loans. The interest rate on a loan from the revolving fund is the rate agreed to by the intermediary and its borrower. RBS does not specify what this rate should be; the agency’s regulations state that intermediary borrowers should charge the recipients the lowest rate necessary to cover the debt service costs on outstanding IRP loans, a reserve for bad debts, and administrative costs. However, RBS does not track the rates charged by its borrowers to revolving fund recipients. An IRP borrower does not have to be located in a rural area to obtain a loan. 
The intermediary’s borrower, however, is to be located in a rural area, which, for this program, is an unincorporated area or an incorporated area that has a population of no more than 25,000. The basic authority for the IRP loan program is 42 U.S.C. 9812, as amended by section 1323 of the Food Security Act of 1985 (P.L. 99-198, Dec. 23, 1985), as amended. RED loans, which are also direct loans, are made to entities that have outstanding RUS electricity or telecommunications loans or to former RUS borrowers that repaid their electricity loans early at a discount. RED loans are not available to former RUS borrowers that repaid their loans with scheduled payments. All RED loans are made for relending, and the loan funds are targeted to a specific project. Specifically, the RED loan funds are deposited into a fund that the RUS borrower has established. The RUS borrower relends the money to other borrowers, which may be any public or private organization or other legal entity, for an economic development and job creation project. These projects include new business creation, existing business expansion, community improvements, and infrastructure development. RED loan funds cannot be used for certain purposes, including the RUS borrowers’ electricity or telecommunications operations or a community’s television system or facility, unless tied to an educational or medical project. RBS’ approval is required for the relending of the RED loan funds. The maximum RED loan in any year to an RBS borrower is 3 percent of the appropriated loan level for the year, rounded to the nearest $10,000. For example, the maximum RED loan in fiscal year 1998 is $750,000, which is 3 percent of the $25 million appropriated loan level for the year. There is no maximum number of RED loans that an RBS borrower may receive nor a maximum debt level that an RBS borrower may accumulate. RED loans are interest free, and RBS requires that loan funds be relent interest free. 
However, RBS’ borrowers are allowed to charge a loan-servicing fee equal to 1 percent of the unpaid principal owed on the loan. Section 313 of the Rural Electrification Act of 1936, as amended (7 U.S.C. 940c), which authorizes the RED loans, provides that RUS’ borrowers are allowed to make advance payments to USDA on their RUS loans and to earn interest at a rate of 5 percent on the advance payments. RBS is authorized to use the differential between the earnings on these advance payments and the 5-percent interest or to use other available funds to cover the costs of the RED loans. Rather than allowing RBS to use the differential to cover the subsidy costs of RED loans during fiscal years 1993 through 1997, the Congress provided USDA with separate appropriations. A rural area for a loan in the RED program parallels a rural area for an initial RUS electricity loan, which is an area that has less than 2,500 residents. However, RBS officials said that a RED loan can be made for a project that is located in an area that has a higher population level if the project serves or provides employment for residents of an area that meets the 2,500-population threshold. This appendix contains information on RBS’ estimated subsidy cost for loans made during fiscal year 1993 through fiscal 1997. The appendix also includes estimates of the administrative cost of operating each of the loan programs during these 5 fiscal years. Information that describes the credit reform procedures in the Federal Credit Reform Act of 1990 is also provided. Tables II.1 through II.3 contain information on RBS’ estimated subsidy cost for making and guaranteeing rural business loans and its estimated administrative costs for operating the business loan programs during fiscal year 1993 through fiscal 1997. For example, table II.1 shows that a large part of the estimated subsidy costs in each year were for IRP loans. 
Table II.2 shows that the estimated administrative costs were highest with the guaranteed B&I loans. Table II.3 shows that RBS’ estimated costs totaled about $290 million. The two key principles of credit reform contained in the Federal Credit Reform Act of 1990 center on the (1) definition of cost in terms of the present value of the estimated cash flow over the life of a credit instrument and (2) inclusion in the budget of the costs of credit programs before direct or guaranteed loans are made or modified. Credit reform requirements separate the government’s cost of extending or guaranteeing credit, called the subsidy cost, from administrative and unsubsidized program costs. Administrative expenses receive separate appropriations; they are treated on a cash basis and reported separately in the budget. The unsubsidized portion of a direct loan or loan guarantee is expected to be recovered from the borrower. The Credit Reform Act defines the subsidy cost of direct loans as the present value of estimated loan disbursements, repayments of principal, and payments of interest and other payments by or to the government—over the loan’s life—after adjusting for projected defaults, prepayments, fees, penalties, and other recoveries. It defines the subsidy cost of loan guarantees as the present value of cash flows from estimated payments by the government (for defaults and delinquencies, interest rate subsidies, and other payments) minus estimated payments to the government (for loan origination and other fees, penalties, and recoveries). Permanent, indefinite appropriations are available should the appropriated subsidy cost be less than the estimates in a later fiscal year. Before credit reform, credit programs—like other programs—were reported in the budget on a cash basis. As a result, it was difficult to make appropriate cost comparisons between direct loan and loan guarantee programs and between credit and noncredit programs. 
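The present value calculation underlying the subsidy cost definition can be illustrated with a short sketch. The loan terms and the 5 percent discount rate below are hypothetical values chosen only for illustration; they are not figures from the report:

```python
# Illustrative sketch (hypothetical numbers): under credit reform, the subsidy
# cost of a direct loan is the present value of disbursements minus the present
# value of estimated payments back to the government over the loan's life.

def present_value(cash_flows, discount_rate):
    """Discount a list of (year, amount) cash flows back to year zero."""
    return sum(amount / (1 + discount_rate) ** year for year, amount in cash_flows)

# A $1,000 interest-free loan disbursed now, repaid in two $500 installments
# in years 1 and 2, discounted at an assumed 5 percent government borrowing rate.
disbursement = 1_000
repayments = [(1, 500), (2, 500)]
subsidy_cost = disbursement - present_value(repayments, 0.05)
print(round(subsidy_cost, 2))
```

Because the repayments are interest free, their present value falls short of the amount disbursed, and that shortfall is the subsidy cost recorded in the budget.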
Credit programs had different economic effects than most budget outlays, such as the purchase of goods and services, income transfers, and grants. In the case of direct loans, for example, the fact that the loan recipient was obligated to repay the government over time meant that the budgetary impact of a direct loan disbursement could be much less than other budget transactions of the same dollar amount. This lower budgetary impact also created a bias in favor of loan guarantees over direct loans. Loan guarantees appeared to be free, while direct loans appeared to be expensive because the budget did not recognize that at least some of the loan guarantees would default and that some of the direct loans would be repaid. The Credit Reform Act changed this treatment for direct loans and loan guarantees made on or after October 1, 1991.

The following are GAO's comments on USDA's letter dated November 9, 1998.

1. The final report was revised to reflect USDA's comment.

2. USDA's comment on the number and total dollar value of guaranteed B&I loans approved in fiscal year 1993 overlooks 94 loans valued at $87,401,900, which were approved under an emergency supplemental appropriation authorization. These additional loans are contained in the automated obligation records that we obtained from the Finance Office of USDA's Rural Development (RD) mission area and were reported in a USDA appropriation request as having been approved in fiscal 1993. We have added a note to table 1 stating that the fiscal year 1993 information includes loans made under this emergency appropriation provision.

3. Our report is based on the automated obligation records that the RD mission area's Finance Office provided us for each fiscal year. These automated records show that RBS obligated $421.6 million of guaranteed B&I loans for fiscal year 1995. However, USDA commented that it obligated $2 million more, for a total of $423.6 million. We rechecked the automated records that we had been provided and did not identify additional obligations totaling $2 million. Subsequently, officials in the Finance Office said that the fiscal year 1995 automated record that we were provided did not include adjustments that had been made to the agency's master loan file, which increased the total obligations by approximately $2 million. We have adjusted the financial statistics on the guaranteed B&I loans to include these additional obligations.

4. We agree and have adjusted the financial statistics accordingly. Reports from the RD mission area's Finance Office show that 19 loans with a total value of $6,825,000 had been obligated in the first half of fiscal year 1998. The draft reviewed by USDA included four additional loans valued at $1,680,000 that had been approved but not obligated in the first half of fiscal year 1998.

5. We agree and have adjusted the subsidy cost amount and other appropriate statistics. The draft reviewed by USDA had the subsidy costs that USDA reported in its annual appropriation requests, including $2.6 million for fiscal year 1996.

6. USDA states that, as more loans are made, the delinquency rate decreases, implying that the most recent loans are more financially secure. Our view is that there has simply been less time for delinquencies to occur on loans made most recently, and the delinquency rate for these loans may well increase as time passes.

7. USDA states that only a small portion of loans made in recent years has resulted in losses. It is reasonable to expect a low level of losses on recently made loans. (See comment 6.)

8. We correctly rounded $148,447,000 as $148.4 million.

9. Differences between the amounts that we present and those suggested by USDA are due to rounding, as stated in the report.

This appendix contains information on our objectives, scope, and methodology in conducting this review. 
Concerned about the financial status of RBS’ business loan programs, the former Chairman of the House Committee on Agriculture asked that we report on (1) the number and dollar value of loans approved by the agency, (2) the federal government’s costs associated with the agency’s loans, and (3) the financial condition of the agency’s loan portfolio, including the losses incurred. In order to provide relatively current information on RBS’ lending and portfolio, we focused on fiscal year 1993 through the first 6 months of fiscal 1998; for information on the subsidy and administrative costs of the loan programs, we focused on fiscal year 1993 through fiscal 1997 (the latest cost information readily available when we conducted our work). To compile background information and to gain an understanding of how the business loan programs operate, we interviewed numerous RBS officials, including the Deputy Administrator for Business Programs, the Directors of the Processing and the Servicing Divisions, and the Acting Director and a Rural Development Specialist in the Speciality Lenders Division. We reviewed the basic statutory authority for the programs—the Consolidated Farm and Rural Development Act contains the basic statutory authority for the B&I loans; 42 U.S.C. 9812, as amended by the Food Security Act of 1985, authorizes the IRP loans; and the Rural Electrification Act authorizes the RED loans. We also reviewed RBS’ implementing regulations and operating instructions, and its various publications, pamphlets, and reports that describe the loan programs. Additionally, we reviewed USDA’s Budget Explanatory Notes for Committee on Appropriations for fiscal years 1995 through 1999. Furthermore, we reviewed prior reports addressing the loan programs that were issued by USDA’s Office of Inspector General and by us. 
Finally, we reviewed the provisions that apply to RBS and its loan programs that are contained in the Federal Crop Insurance Reform and Department of Agriculture Reorganization Act of 1994. To compile information on the number and dollar amount of loans that RBS approved (both direct and guaranteed) during fiscal year 1993 through the first half of fiscal 1998, we used RBS’ automated files and its various loan reports. To compile information on RBS’ estimated subsidy costs and administrative costs, we used USDA’s budget explanatory notes. Where information on the administrative costs was not available in the notes, we made estimates that were based on, for example, reported transfers of funds from the Rural Development Insurance Fund to RBS or obtained estimates from the Budget Division in the Rural Development mission area. The descriptive information on credit reform was extracted from our prior reports. To compile information on preapplications and applications for guaranteed B&I loans at the end of fiscal year 1997, we used a report from the Rural Community Facilities Tracking System, which is an automated information system used by RBS to manage the loan programs. Our analysis of the financial conditions of RBS’ portfolio covered fiscal year 1993 through the first half of fiscal 1998. To determine the financial condition of the three loan programs, we reviewed data contained in RBS’ automated files, the agency’s financial loan reports, and other information that RBS provided us. We used these data sources to compile information on the outstanding principal in each program and the portion of outstanding principal that was owed by delinquent borrowers at the end of fiscal year 1993 through the first half of fiscal 1998 and the losses that RBS has incurred during these years. We did not adjust the outstanding loan amounts to reflect the allowance for losses that RBS includes in its financial statements nor did we assess the adequacy of reserves on the loans. 
Additionally, to obtain information on borrowers that were not delinquent but were identified by the agency's field office officials as being problem borrowers, we obtained reports from the Rural Community Facilities Tracking System. We also cross-matched these borrowers with RBS' automated loan files to determine the outstanding principal owed by each borrower. Most of the financial data presented in this report are unaudited information that we extracted from RBS' reports and automated records. We did not verify the accuracy of the information contained in the agency's reports and automated records. However, we did cross-match information in the various automated files that we used, and we also cross-matched the information we developed with the agency's financial loan reports. We conducted our review from May through October 1998 in accordance with generally accepted government auditing standards. USDA reviewed a draft of this report. The Department's comments are contained in appendix III. Charles M. Adams, Assistant Director; Jerry D. Hall; Patrick J. Sweeney; Larry D. Van Sickle
Pursuant to a congressional request, GAO provided information on the: (1) number and dollar value of business assistance loans approved by the Department of Agriculture's Rural Business-Cooperative Service (RBS); (2) federal government's costs associated with the agency's loans; and (3) financial condition of the agency's loan portfolio, including the losses incurred. GAO noted that: (1) RBS approved more than 2,900 rural business loans during fiscal year (FY) 1993 through the first 6 months of FY 1998; (2) these loans totalled about $3.2 billion; (3) more than three quarters of these loans and almost 90 percent of the total loan amount were guaranteed business and industry loans; (4) only 2 percent of the loans were direct government-funded business and industry loans; (5) the remaining loans were direct loans under the intermediary relending program and the rural economic development program; (6) the estimated total cost of these loan programs was about $290 million during FY 1993 through FY 1997; (7) of this amount, the subsidy costs of the loans, which primarily involve the estimates of default costs and interest rate subsidies, were almost $195 million; (8) administrative costs, which cover estimates of salaries and other expenses associated with operating the programs, totalled about $95 million; (9) as of March 31, 1998, the unpaid principal on the RBS's outstanding guaranteed and direct loans totalled about $2.2 billion; (10) delinquent borrowers held about $116 million--$112 million on guaranteed business and industry loans and about $4 million on direct business and industry loans and intermediary relending loans--or 5.4 percent of the total outstanding principal; (11) furthermore, from the start of FY 1993 through March 31, 1998, the agency incurred loan losses totalling about $266 million: about $264 million on guaranteed business and industry loans and about $2 million on intermediary relending loans; and (12) the agency did not experience any losses 
on debt associated with direct business and industry loans or with rural economic development loans.
In the context of international corporate taxation, countries base their method of taxation on two factors: the residence of the taxpayer and the source of the income to be taxed. The U.S. government taxes U.S. corporations largely on a residence basis, meaning that the worldwide income (both domestic and foreign) of corporations that are incorporated (have residence) in the United States is taxed by the United States. Alternatively, most other countries, including most OECD member countries, use a largely source-based or territorial approach that exempts certain foreign-earned income of their domestic corporations from taxation. In this latter case, they assert jurisdiction to tax income that is sourced within the taxing country and not the income earned abroad. Regardless of the tax system, some countries—often called tax havens—assess little or no corporate income tax. The U.S. worldwide approach is sometimes called a hybrid system because it has some features that resemble the territorial approach, such as deferring tax on some foreign income earned by foreign corporate subsidiaries until that income is remitted or "repatriated" to the U.S. parent company (often in the form of a dividend payment). Both the worldwide and territorial systems provide incentives for corporations to shift income to low-tax jurisdictions: under the worldwide system, the incentive is to take advantage of the deferral of taxation until income is repatriated, while, under the territorial system, the incentive is to take advantage of permanent exemption. To avoid double taxation, countries, like the United States, that tax on a worldwide basis provide a credit against domestic corporate tax liability for foreign taxes paid. 
In addition, countries maintain tax treaties with each other that cover a wide range of tax issues but have two primary purposes: (1) avoiding double taxation—when two or more countries levy taxes on the same income—and (2) enforcing the domestic tax laws of treaty partners. Treaties can prevent double taxation, which can occur, for example, when more than one country, under its domestic laws, considers a taxpayer to be a resident. In these cases, double taxation is often avoided through tax treaties that outline which country has jurisdiction to tax under specific circumstances. Large U.S. MNEs are often made up of groups of separate legal entities that have complicated ownership relationships. A parent corporation may directly own (either wholly or partially) multiple subsidiary corporations, which in turn may own subsidiaries themselves. Large MNEs have an incentive to shift profits among entities to reduce overall taxes by exploiting differences in countries’ laws and regulations defining taxable income, tax rates, and when tax is owed. Transfer pricing is the area of international tax law that involves setting prices for tax purposes for transferring property, both real and intangible, between foreign related parties such as from a parent MNE to a subsidiary. MNEs can use transfer pricing to artificially shift profits from one jurisdiction to another to reduce taxation. When an MNE transfers a product from one party, such as the parent corporation, to another, such as a foreign subsidiary, it has to determine the price of that product— effectively “selling” products within the same MNE. To lower their taxes, MNEs can shift profits by underpricing products or assets transferred from an entity located in a high-tax jurisdiction to a related party located in a lower-tax jurisdiction, thus increasing the profits reported by entities located in low-tax countries. The international standard for determining transfer prices is the “arm’s length” principle. 
A transaction between related parties meets the arm's length standard if the results of the transaction are consistent with the result that would have been realized if unaffiliated taxpayers executed a comparable transaction under comparable circumstances. That is, the transfer price should correspond to the price that unrelated parties would agree upon in an open market. While there is no single approach to determining a transfer price, in theory, the arm's length principle provides an objective measure of the value of goods by relying on market forces to determine the price. To illustrate, table 1 shows the example of a hypothetical chocolate company we call "Chockolet" that sells the right to use its trademarked name to its distributing subsidiary at an arm's length price and at a price below market value—that is, below the arm's length price. As the table shows, the MNE group is able to reduce its overall effective tax rate by shifting profits (through the lower price) to the distributing subsidiary located in a lower-tax jurisdiction. In this example, the Chockolet parent corporation spends $100 developing its trademark, which is an intangible asset that allows Chockolet to charge $160 for a license to distribute its product (a markup of 60 percent over its development costs). This is the "arm's length price"—the price at which it would be willing to license the use of the trademark to an unrelated distributing company. The Chockolet subsidiary has distribution costs of $30 on which it earns a normal return of $1.50 (a markup of 5 percent over its distribution costs). The sales price of the product is $191.50, which covers (1) the costs of development and distribution, (2) the 60 percent return on the trademark intangible asset, and (3) the 5 percent normal return on the distributor's costs. 
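The arithmetic of the Chockolet example can be sketched as a short calculation. The 30 percent parent-country and 15 percent subsidiary-country tax rates below are our assumptions, chosen because they reproduce the totals reported for table 1; the report itself states only the resulting tax liabilities and average rates:

```python
# Illustrative sketch of the two pricing cases in table 1.
# The statutory tax rates are assumed, not stated in the report.

PARENT_RATE, SUB_RATE = 0.30, 0.15            # assumed tax rates
DEV_COST, DIST_COST, SALES_PRICE = 100, 30, 191.50

def mne_taxes(license_fee):
    """Return (total tax, average tax rate) for a given transfer price."""
    parent_profit = license_fee - DEV_COST            # license revenue less development cost
    sub_profit = SALES_PRICE - license_fee - DIST_COST
    total_tax = parent_profit * PARENT_RATE + sub_profit * SUB_RATE
    return total_tax, total_tax / (parent_profit + sub_profit)

tax, rate = mne_taxes(160)   # arm's length license fee
print(f"arm's length: tax ${tax:.2f}, average rate {rate:.1%}")
tax, rate = mne_taxes(100)   # fee underpriced to equal development cost
print(f"underpriced:  tax ${tax:.2f}, average rate {rate:.1%}")
```

Under these assumed rates, the arm's length case yields roughly $18.2 in total tax (a 29.6 percent average rate), while underpricing the license shifts all profit to the subsidiary and drops the average rate to 15 percent, matching the figures in the text.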
In the arm's length pricing case in table 1, the Chockolet parent has a net profit of $60—the revenues of $160 from charging the arm's length license fee to its subsidiary less its development cost of $100. The distributor has a net profit of $1.50—the sales price of $191.50 less the license fee of $160 and its distribution cost of $30. Each corporation pays the local country tax rate on its net profits, which together add up to a total tax liability for the MNE group of $18.20 and an average tax rate of 29.6 percent. However, because these corporations are related, there is an incentive to underprice the license so that the Chockolet parent could shift profits to its subsidiary located in the low-tax jurisdiction. As the underpricing example in table 1 shows, by agreeing on a transfer price of $100 that is equal to the cost of developing the trademark, the Chockolet parent reports zero net profits, while the $60 in profits accorded to the trademark is shifted to the related distributing subsidiary, which is located in the lower-tax jurisdiction. This underpricing of its trademark results in an average tax rate of 15 percent for the MNE group. To address this base erosion and profit shifting, OECD issued a plan in October 2015 with 15 separate action items that would address different areas of potential weakness in international tax enforcement. Countries that adopted the BEPS plan agreed that they would implement four minimum standards: (1) countering harmful tax practices (action 5), (2) preventing treaty abuse (action 6), (3) increasing transfer pricing documentation with CbC reporting (action 13), and (4) increasing dispute resolution effectiveness (action 14). In addition to implementing country-by-country reporting, IRS is implementing procedures and practices to meet the minimum standard of improving dispute resolution. IRS is also implementing procedures for meeting the minimum standard of exchanging rulings pursuant to preventing harmful tax practices. 
According to Treasury, no additional steps need to be taken to meet the other minimum standards. Other countries may require legislation to implement the agreement. OECD revised its guidance on how risk bearing should be accounted for in transfer price contracts. Transfer price contracts can include compensation for bearing the economic consequences of an uncertain event should it occur, such as paying the cost of a product recall. OECD's revised guidance emphasizes that the transfer price should reflect actual economic activities, such as who controls decisions related to risk and who has the financial capacity to bear risk. Prior OECD guidelines included risk analysis based on functions performed. However, OECD was concerned that an emphasis on the terms of the contract for risk allocation could allow for manipulation and continued base erosion and profit shifting. This concern stems from the fact that a contract may specify which of the related parties is responsible for assuming the risk of a particular event occurring, but the specified party may not correspond to the party that is actually bearing the economic risk. Without examining the functions of each party and the economic substance of the contract, risk allocation could be used to shift profits between parties to reduce taxes, furthering base erosion. To illustrate how the specifications of a transfer price contract can be used to allow BEPS, consider the example of Chockolet, the company described earlier, which licenses an intangible asset—the Chockolet trademark—to its wholly-owned subsidiary. In this case, the contract specifies that the parent corporation receives royalty payments and that the subsidiary bears the risk (i.e., would bear the cost) of a product recall, were one to occur. Under this contract, the royalty payment the subsidiary pays to Chockolet would be less than it would be if the parent were specified in the contract as bearing the risk. 
In other words, by agreeing to bear the risk of a recall, the subsidiary can pay a lower royalty for the license. However, if the parent is actually making major decisions that mitigate recall risk, then it is apparent that the specifications of the contract do not represent the economic reality. In such a case, although the contract specified that the subsidiary assume risk, in reality the parent corporation would bear the cost of managing recall risk. Contractually assigning risk without economic substance is one way a parent could shift profits to lower-tax jurisdictions: the parent receives less revenue in the form of lower royalty payments, and the subsidiary has lower costs due to the smaller royalty payments. Therefore, risk can be used as part of a contract for the transfer price of an intangible asset to shift profits from one party in a high-tax jurisdiction to a related party in a lower-tax jurisdiction, resulting in base erosion. According to OECD and other subject matter specialists, the revised guidelines are likely to be an improvement over prior guidelines in reducing BEPS if they encourage MNEs and tax authorities to ensure that transfer prices are set based on real economic activity. The guidelines underscore the focus on supporting contractual agreements with economic activity. In particular, the revised guidelines focus on the ability of the parties to control risk by making decisions about risk-taking functions and on their financial capacity to bear risk. In the example above, focusing on the actual functions of both parties would reveal that the parent continued to manage recall risk, not the subsidiary as specified by the contract. Exhibiting control and the financial capacity to absorb risk would be stronger indicators of who bears the risk than the terms of the contract alone. 
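The effect of contractual risk allocation on the royalty can be illustrated with a short sketch. The base royalty, recall probability, and recall cost below are hypothetical figures of ours, not the report's; they show only the direction of the adjustment:

```python
# Hypothetical sketch: when the contract assigns recall risk to the subsidiary,
# the royalty it pays the parent can be reduced by the expected cost of bearing
# that risk, which is how risk allocation shifts profits between the parties.

BASE_ROYALTY = 160     # royalty if the parent bore the recall risk (hypothetical)
RECALL_PROB = 0.10     # assumed probability of a recall occurring
RECALL_COST = 170      # assumed cost of a recall if one occurs

expected_recall_cost = RECALL_PROB * RECALL_COST
risk_adjusted_royalty = BASE_ROYALTY - expected_recall_cost
print(expected_recall_cost, risk_adjusted_royalty)  # prints 17.0 143.0
```

If the parent in fact continues to manage the recall risk, the lower royalty reduces the parent's taxable income without a matching shift in economic burden, which is the manipulation the revised guidelines target.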
The scope for profit shifting is likely to be lower under an enforcement regime that considers the true allocation of economic activity rather than just the terms of contractual agreements. According to OECD, the revisions are intended to address the concern that taxpayers have sought to transfer risk, and the potential for profits associated with an allocation of risk, through contractual allocation alone, without accompanying economic substance. By increasing the emphasis on the actual conduct of the parties, the revised guidance could increase the ability of tax authorities and MNEs to better align the taxation of income with the creation of value. However, challenges remain for tax authorities and MNEs in applying the arm's length principle because the guidance does not account for all the ways that entities can bear risk. For example, even if a parent corporation's transfer prices align with economic activity, it cannot fully transfer risk to its subsidiary because any costs incurred by the subsidiary will be reflected in a change in the market value of the parent corporation. In general, related corporations do not have the same ability to transfer risk as unrelated corporations. This is the case even when the ability to control and the capacity to absorb risk have been isolated in one of the related parties to the transaction. The parent that transfers an intangible asset to its subsidiary has an equity interest in the subsidiary that ensures that it will gain or lose from any future anticipated or unanticipated profits and losses in a way that an unrelated corporation does not. The Chockolet Company described above illustrates the limitations on the ability of related parties to transfer risk. Chockolet's subsidiary, acting as its distributor in another country, is assigned all risk of product recall in that country by contract. 
Following the new guidelines, the contract stipulates that the subsidiary has the authority to manage any recalls and the financial ability to absorb the cost of a recall should it occur. To show the limitations on Chockolet's ability to shift risk to the subsidiary, we examine below the consequences of a recall for the value of the parent corporation Chockolet under the two possible financial situations of its subsidiary. The subsidiary has sufficient profits to absorb the cost. If the recall occurs, Chockolet incurs the risk of a reduction or cessation of the royalty payments and damage to the value of its brand. However, because the subsidiary is owned by the parent MNE, Chockolet has additional risks because, as a shareholder, its assets decline as the subsidiary's profits decline with the recall cost. Ultimately, the MNE as a primary, if not sole, shareholder will bear a loss in profits. The subsidiary does not have sufficient profits to absorb the costs. In this case, the subsidiary would require equity from the parent MNE or would have to borrow to pay for recall costs. If it receives equity from the parent to pay for the recall, then the MNE is directly affected by the loss in equity. If the subsidiary borrows from an unrelated party, then that debt affects the amount of leverage the MNE group has and could restrict the ability of the parent to borrow for its operations. Thus, the parent still bears costs and risks of a recall that it does not bear when dealing with unrelated parties. Unrelated corporations, on the other hand, have greater ability to transfer risk because the lack of an ownership relationship limits the potential impact of a recall on the asset value of the corporation. If Chockolet contracts with an independent distributor, it still incurs the risk of a reduction or cessation of the royalty payments and damage to the value of its brand. 
However, because the distributor is not owned by the parent MNE, Chockolet does not have the additional shareholder risks described above. As this example shows, the use of the arm's length principle (ALP) becomes problematic when allocating risk between related parties. As we noted earlier, the arm's length price is based on treating transactions between related parties as if they were unrelated. According to OECD and Treasury, the ALP allows risks to be assumed by one entity or another in a related party transaction in the same way they can be assumed with unrelated parties. One entity can be protected (or isolated) from the risk in a contract that assigns risk bearing to the party that undertakes certain functions. However, as we show in the example, the application of the ALP is problematic in this situation because risk cannot be allocated between parties by the very fact that they are related. In the case of recall risk, the allocation of risk to a subsidiary based on functions of control and capacity does not protect the MNE from exposure to the risk of loss in asset value. Unrelated parties can isolate this risk while related parties cannot, and this difference in the ability to transfer risk makes the application of the ALP more difficult. The difference between the way risk is incurred by related and unrelated parties is illustrated in figure 1. Chockolet, by being a shareholder of the distributing subsidiary, is affected by the subsidiary's loss in profits. In the figure, both the parent's and its subsidiary's net assets decline by the recall cost of $170, and the total value of the MNE shrinks from $600 to $430 (as illustrated by the reduction in the relative size of the circles in the figure). Therefore, the parent is affected, which would not be the case between unrelated parties who do not own shares of each other's assets. 
As the figure illustrates, the value of Chockolet is $300 when it does not own the distributor, and this value is unaffected by the recall cost, while the value of the unrelated distributor declines by the $170 recall cost. OECD guidance may be less effective than it could be in reducing BEPS because, without considering all the ways that entities can bear risk, uncertainty about the correct transfer prices remains that could still allow opportunities for profit shifting and, consequently, base erosion. For related parties, the focus on functions related to risk may have little or no relation to how much of the risk is actually borne by the parent company and its subsidiary. Whatever the terms of a contract or however resources are allocated between the related parties (as clarified by the revised guidelines), the parent bears some or all of the costs of the event at risk, in this case, a product recall. Identifying who ultimately bears the burden of the risk is similar to determining the incidence of a tax because both require consideration of how market changes affect who bears the economic costs. With tax incidence, the parties that pay the tax may not economically bear the cost, as when, for example, the payroll tax is collected and remitted by a business to the tax authority, but the employees bear the burden of the tax in the form of reduced wages. The tax incidence is determined by market adjustments, in this case, a drop in wages. Similarly, risk incidence is determined by market adjustments like the change in the asset value of the MNE in the recall case. The potential market adjustments that determine risk incidence depend on a number of factors, such as the type of transaction, the degree of the MNE's integration of corporate structure and function, and the MNE's market power in the products it buys or sells. 
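The related-versus-unrelated comparison in figure 1 can be sketched with the values from the text. The distributor's $300 stand-alone value is our assumption, chosen so that the related group totals the $600 stated in the text:

```python
# Minimal sketch of figure 1's arithmetic. The $170 recall cost and the $300
# value of Chockolet are from the text; the distributor's $300 stand-alone
# value is assumed so that the related group totals $600.

RECALL_COST = 170
PARENT_VALUE = 300
DISTRIBUTOR_VALUE = 300   # assumed

# Related case: the parent owns the distributor, so the recall cost reduces
# the consolidated value of the MNE group.
related_before = PARENT_VALUE + DISTRIBUTOR_VALUE
related_after = related_before - RECALL_COST

# Unrelated case: only the independent distributor bears the loss; the
# parent's value is unchanged.
unrelated_parent_after = PARENT_VALUE
unrelated_distributor_after = DISTRIBUTOR_VALUE - RECALL_COST

print(related_before, related_after)                        # prints 600 430
print(unrelated_parent_after, unrelated_distributor_after)  # prints 300 130
```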
Aligning profits with risk requires determining the economic incidence of the risk, which may not reflect either the explicit terms of the contract or the explicit assignment of risk to particular functions. IRS considers risk as part of its overall economic analysis when determining transfer prices during transfer pricing audits. According to Treasury and IRS officials, the expanded guidance on risk allocations under OECD guidelines is consistent with, but more detailed than, the current U.S. transfer pricing regulations under section 482 of the Internal Revenue Code. Under these regulations, the risk allocation between related parties should be examined by reviewing the terms of contracts between the parties and the economic substance of the transaction(s) between them. The regulations provide guidance on interpreting the terms of these intercompany contracts and on facts that are relevant to determining the economic substance of the transaction under the ALP method that treats related parties as unrelated. Treasury and IRS officials stated that among the factors relevant to determining whether a purported risk allocation has economic substance are (1) whether the pattern of the taxpayer’s conduct over time is consistent with the risk allocation, (2) whether the taxpayer has the financial capacity to assume the risk, and (3) the extent to which the parties exercise managerial or operational control over the business activities that directly influence the amount of income or loss realized. Treasury and IRS officials stated that risk allocations are critical to a proper application of the arm’s length standard. The section 482 regulations require review of many aspects of intercompany transactions when determining appropriate transfer pricing outcomes, notably the functions performed, the assets employed, and the risks assumed by the respective parties. 
However, the problems of applying the ALP to risk and to certain other aspects of the transaction complicate IRS's task of adequately assessing transfer prices. While the ALP is widely accepted for evaluating transfer prices and applicable in many transactions, difficulties may arise in certain situations, such as what to do about risk and how to allocate profits among the entities in an integrated MNE. As OECD notes, the separate entity approach may not always account for the economies of scale and interrelation of diverse activities created by integrated businesses. According to OECD, there are no widely accepted criteria for allocating the benefits of integration between associated entities. Our analysis indicates that risk is another area where the application of the ALP may be stressed because risk cannot be allocated between associated entities. These shortcomings of the ALP can have implications for economic efficiency and for the equity of transfer pricing administration. For example, if the issues concerning risk are not addressed, profit allocations may not be equitable because profits could be allocated, on the basis of purported risk bearing, to parties that do not actually bear the burden of the risk. As we have said in our prior reports, one criterion of a good tax system is the equitable treatment of all taxpayers. In addition, the revised guidance may be less likely to be effective in helping to reduce BEPS because, without considering risk incidence, uncertainty about the correct transfer prices remains that could allow profit shifting. OECD revisions to the transfer pricing guidelines are currently being implemented, so no data are available to estimate prospective administrative and compliance costs for IRS and U.S. MNEs. The change in costs, including whether the revised guidelines result in decreased or increased costs, depends on a number of factors—as explained below—that make even a qualitative estimate of the effect on cost difficult at this time.
According to Treasury officials, the revised guidance would not have a significant effect on U.S. tax administration because current U.S. regulations already embody both the arm's length standard and the role of functions performed, assets employed, and risks assumed in determining arm's length prices between related entities. However, the clarifications to the risk allocation guidance could increase administration costs for other countries to the extent those countries were not already incorporating functional analysis in reviewing transfer pricing cases. The compliance costs for U.S. MNEs from the revised risk allocation guidelines are also uncertain because they depend on what, if any, new or additional actions MNEs undertake to ensure compliance. For companies that set transfer prices based, at least in part, on relative risk, the costs can vary depending on whether the current specified risk allocation aligns with the parties' capacity to assume and ability to control risk. The costs can range from relatively small, if an MNE only needs to change existing contract language, to significantly higher, if broader changes in corporate strategies are necessary. According to subject matter specialists, the reliance on functional analysis required by the clarified risk allocation guidance may induce MNEs to rely even more on complex tax planning techniques to rearrange which entities control which processes, increasing their costs. According to subject matter specialists, because the revised guidelines emphasize the importance of real business functions, such as employment and investment, they may encourage MNEs to better align their actual business activities with their reported profits. The net effect on the U.S. economy depends on whether MNEs adjust actual activities to support current profit allocations, move reported profits to where their business activities are occurring, or choose to make no adjustment.
Furthermore, the effect also depends on whether these shifts happen among foreign countries or between the United States and other countries. According to subject matter specialists, because the revised guidance focuses on the location of decision making as support for allocating risk and profits, MNEs may be encouraged to decentralize decision making from the parent company to multiple jurisdictions to ensure that risk could be attributed to low-tax countries. This could result in some U.S. employees being relocated to tax-favored jurisdictions and reduced demand for employment in the United States. It is extremely difficult to predict how MNEs will respond. As subject matter specialists have noted in the literature, the complexity of transfer pricing administration and tax planning of those businesses makes it even more challenging to predict how they will respond to numerous countries changing guidelines in potentially different ways. However, given the limited scope of the revised guidelines relative to the entire tax system, as discussed below, it is unlikely that these changes would result in significant changes in U.S. investment or employment. We found no estimates of the effect of revisions to transfer pricing guidance on investment, employment, or revenue. However, estimates have been made of certain types of responses to other tax changes. According to the studies we reviewed, these measures can help provide a context for assessing the possible magnitude of any net change to the U.S. economy. Though evidence on how corporations shift profits in response to tax rates varies by study, the amount is generally low. Studies suggest that a 1 percentage point reduction in a country’s tax rate could lead to an increase in profits reported in that country by up to 5 percent, but the actual amount is likely to be much lower. 
The responsiveness of businesses' allocation of investment to changes in tax rates is also likely to be low, according to the studies we reviewed. One study found that a 1 percent reduction in a host country's tax rate leads to an increase in total foreign direct investment of between 0.3 percent and 1.8 percent. Another study conducted a meta-analysis of studies on foreign direct investment and found that the median measure of responsiveness to a 1-percentage-point reduction in the tax rate was an increase in investment of 2.49 percent. The profit and investment shifting responses discussed above are to a major change in a tax system—its tax rate. While it is difficult to predict how firms will respond to changes in tax administration, the responsiveness to a change in one aspect of the enforcement of a tax law—like a clarification of existing guidelines—could be much smaller and possibly even zero. Additionally, based on our analysis of the economic literature, the effect on labor is likely to be much smaller than the effect on investment because labor is generally considered much less able to move than investment, and thus would be even less likely to shift. The effect on U.S. tax revenue is also unclear. According to subject matter specialists, the revised guidelines increase the focus on functional analysis, which could result in increases in taxes assessed by higher-tax countries like the United States if more of the real business activities are concentrated in those countries. However, this effect may be mitigated to the extent that MNEs respond to the OECD changes by relocating business activities to lower-tax countries to better align real business activities with reported profit allocations. However, based on our review of the international tax policy literature, there may still be an effect on U.S. tax revenue even if MNEs make no significant adjustments to their activities.
Any increased foreign taxes that could result from an increased focus on economic function and risk would reduce U.S. revenues in two ways: (1) by increasing foreign tax credits when the income from that country is repatriated, and (2) by reducing U.S. shareholders' capital gains taxes due to reduced MNE profits. One factor contributing to base erosion and profit shifting has been a lack of consistent information on MNEs' business activities across tax jurisdictions. According to OECD, the new transfer pricing documentation addresses this deficiency and benefits tax authorities by providing information on MNEs' business activities that can be used to assess the risk of profit shifting and improve the deployment of audit resources. MNEs will be required to report transfer pricing policies under this approach. OECD states that requiring MNE parent entities to submit a single country-by-country (CbC) report to their tax jurisdictions, which, in turn, will share the report through government exchanges, ensures consistent documentation across countries while limiting compliance costs for MNEs. While Treasury officials said their current documentation and reporting requirements are sufficient for transfer pricing administration, they will be implementing one tier of the transfer pricing documentation. OECD's transfer pricing documentation consists of a three-tiered approach: a CbC report filed with the tax jurisdiction of the MNE's parent entity covering each jurisdiction in which the MNE operates, and a master file and local file submitted, where required, by the MNE to the tax administration in each jurisdiction in which it operates. On June 30, 2016, Treasury issued final regulations for implementing CbC reporting as one of the BEPS minimum standards. Treasury will not be requiring MNEs to submit master and local files. While U.S.
MNEs are not required to prepare or file transfer pricing documentation, IRS officials stated that most provide some documentation voluntarily to IRS that is typically more detailed and informative than the information found in a CbC Report. Additionally, they said that the information OECD recommends for inclusion in the master or local file is generally available to IRS upon request. Figure 2 illustrates the new transfer pricing documentation mechanism. The type of data CbC reporting provides—aggregated information for each tax jurisdiction in which the MNE operates, including revenues, pretax profits (or losses), income taxes, capital, accumulated earnings, number of employees, tangible assets, and business activities—will improve the transparency of MNEs' activities for tax authorities to the extent that such global information has not previously been reported. As we reported in 2012, IRS does not have information on how much business a non-U.S. MNE operating in the United States does in any particular country, including the United States. Master and local file documentation could potentially provide tax authorities substantially more information on global operations. However, tax jurisdictions are not bound to implement these reports. As outlined by OECD, the master file would consist of a high-level overview of the MNE's global business, such as organizational structure, business operations, intangibles, and financial and tax positions. The local file would include more detailed information on the local entity and intercompany transactions with entities in different tax jurisdictions, including transfer pricing methods. Local tax jurisdictions would need to implement these reporting requirements through local legislation and administrative procedures, presenting each jurisdiction with the opportunity to tailor the transfer pricing documentation for its own purposes. Stakeholders we spoke with raised concerns about unintended consequences.
Stakeholders noted the possibility that tax authorities may use the CbC Reports in ways in which they were not intended. While OECD explicitly states the CbC Reports should be used for high-level risk assessment and not for assessing tax or as a substitute for a detailed transfer pricing analysis, stakeholders worried that the reports could be misused for these purposes. In particular, the format and data items of the CbC Reports provide key business factors, such as revenues and number of employees, for each tax jurisdiction in which an MNE operates. The availability of such data may facilitate the ability to implement formulary apportionment, a factor-based tax system that would allocate an MNE's global profits based on the share of its business factors, which could lead to double taxation. Various stakeholders were concerned that, in extreme cases, concurrent misuse by several countries could result in assessments totaling more than 100 percent of MNE profits. Excessive taxation would lead to audit disputes, potentially requiring resolution among tax authorities, which, in turn, would result in additional costs for both the MNEs and the competent authorities. Stakeholders we spoke with were also concerned that countries may not implement transfer pricing documentation consistently. OECD officials stated OECD does not have any enforcement authority and will rely on a review mechanism and “peer pressure” to foster compliance with its recommendations. Nevertheless, countries generally implemented CbC information requirements that are consistent with the BEPS minimum requirements, while diverging from the BEPS master and local file recommendations. Countries varied on which entities are required to report and the threshold for reporting. For example, Germany set its threshold at €750 million for CbC reporting and €100 million for master file reporting, while the Netherlands has thresholds of €750 million and €50 million for CbC and master file reporting, respectively.
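The over-100-percent concern raised above can be illustrated with a small sketch. All profit figures, factor shares, and formula weights below are hypothetical; the point is only that inconsistent factor-based formulas applied by different countries to the same CbC data can claim more than the MNE's total profit:

```python
# Hypothetical illustration: two countries apply formulary apportionment
# to the same CbC data, but each weights the factor that favors it.

GLOBAL_PROFIT = 1000  # MNE's worldwide profit (hypothetical)

# Each country's share of the MNE's sales and employees, as might be
# read off a CbC report.
factors = {
    "Country A": {"sales": 0.60, "employees": 0.20},
    "Country B": {"sales": 0.40, "employees": 0.80},
}

# Inconsistent formulas: A uses a sales-only formula, B employment-only.
weights = {
    "Country A": {"sales": 1.0, "employees": 0.0},
    "Country B": {"sales": 0.0, "employees": 1.0},
}

claimed = {
    country: GLOBAL_PROFIT * sum(weights[country][f] * share
                                 for f, share in factors[country].items())
    for country in factors
}
total_claimed = sum(claimed.values())
print(claimed)        # {'Country A': 600.0, 'Country B': 800.0}
print(total_claimed)  # 1400.0, i.e. 140% of global profit: double taxation
```

If both countries used the same weights, the claimed shares would sum to exactly the global profit; the excess arises only from the inconsistency.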
IRS's costs for implementing, exchanging, and analyzing CbC reporting are uncertain, but it has taken steps to mitigate costs. IRS's use of existing data exchange systems should help to control the administrative costs of implementing country-by-country reporting. Other administrative costs related to CbC, such as form development and training, are similar to those of other comparable tax regulation changes. The compliance costs of firms are more uncertain and likely vary by size and type of firm. The main cost to firms of CbC is likely to be developing systems to provide consistent data to tax authorities. Because IRS does not intend to require the master file or local file reporting of the OECD approach, its cost of adopting BEPS transfer pricing documentation is essentially the cost of adopting CbC reporting. This cost is expected to include the costs of developing systems to collect, store, and exchange CbC reports, and of integrating CbC data with existing IRS applications. IRS does not have estimates of these costs, but factors that affect these costs can be identified. IRS has issued final regulations for CbC reporting, which require U.S. MNEs with consolidated annual revenue of at least $850 million to file Form 8975, Country-by-Country Report, electronically for tax years beginning on or after June 30, 2016. MNEs with fiscal years beginning on January 1, 2016, or before the effective June 30, 2016, date may file voluntarily, enabling IRS to provide the CbC Report to other tax authorities as appropriate. Form 8975 will be an attachment to the U.S. corporation income tax return, Form 1120, and subject to statutory confidentiality and disclosure protections. IRS officials said they expect the form—currently under review—that U.S. MNEs are to file in 2017 to be generally consistent with the BEPS CbC template. According to officials, IRS plans to issue additional guidance in early 2017 outlining the process for voluntary filing.
To try to minimize implementation cost, IRS plans to use its existing International Data Exchange Service (IDES) system with modifications to support CbC. IRS developed IDES to facilitate the secure transmission and exchange of Foreign Account Tax Compliance Act (FATCA) data. Pursuant to the 2010 law, foreign financial institutions are to report to IRS on accounts held by U.S. taxpayers. According to officials, IRS is examining IDES to determine the extent to which it can be used to exchange CbC Reports with other tax authorities. IRS officials stated that treaty and tax information exchange agreement (TIEA) partners enrolled to use IDES for FATCA reporting would be able to use IDES for CbC exchanges. Currently, Treasury has signed 87 FATCA intergovernmental agreements. According to IRS officials, the United States has treaties or TIEAs with all but 6 of the 44 jurisdictions that had agreed to exchange CbC Reports as of June 30, 2016. IRS officials said that the system cost is driven not by the number of CbC Reports but by the number of connections needed to exchange CbC Reports with other tax jurisdictions, which they believe will be significantly fewer than those required for FATCA. IRS spent about $50 million on FATCA operations in fiscal year 2015 and the first quarter of 2016. This cost excludes the development costs of IDES. According to IRS officials, the costs for developing and configuring IDES for FATCA were around $7 million annually in 2015 and 2016, and are expected to increase to about $9 million in 2017. While IRS has a plan for how it will collect CbC data, a decision is pending on whether to establish a new database to store CbC data or use an existing FATCA platform. According to IRS officials, their strategy for assessing transfer pricing risk is still evolving and will influence the decision. IRS still needs to decide how to use CbC data in conjunction with other international data.
IRS officials said they expect the CbC costs to be less than the cost of implementing FATCA because CbC information is exchanged only between tax authorities, while FATCA information is exchanged with individual banks as well as tax authorities. In addition to developing systems to collect, store, and exchange CbC Reports, implementation costs also include integrating CbC data with existing IRS applications used in transfer pricing risk assessment, developing Form 8975 and its instructions, online taxpayer assistance, and staff training. IRS officials said they do not have an estimate for these costs and will be in a better position to provide one after the IRS Information Technology Team develops its plan for filing and processing the CbC form. According to these officials, these costs are routine for implementing tax regulatory changes that involve a new IRS form, and the costs for CbC are not expected to be significantly different. IRS will also incur some additional operational costs for annually collecting, exchanging, and using CbC data. These operational costs will be affected by the number of CbC Reports IRS receives from U.S. and non-U.S. MNEs, in addition to the number of CbC Reports from U.S. MNEs that IRS will need to transmit to other tax authorities. IRS estimated that about 2,200 U.S. MNEs would meet the $850 million threshold for filing CbC Reports. IRS will need to transmit the CbC Reports of these U.S. MNEs to the tax jurisdictions in which the respective MNEs have business operations. Based on BEPS CbC reporting requirements, IRS will receive CbC Reports from non-U.S. MNEs with business operations in the United States, although these CbC Reports will come from the respective tax jurisdictions of the foreign parent entities. The European Commission estimates around 5,000 non-U.S. MNEs meet the BEPS €750 million threshold for CbC reporting. If all these large non-U.S.
MNEs have business operations in the United States, IRS could potentially be receiving about 5,000 CbC Reports from other tax jurisdictions. However, there is no reason to assume that all non-U.S. MNEs have operations in the United States. Previously, our analysis of 2008 IRS data found that at least 2,356 non-U.S. MNEs had business operations in the United States at that time. IRS officials said they do not expect significant additional costs associated with exchanging CbC Reports. IRS is uncertain whether it will need additional resources for CbC risk assessments. IRS officials said that they are studying the most efficient and effective means for using the CbC Reports and may use the reports in conjunction with other data sources IRS uses for risk assessment, such as the data reported on Form 5471, Information Return of U.S. Persons with Respect to Certain Foreign Corporations. In addition to the initial startup and operating costs, adopting BEPS also entails indirect costs affecting other areas of IRS. According to IRS officials, due to budgetary constraints, CbC implementation may require adjusting agency spending priorities and redirecting resources from other IRS efforts. However, until IRS determines how it will use the CbC Reports, it does not know whether it would need to divert resources for CbC risk assessments. According to OECD, CbC reporting could reduce compliance costs by standardizing documentation across tax jurisdictions and limiting the need for multiple filings. However, stakeholders we interviewed said that the reporting requirement will increase compliance costs because CbC information is not information U.S. MNEs routinely collect or report, and thus will require new data systems and processes. Moreover, stakeholders we spoke with believe that the new transfer pricing reporting requirements will increase audit activities and disputes, and potentially increase competitive and reputational risks.
Stakeholders we spoke with were concerned that the new transfer pricing documentation requirements would increase compliance burden. While stakeholders we interviewed generally commended Treasury officials for successfully reducing potential compliance burden by decreasing the amount of CbC information to be reported, they do not expect the additional reporting requirements to reduce compliance burden through streamlined or consolidated reporting, as OECD has suggested. Rather, they said that the reporting requirements will add to and in some cases duplicate current reporting requirements. For example, certain items, such as revenues, number of employees, pretax profits or losses, and income taxes, are reported under both the European Union's (EU) CbC reporting requirements for banks and investment firms and the BEPS CbC Report. According to stakeholders interviewed, most of the transfer pricing documentation implementation cost for MNEs comes from developing the necessary information technology systems and processes for CbC reporting, even though U.S. MNEs will also have to submit local and master files to local tax authorities where the U.S. MNE has operations. These stakeholders noted that the initial cost for implementing CbC reporting varies widely depending on the size and complexity of the MNE, and particularly on its ability to extract CbC information from existing data systems. For example, stakeholders explained, MNEs may need to extract CbC information from hundreds or thousands of entities that use incompatible data systems with different accounting methods, in different languages. This may be the case if a U.S. MNE expanded through mergers and acquisitions without centralizing its financial systems. Accordingly, the relative burden of CbC reporting for MNEs will vary. More transparent data on MNEs could help tax authorities better identify potential profit shifting and focus limited enforcement resources.
However, increased audit activity would increase costs to MNEs. OECD specifies that MNEs do not need to reconcile the information reported in their CbC Reports with similar information reported for other purposes. However, stakeholders we spoke with pointed out that if global CbC information is not reconciled with master and local file information, tax authorities might misinterpret the information or request additional information for clarification. One stakeholder pointed out that the compliance cost for a local audit often entails contracting for local tax professionals to manage the audit, including site visits and interviews, which may be in different languages, because MNEs generally do not have local tax personnel. For example, one large MNE with operations in about 120 countries has tax personnel in only a fourth of those countries. Stakeholders we spoke with also expect audit disputes to increase where local tax authorities misinterpret or use global CbC information in ways inconsistent with OECD's recommendations. Of particular concern is the use of CbC Reports for formulary allocation or tax assessment. Formulary apportionment is a factor-based tax system that would allocate an MNE's global profits based on the share of its business factors, such as sales, employment, or physical capital, located in a given country. The data included in CbC Reports are the type of factors that have been proposed for use under formulary tax systems. MNEs would incur additional costs to resolve disputes with local tax authorities, and some disputes may require resolution among authorities of different tax jurisdictions. According to OECD, disputes under mutual agreement procedures (MAP) were taking on average 2 years to resolve. Furthermore, OECD reported that at the end of 2014—the most recent data available—there were 5,423 unresolved MAP disputes among OECD members, more than double the number of cases in 2006.
Recognizing the need to improve the effectiveness and efficiency of dispute resolution, tax jurisdictions adopting BEPS minimum standards agreed to change their mutual agreement approach to dispute resolution. Whether the changes will actually result in greater efficiency remains to be seen. MNEs could also incur costs through the inadvertent or deliberate disclosure of transfer pricing information, resulting in competitive or reputational risks. According to stakeholders, CbC information could potentially reveal where an MNE is expanding or contracting, especially if the MNE has a single product in a particular country or is just entering a market. Such marketing information could be valuable to a competitor. Stakeholders also expressed concern about the exposure of confidential information in master files. For example, transfer pricing policies and cost contribution arrangements are considered confidential and are not publicly available. While MNEs need to prepare master files with accurate information, a balance must be struck so that the information reported would not be harmful if publicly disclosed. Although CbC reporting is protected from disclosure by law, MNEs could also incur costs if disclosure occurs and ultimately damages the MNE's reputation. Stakeholders said differences in the way items are reported on the CbC Report and on tax returns—such as profits and taxable income, which includes allowable deductions and credits—could lead to reputational risk. One such risk is the public perception that the MNE is not paying its fair share of taxes to the local jurisdiction. IRS safeguards CbC information through treaty and TIEA provisions governing tax information exchanges, but dissemination of CbC Reports to the many local tax authorities where U.S. MNEs operate increases the risk of disclosure.
IRS officials maintain that their ability to pause exchanges if foreign tax jurisdictions fail to meet the confidentiality requirements and data safeguards creates an incentive for foreign tax jurisdictions to safeguard CbC information. Stakeholders we spoke with expressed concern about how effective this strategy would be in deterring disclosure. The CbC Report includes information on where an MNE’s income, capital, and employment are located. Stakeholders pointed out that in cases where there is an apparent disparity between the location of profits and the location of employment and investment, MNEs may have an incentive to realign their profits with their real business activities. As we noted earlier, MNEs could respond by: (1) relocating profits to align with current location of real activity or (2) adjusting the location of their business activities, such as employment and physical capital, to better support the current location of reported profits. Based on international tax policy literature, the implications for U.S. revenue depend on how and whose profits are relocated. To illustrate, if U.S. MNEs relocate profits out of low-tax jurisdictions into non-U.S. higher-tax jurisdictions, that would result in higher foreign tax credits and lower corporate profits, which, in turn, would reduce U.S. corporate tax revenues. However, if U.S. MNEs relocate profits from low-tax foreign countries into the United States, then that would result in increased U.S. corporate tax revenues. Additionally, if foreign MNEs relocate profits from low-tax jurisdictions into the United States, U.S. corporate tax revenues would also increase. Alternatively, MNEs may relocate employment and investment to support the distribution of profits across jurisdictions. If U.S. MNEs relocate real business activities among foreign countries, there would be likely no effect on U.S. employment and investment. However, if MNEs relocate U.S. 
operations abroad to support reported profits, then that would lower U.S. investment, particularly to the extent that most BEPS by U.S. MNEs is done to avoid U.S. taxes. While some business groups suggested that MNEs would be more likely to move employment and investment to align their real activity with their reported profits, others thought that both reactions were likely. They also noted that some MNEs rely on differences in the taxation of entities across jurisdictions, such as the use of debt, to support their profit allocations. Such MNEs may be less affected by transfer pricing revisions than MNEs that rely on uncertainty in transfer pricing outcomes. MNEs that rely on the differences in taxation would be less likely to relocate their employment and investment. As we noted earlier, it is extremely difficult to predict how individual MNEs will react to the increased scrutiny, or if that scrutiny is even sufficient to change their behavior. Each MNE would make decisions weighing the relative costs of moving business functions against the tax savings. Thus, the net effect is unknown. OECD’s revised guidance expands prior guidance on transfer pricing to try to better ensure that profits are aligned with economic activities. The revisions address the concern that taxpayers have sought to transfer risk, and the compensation for bearing the risk, through contractual arrangements alone without accompanying economic substance. The guidelines are intended to emphasize that the ability to control risk and the financial capacity to absorb risk are key functions for supporting the contractual arrangements. By increasing the emphasis on the actual conduct of the parties, the revised guidance could increase the ability of tax authorities and MNEs to better align the taxation of income with the economic activity that creates that income. 
IRS considers risk as part of its overall economic analysis when determining transfer prices during exams, and views the OECD revised guidance as consistent with but more detailed than its own regulations. The arm’s length principle (ALP) is widely accepted for evaluating transfer prices and is applicable in many transactions. In particular, OECD guidance on risk allocation is based on the ALP. However, this principle has limitations that make its application to risk allocation problematic. Because of these limitations, uncertainty about the correct transfer prices could allow for profit shifting. We provided a draft of this report to the Commissioner of Internal Revenue and the Secretary of the Treasury for comment. In its written comments, reproduced in appendix III, IRS agreed with the importance of addressing risk allocation between related parties. IRS and Treasury also provided technical comments, which we incorporated where appropriate. Subsequently, we met with Treasury and IRS officials to discuss these technical comments and based on new information they provided we removed a recommendation that we had included in the draft. We had recommended that IRS clarify its guidance on risk allocation between related parties. We determined that the challenges of risk allocation are inherent in the arm’s length principle that is the international standard for determining transfer pricing and that clarifying IRS guidance would not necessarily address this issue. We are sending copies of this report to appropriate congressional committees, the Department of the Treasury, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact me at (202) 512-9110 or by email at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. 
Key contributors to this report are listed in appendix IV. Action 1 – Addressing the Tax Challenges of the Digital Economy The Organisation for Economic Co-operation and Development (OECD) proposes rules and implementation mechanisms for cross-border business-to-consumer transactions in the digital economy to facilitate the efficient collection of value-added tax. The digital economy's features and business models, such as e-commerce, online advertising, and online payment, exacerbate profit shifting risk. OECD rules and mechanisms, based on the country where the consumer is located, are also intended to level the playing field between domestic and foreign suppliers. As the digital economy continues to develop, work on this issue will continue and developments will be monitored. OECD expects a report on these efforts to be produced by 2020. Action 2 – Neutralizing the Effects of Hybrid Mismatch Arrangements OECD provides recommendations for designing domestic rules and treaty provisions to address hybrid mismatch arrangements—arrangements which exploit differences in tax treatment of entities or instruments between two or more tax jurisdictions to achieve little or no taxation. The rules and provisions are intended to increase the coherence of global corporate income taxation. For example, mismatches have resulted in multiple deductions for a single expense, deductions without corresponding taxation, and the generation of multiple foreign tax credits for a single amount of foreign tax paid. OECD also provides guidance on asset transfer transactions (such as stock-lending and repo transactions), imported hybrid mismatches, and the treatment of a payment that is included as income under a controlled foreign company tax system. Action 3 – Designing Effective Controlled Foreign Company Rules OECD provides recommendations for controlled foreign company (CFC) rules, or rules that apply to foreign companies that are controlled by shareholders in a parent jurisdiction. 
The CFC rules are to prevent the shifting of profits from the parent entity into foreign companies, in particular the shifting of mobile income, such as income from intellectual property (such as copyrights and patents), services, and digital transactions. Existing CFC rules have design features which do not effectively prevent base erosion and profit shifting (BEPS) and need to be revised to address changes in the international business environment. Action 4 – Limiting Base Erosion Involving Interest Deductions and Other Financial Payments OECD developed best practices for designing rules to prevent the use of interest deductions to shift profits. Since money is mobile and fungible, favorable tax outcomes can be achieved by adjusting the amount of debt among entities of the multinational enterprise (MNE). Moreover, debt at the MNE's individual entity level can be multiplied through intra-group financing. Additionally, financial instruments, which are economically equivalent to interest but have different legal forms, can be used to avoid restrictions on the deductibility of interest. OECD analyzed several best practices and recommends an approach to address these risks. Technical work continues on specific areas of the recommended approach and is expected to be complete in 2016. Additional work on the transfer pricing aspect of financial transactions also continues during 2016 and 2017. Action 5 – Countering Harmful Tax Practices More Effectively, Taking into Account Transparency and Substance OECD developed a methodology for assessing regimes that provide preferential tax treatment to determine their risk for profit shifting and a framework for mandatory spontaneous exchange of information on rulings that could give rise to such concerns. Such rulings include those for preferential regimes, cross-border unilateral advance pricing arrangements, and permanent establishments. 
Concern over harmful tax practices currently relates primarily to the risk of using preferential regimes to artificially shift profits and the lack of transparency on rulings. The exchange of information on future rulings commences on April 1, 2016. Exchanges for certain past rulings need to be completed by December 31, 2016. Agreement with both the methodology for assessing preferential regimes and the framework for compulsory spontaneous exchange of rulings is one of the four BEPS minimum standards. Action 6 – Preventing the Granting of Treaty Benefits in Inappropriate Circumstances OECD provides new treaty anti-abuse provisions with safeguards to prevent treaty abuse, including treaty shopping strategies through which a taxpayer who is not a resident of a particular tax jurisdiction attempts to obtain benefits of a tax treaty concluded by that tax jurisdiction. However, the adoption of treaty anti-abuse provisions is not sufficient to address tax avoidance strategies that circumvent domestic tax law, and such abuse must be addressed through domestic anti-abuse rules. Accordingly, OECD also provides recommendations for designing domestic rules to prevent the granting of treaty benefits inappropriately. Inclusion of anti-abuse provisions in tax treaties, including a minimum standard to counter treaty shopping as well as flexibility for implementation, is one of the four BEPS minimum standards agreed upon by countries adopting BEPS. Action 7 – Preventing the Artificial Avoidance of Permanent Establishment Status OECD provides recommendations for changes to the definition of permanent establishment in its Model Tax Convention—widely used as the basis for negotiating tax treaties—to address some commonly used tax avoidance strategies. 
As generally specified in tax treaties, a foreign enterprise's business profits are taxable in a jurisdiction only to the extent that the foreign enterprise has a permanent establishment in that jurisdiction to which the profits are attributable. For example, one strategy used to avoid taxes replaces subsidiaries that traditionally acted as distributors with commissionaire arrangements, where the entity would begin selling products in its own name (on behalf of the foreign enterprise). This arrangement would technically eliminate the foreign enterprise's permanent establishment status without any substantive change to the functions the former subsidiary performed. Profits could, then, be shifted out of the jurisdiction where sales occurred. Actions 8-10 – Aligning Transfer Pricing Outcomes with Value Creation OECD revised the existing transfer pricing rules used for tax purposes to determine the conditions, including price, for transactions between related entities within an MNE, which result in the allocation of profits among the MNE entities in different countries. The misapplication of the transfer pricing rules can lead to profit allocations that are not aligned with where the economic activity that produced the profits occurred. OECD focuses on: transfer pricing issues related to intangibles, which are inherently mobile and hard to value; the contractual allocation of risk and the resulting allocation of profits based on those risks; and other high-risk areas, such as transactions that are not commercially rational, profit diversion from the most economically important activities of the MNE, and the use of certain payments, like management fees, among related entities to erode the tax base. Further work on the transfer pricing guidance will be undertaken related to profit splits and financial transactions. Additionally, the guidance will be supplemented following OECD's work on the impact of BEPS on developing countries. 
Action 11 – Measuring and Monitoring BEPS OECD recommends improving access to and enhancing the analysis of existing data to measure and monitor BEPS, as well as evaluate the impact of actions taken to address BEPS. OECD points out that some useful information already collected by tax administrations is not analyzed or made available for analysis. Although currently available data across jurisdictions and MNEs are limited, the country-by-country information (required under action item 13) has the potential for significantly enhancing economic analysis of BEPS. Action 12 – Mandatory Disclosure Rules OECD provides a framework for designing mandatory disclosure rules to obtain early information on potentially aggressive or abusive transactions, arrangements, or structures and their users. With timely, comprehensive, and relevant information, tax authorities have the opportunity to respond quickly to tax risks through informed risk assessments, audits, or changes to legislation or regulations. For countries that already have mandatory disclosure rules, the framework can also be used to enhance the effectiveness of those rules. Action 13 – Transfer Pricing Documentation and Country-by-Country Reporting Taking into consideration the compliance cost for business, OECD developed a three-tiered standardized approach to its revised transfer pricing documentation to enhance transparency for tax administration. The country-by-country (CbC) report provides aggregated information, such as the MNE's global allocation of income, taxes paid, and number of employees, for each tax jurisdiction in which the MNE operates. A high-level overview of the MNE's global business operations and transfer pricing policies is reported in the master file. The local file provides detailed transactional transfer pricing documentation specific to the tax jurisdiction. 
Tax authorities can use information from these three documents to assess transfer pricing risks, determine the most effective deployment of audit resources, and determine whether to initiate an audit. CbC reporting is one of the four BEPS minimum standards. Action 14 – Making Dispute Resolution Mechanisms More Effective OECD developed measures to strengthen the effectiveness and efficiency of the mutual agreement procedure (MAP)—a mechanism independent of the ordinary legal remedies available under domestic law, through which the competent authorities of the tax jurisdictions may resolve differences regarding the interpretation or application of the OECD Model Tax Convention on a mutually-agreed basis. Improving the dispute resolution mechanism should reduce uncertainty for MNEs as well as unintended double taxation. Commitment to the effective and timely resolution of disputes through MAP, including the establishment of an effective monitoring mechanism to ensure effective implementation, is one of the four BEPS minimum standards. Action 15 – Developing a Multilateral Instrument to Modify Bilateral Tax Treaties Based on its analysis of tax and public international law, OECD concludes that a multilateral instrument on tax treaty measures for BEPS is desirable and feasible. Given the number of bilateral treaties, conforming to BEPS changes by updating the current tax treaty network would be highly burdensome, especially considering the substantial time and resources required to negotiate most bilateral tax treaties. OECD began developing a multilateral instrument to streamline the implementation of the tax treaty-related BEPS measures in May 2015, and plans to open the instrument for signature by December 31, 2016. Participation in the development of the multilateral instrument is voluntary and open to all interested countries on an equal footing. Moreover, participation does not entail a commitment to sign the resulting instrument. 
The practical challenges with using the arm's length principle (ALP) have led to employing other methods to value the sale of goods and services within a multinational enterprise (MNE). The ALP is difficult to use as a method for valuing market power, synergies, and other firm-related benefits unique to related party transactions within an MNE. These challenges drive the demand for continued refinements to transfer pricing guidelines that rely on the ALP and for alternative methods when the ALP proves difficult. As we have reported in the past, the arm's length principle works well to the extent that a market exists for the sale of the good in question or for a good that is comparable. As products become increasingly differentiated, finding a comparable product being sold in the marketplace becomes increasingly difficult. The difficulty of finding comparables becomes even greater in the case of unique intangible assets, such as goodwill, trademarks, and production techniques. The relationship between unrelated parties is fundamentally different than that between related parties. For example, intangible assets that are licensed to independent parties are at greater risk of losing value than if they are licensed to affiliates. Additionally, an MNE's greater size and centralized control can result in greater efficiencies and thus greater cost savings than if the companies were separate. Thus, transfer prices in such cases will not be the same as the arm's length price used by independent companies. For these reasons, tax authorities and MNEs have difficulty applying the arm's length principle for these assets and MNEs. One method used as an alternative to the arm's length approach is the profit split method. The profit split method can be used in the absence of comparable arm's length prices to allocate profits for certain transactions between two related parties by reference to the relative value of each party's contribution to the combined profit of the parties. 
The profit split method allows for greater flexibility by taking into account specific, possibly unique, facts and circumstances of the related parties that are not present with unrelated parties. However, the profit split method also can be used to allocate profits to lower-tax jurisdictions. A division of profits based on easily measurable factors like the share of expenditures may not, in every case, reflect where value is created. Profit splits that are more complex and reflect a variety of contributing functions, such as research and development (R&D), advertising, use of other intangible assets, and formulas, may better measure the actual contribution to value of the different parties. To illustrate how the profit split method can be used to shift profits from high-tax to low-tax jurisdictions, we return to the example of Chockolet. As before, Chockolet has developed an intangible asset in terms of its trademarked brand "Chockolet" that allows the company to sell its product at a higher price than a generic. In this case, Chockolet does not have a comparable arm's length price for the trademark license that can be used to allocate profits. In the absence of such a price, Chockolet adopts the profit split method to allocate profits between the parent company and its distributing subsidiary. Under this method, the profits in excess of the normal mark-up (what a generic would get), which are the profits that reflect the value of the trademark, are split according to the relative contribution of the two parties. In this simple example shown in table 2, relative contributions are measured as their share of total expenditures. As table 2 illustrates, Chockolet's product receives a 60 percent (or $160) mark-up—higher than what a generic would earn. Its subsidiary earns a 5 percent mark-up of $32 on its expenditures, which are routine in nature. 
In this case, the total profit in excess of the generic return is $55 ($192 - $137) and is split by multiplying it by each party's share of expenditures, giving Chockolet $47 (its 77 percent share) and the subsidiary $14 (its 23 percent share). As tables 1 and 2 show, the profit split method results in a lower overall effective tax rate for the MNE group than the effective tax rate of 29.6 percent, which would be achieved by using a price based on the arm's length principle, were it available. This profit split method, which is based on relative expenditures by the two related parties, assigns too much profit to the distributing subsidiary. Because the subsidiary performs routine distribution functions that did not contribute to creating the intangible asset, the contributions of the distributing subsidiary should only earn a normal return. But, under this method, the subsidiary is allocated some of the profits that reflect the value of the intangible. More complex profit split methods could alleviate this misallocation, but would reduce the simplicity that is often a key attraction of profit split methods. For example, an adjustment to the expenditure method shown in table 2 that distinguished between the types of expenditures and whether those expenditures earn a normal return or contribute to the value of the intangible asset may result in an arm's length pricing outcome, but would require being able to determine the types of expenditures that contribute to the value of the intangible. OECD's final report issued in October 2015 did not provide guidance on the profit split method but instead provided a plan for future guidance. This plan raised concerns about whether the future revisions will ensure that the application of this method is not subjective or arbitrary. In July 2016, OECD issued a discussion draft for public comment to clarify and strengthen its guidance on the profit split method. 
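The expenditure-share split described above reduces to a simple proportional allocation of the above-routine profit. The sketch below uses illustrative round numbers, not the figures from the report's tables; the function name and inputs are hypothetical.

```python
def expenditure_profit_split(residual_profit, expenditures):
    """Split the profit in excess of routine returns in proportion to
    each party's share of total expenditures."""
    total = sum(expenditures.values())
    return {party: residual_profit * spend / total
            for party, spend in expenditures.items()}

# Illustrative figures only: the parent spent 770, the distributing
# subsidiary 230, and 100 of profit remains after routine mark-ups.
split = expenditure_profit_split(100.0, {"parent": 770.0, "subsidiary": 230.0})
# The subsidiary's routine spending attracts 23 of the residual profit,
# even though that spending did not help create the intangible asset.
```

Because the denominator counts all expenditures, routine distribution costs draw residual profit toward the subsidiary, which is the misallocation described above.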
The additional guidance includes, for example, the recognition that profit split methods may enable tax authorities and MNEs to better align creation of value with profits than relying on the ALP in situations where related parties both contribute intangible assets. Additionally, the draft provides guidance on potential limitations in the use of some common profit splitting factors. For example, cost-based profit splitting factors should be used when there is a strong correlation between expenses incurred and relative value contributed. The discussion draft also cautions that cost-based profit splitting factors can be sensitive to country variations such as price levels or wages, which could distort the relative contribution and final profit allocation. In addition to providing more guidance for applying profit splits, the OECD final report and discussion draft introduced a new distinction between the types of profits to which a profit split method would apply. That new distinction is based on risk by recognizing two different types of profits that could be split—actual or anticipated profits. In a profit split of actual profits, the profits of the parties to the transaction are combined and the actual profits are split based on the relative contributions of each party. While the basis (i.e., formula or share) of the split of combined profits is established before profits are realized, the split is applied to actual, combined profits resulting from the transaction. Alternatively, the combined profits may be split based on the anticipated profits according to the contributions of each party. In the latter case, one party to the transaction receives a fixed payment for tax purposes based on its share of anticipated profits regardless of what actual profits are, while the other party receives whatever actual profit remains after the payment is made. 
The actual profit split method results in a greater sharing of the uncertain profit allocations than under a profit split of anticipated profits. To illustrate how these methods could be applied, we consider again the example of Chockolet where, in this case, the expected mark-up over costs for Chockolet is 60 percent. Under a profit split based on anticipated profits, the expected Chockolet profits of $55 ($192-$137) would be split according to the relative shares of expenditures, resulting in a fixed payment of $44 (and a net profit of $14) to the distributing subsidiary. The Chockolet parent would be assigned the residual Chockolet profits from actual final sales of $117 ($162-$44), resulting in a net profit of $17. In this type of profit split, the distributing subsidiary is assigned a guaranteed payment, while the transfer pricing outcome for Chockolet depends on actual outcomes of the business activities and risks. However, under a profit split based on actual profits in this example, the actual total profits from the producer and the subsidiary are combined and split according to the relevant share of expenditures between them. In this case, the transfer pricing outcome for the distributing subsidiary would also depend on factors influencing the final realized profits, and the subsidiary could be assigned less profit, as in the case below where profits were less than expected. The choice of splitting actual or anticipated profits could be manipulated to shift profits to lower-tax jurisdictions. For example, if the Chockolet parent in table 3 suspected that profits might end up lower than the anticipated profits specified in a contract, a profit split based on anticipated profits would be preferable as a way to shift profits to its lower-taxed distributing subsidiary. This information asymmetry could happen when the parent corporation knows more about the likely profit outcomes than tax authorities. 
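The contrast between splitting anticipated and actual profits can be sketched as follows. The amounts and the 25 percent subsidiary share are illustrative assumptions, not the figures from the report's table 3.

```python
def split_anticipated(anticipated_profit, actual_profit, sub_share):
    """Subsidiary gets a fixed payment set from anticipated profit;
    the parent keeps whatever actual profit remains."""
    fixed = anticipated_profit * sub_share
    return {"parent": actual_profit - fixed, "subsidiary": fixed}

def split_actual(actual_profit, sub_share):
    """Both parties share in the realized profit, whatever it turns out to be."""
    return {"parent": actual_profit * (1 - sub_share),
            "subsidiary": actual_profit * sub_share}

# Illustrative: 100 anticipated, only 60 realized, 25 percent subsidiary share.
a = split_anticipated(100.0, 60.0, 0.25)  # subsidiary keeps its fixed 25; parent absorbs the shortfall
b = split_actual(60.0, 0.25)              # subsidiary shares the shortfall, receiving only 15
```

When realized profits fall short of expectations, the anticipated-profit split leaves more profit (25 versus 15 in this sketch) with the party holding the fixed claim, which is why foreknowledge of likely outcomes creates the profit-shifting incentive described above.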
For example, the expected profits could be the result of a high probability of slightly lower profits combined with a low probability of very high profits. If the parent knows about the likely outcomes, while tax authorities only know the expected profit, the parent corporation would have an incentive to pursue the anticipated profit split as the appropriate method for allocating profits so that it will have a higher chance of shifting profits to a low-tax subsidiary. The OECD discussion draft recommends that a profit split based on actual profits (one that "shares risk") require a higher level of integration of activities. OECD asserts that MNEs that are highly integrated share risk, and therefore a profit split method that facilitates the ability to allocate (or share) risk across parties is the more appropriate method. However, as discussed previously, related parties cannot be treated the same as unrelated parties. The more closely related the parties are, such as a parent and its wholly owned subsidiary, the less able the parent is to export or share risk. Thus, care should be taken in using risk as an implicit factor for splitting profits. The United States does not differentiate between actual and anticipated profits as the OECD discussion draft has put forth. According to Department of the Treasury regulations, profit splits must be based on one of two methods: (1) the comparable profit split or (2) the residual profit split. The comparable profit split allocates profits based on the division of profits of unrelated parties whose transactions and activities are similar to those of the related parties in question. The residual profit split allocates profit or loss from the relevant business activity following a two-step process. The first step allocates operating income to each party to provide a market return for its routine contributions. 
In the second step, any residual profit associated with intangible assets is divided among the parties based on the relative value of their contributions of intangible property that were not accounted for as a routine contribution. The value of these contributions may be determined by factors such as market benchmarks, capitalized values of R&D, or other expenditures. Specialists have also expressed concerns that OECD's discussion draft may signal a greater reliance on profit split methods. They are concerned that if the guidance is not explicit on when a profit split method is appropriate, what factors should be used to allocate the profit, and how it can be applied, countries may increasingly rely on profit split methods as a way of moving away from arm's length pricing and toward adopting formulary apportionment. Formulary apportionment is a factor-based tax system that would allocate an MNE's global profits based on the share of its business factors, such as sales, employment, or physical capital, located in a given country. Profit split methods can have features of formulary apportionment. The concern is that, to the extent that a country relies on profit split methods over arm's length pricing, the allocation of profits based on some kind of functional analysis becomes more routine, making the adoption of a tax system that effectively taxes MNEs' profits based on formulary apportionment more likely. While it is uncertain whether an increased use of profit split methods would lead to the adoption of a formulary apportionment tax system, adopting formulary apportionment in an uncoordinated fashion, as we have reported in the past, would increase the probability of double taxation. James R. McTigue, Jr. (202) 512-9110 or [email protected]. In addition to the contact named above, Kevin Daly (Assistant Director), Ann Czapiewski, Bertha Dong, Edward Nannenhorn, Cynthia Saunders, Andrew J. Stephens, and Jennifer G. Stratton made key contributions to this report.
Globalization has increased incentives for multinational corporations to shift profits from country to country to use differences in the countries' corporate tax systems to reduce taxes. This profit shifting can lead to the erosion of U.S. and other countries' corporate tax bases, reducing tax revenues. OECD did a comprehensive analysis of corporate base erosion and profit shifting and, in the fall of 2015, issued 15 action plans to address the problem. GAO was asked to analyze the effects on the U.S. economy of adopting OECD actions. GAO analyzed the potential effects of the two actions furthest along in implementation: revised transfer pricing guidelines and new transfer pricing documentation, including country-by-country reporting. For these actions, GAO examined (1) how likely it is that the action would reduce BEPS, (2) what is known about the potential administrative and compliance costs of the action, and (3) what is known about the potential effects the actions could have on the U.S. economy. GAO reviewed documents, conducted a literature review, and interviewed officials from IRS, the U.S. Department of the Treasury, OECD, and trade groups of industries likely to be affected by the actions. In 2015, the Organization for Economic Co-Operation and Development (OECD) issued revised guidelines, including 15 actions to help reduce base erosion and profit shifting (BEPS) of multinational enterprises (MNEs). One action focuses on transfer pricing guidance with the intent of aligning MNE profits with the location of economic activity, and preventing corporations from shifting and assigning profits to lower-taxed related corporations by artificially setting below-market transfer prices of property and services. Another action makes MNE activities more transparent, through documentation and reporting shared among countries. 
Transfer Pricing Guidance : OECD's guidance emphasizes that transfer price analysis should reflect actual economic activities, such as who controls decisions related to risk and who has the financial capacity to bear the risk. This clarifies prior guidelines, which also included risk analysis based on functions but now focus on the parties' ability to control and finance risk. GAO found that OECD's revised guidance may reduce BEPS if it encourages MNEs and tax authorities to ensure that transfer prices are based on real economic activity. U.S. regulations consider risk as part of the analysis of transfer prices. The arm's length principle, which treats transactions between related parties as if they were unrelated, is widely accepted for evaluating transfer prices. However, its application to risk is problematic because related parties cannot transfer risk the way unrelated parties can. Without addressing the application of the arm's length principle under these situations, uncertainty about the correct transfer prices may allow for continued BEPS. Administrative costs of implementing the guidelines will be minor, according to Internal Revenue Service (IRS) officials, because IRS's transfer price reviews are consistent with the revised guidance. However, taxpayer compliance costs are uncertain because they will depend on how MNEs respond to the revisions. According to stakeholders and industry literature, U.S. employment and investment are unlikely to be significantly affected because the transfer pricing guidance affects a relatively narrow area of the tax code. Transfer Pricing Documentation and Reporting : OECD's guidance includes new country-by-country (CbC) documentation and reporting actions where information on MNEs' activities in different countries will be shared among the countries' tax authorities. 
GAO found that CbC reporting may decrease BEPS because more consistent information will be available to tax authorities on the worldwide activities of MNEs. According to IRS officials, CbC implementation costs are uncertain at this time, but can be mitigated by using existing systems and processes. However, MNE compliance costs would likely increase due to new data system needs, according to stakeholders. The economic effect of CbC reporting is uncertain because it depends on the extent to which MNEs move business functions to low-tax countries in response to the potential increased scrutiny of BEPS. GAO does not make recommendations in this report. GAO provided a draft of this report to IRS and Treasury for review and comment. IRS provided technical comments, which were incorporated, as appropriate.
The Great Lakes and their connecting channels form the largest system of freshwater on earth. Covering more than 94,000 square miles, they contain about 84 percent of North America's surface freshwater and 21 percent of the world's supply. The lakes provide water for a multitude of activities and occupations, including drinking, fishing, swimming, boating, agriculture, industry, and shipping for more than 30 million people who live in the Great Lakes Basin—which encompasses nearly all of the state of Michigan and parts of Illinois, Indiana, Minnesota, New York, Ohio, Pennsylvania, Wisconsin, and the Canadian province of Ontario. During the 1970s, it became apparent that pollution caused by persistent toxic substances, such as bioaccumulative chemicals of concern (BCCs), was harming the Great Lakes and posing risks to human health and wildlife. On average, less than 1 percent of the Great Lakes' water recycles or turns over each year, and many pollutants stay in place, settling in sediments or bioaccumulating in organisms. As a result, under the GLWQA of 1978, the United States and Canada agreed to a policy of prohibiting the discharge of harmful pollutants in toxic amounts and virtually eliminating the discharge of such pollutants. The two parties also pledged to develop programs and measures to control inputs of persistent toxic substances, including control programs for their production, use, distribution, and disposal. The concept of virtual elimination recognizes that it may not be possible to achieve total elimination of all persistent toxic substances. Some toxic substances may be produced by or as a result of natural processes, persist at background or natural levels, or cannot be eliminated for technological or economic reasons. In addition to agreeing to a policy calling for the virtual elimination of toxic pollutants, the 1978 GLWQA, as amended, also established a process and set of commitments to address the pollutant problem. 
Other joint United States and Canada toxic reduction efforts were initiated in subsequent years, in keeping with the objectives of the agreement. These included the 1991 Binational Program to Restore and Protect the Lake Superior Basin, which, among other things, established a goal of achieving zero discharge of designated persistent and bioaccumulative toxic substances from point sources in the Lake Superior Basin. In addition, recognizing the long-term need to address virtual elimination, the EPA Administrator and Canada’s Minister of the Environment signed the Great Lakes Binational Toxics Strategy in 1997, which provides a framework for actions to reduce or eliminate persistent toxic substances, especially those that bioaccumulate in the Great Lakes Basin. Agreements within the two countries also addressed the problem of toxic pollutants and the implementation of the GLWQA. In the United States, the 1986 Governors’ Agreement, developed by the Council of Great Lakes Governors, recognized that the problem of persistent toxic substances was the foremost environmental issue confronting the Great Lakes and committed the Governors to managing the Great Lakes as an integrated ecosystem. At that time, inconsistencies in state standards and implementation procedures became an increasing concern to EPA and state environmental managers. The Governors agreed to work together to, among other things, establish a framework for coordinating regional action in controlling toxic pollutants entering the Great Lakes Basin, increase federal emphasis on controlling toxic pollution, and expedite the development of additional national criteria or standards for toxic substances to protect both the ecosystem and human health. In Canada, the Canadian and Ontario governments entered into several agreements with each other over the last 30 years to address environmental problems in the Great Lakes.
These agreements, each referred to as the Canada-Ontario Agreement Respecting the Great Lakes Basin Ecosystem, included a focus on the control of toxic chemical pollution and runoff. In addition, a 2002 agreement outlines how these two governments will continue to work together to focus efforts to help clean up the Great Lakes Basin ecosystem. Several priority projects are planned under the agreement, including reducing the amount of harmful pollutants, such as mercury, that find their way into the Great Lakes. To further control toxic substances in the United States, efforts on the GLI began in the late 1980s to establish a consistent level of environmental protection for the Great Lakes ecosystem, particularly in the area of state water quality standards and NPDES programs for controlling point sources of pollution. As authorized by the Clean Water Act, the NPDES permit program controls water pollution by regulating point sources that discharge pollutants into U.S. surface waters. Under NPDES, all facilities that discharge pollutants from any point source into U.S. waters are required to obtain a permit that provides two levels of control: (1) technology based limits (discharge limits attainable under current technologies for treating water pollution) and (2) water quality-based effluent limits (based on state water quality standards). Point sources are discrete conveyances such as pipes or constructed ditches. Individual homes that are connected to a municipal system, use a septic system, or do not have a surface discharge, do not need an NPDES permit; however, industrial, municipal, and other facilities must obtain permits if their discharges go directly to surface waters. As of May 2005, there were nearly 5,000 facilities in the Great Lakes Basin that had NPDES permits, and over 500 of these were considered major facilities. 
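The interaction of the two levels of NPDES control described above can be sketched with a simple rule: the final effluent limit for a pollutant is generally the more stringent (lower) of the technology-based limit and the water quality-based limit. The snippet below is an illustrative sketch, not an EPA tool; the function name and the numbers are hypothetical.

```python
# Illustrative sketch of NPDES permit limit selection: the more
# stringent of the two controls governs. Values are hypothetical.

def permit_limit(technology_based: float, water_quality_based: float) -> float:
    """Return the controlling effluent limit: the lower (more
    stringent) of the technology-based and water quality-based limits."""
    return min(technology_based, water_quality_based)

# A hypothetical discharger: treatment technology can achieve 50 ug/L,
# but the receiving water's quality standard requires 20 ug/L, so the
# water quality-based limit controls.
print(permit_limit(technology_based=50.0, water_quality_based=20.0))  # 20.0
```

In this sketch, the water quality-based limit controls whenever treatment technology alone would not protect the receiving water, which is the situation GLI's stringent criteria were designed to address.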
In 1989, the Council of Great Lakes Governors agreed to join EPA in developing GLI because it supported the goal of consistent regulations among the Great Lakes states. The effort to develop GLI was under way when Congress amended the Clean Water Act with the passage of the Great Lakes Critical Programs Act of 1990. This act required EPA to publish, by June 1992, final water quality guidance for the Great Lakes System that conformed to the objectives and provisions of the GLWQA. It further required the states to adopt water quality standards, antidegradation policies, and implementation procedures consistent with the guidance no later than 2 years after it was published. If the states failed to adopt such water quality standards, policies, and procedures consistent with the guidance, EPA was to promulgate them not later than the end of the 2-year period. In making such a determination, EPA reviewed the states’ water quality standards, antidegradation policies, and implementation procedures for consistency with the guidance. To control toxic substances and protect aquatic life, wildlife, and human health, GLI sets forth water quality criteria for 29 toxic substances, such as PCBs, mercury, dioxin, and chlordane. These criteria include standards for 9 of 22 BCCs. GLI also contains detailed methodologies for developing criteria for additional pollutants and implementation procedures for developing more consistent, enforceable water quality-based effluent limits in discharge permits for point sources of pollution. The most common of the 22 BCCs currently being discharged from point sources in the Great Lakes Basin is mercury. Because mercury can be highly toxic and travel great distances in the atmosphere, it has long been recognized to have a wide range of detrimental effects on ecosystems and human health. When mercury is deposited within a water body, microorganisms can transform it into a very toxic substance known as methyl mercury.
Methyl mercury tends to remain dissolved in water and can bioaccumulate in the tissues of fish to concentrations much higher than in the surrounding water. The primary way people are exposed to mercury is by eating fish containing methyl mercury. Poisoning can result from eating fish contaminated with bioaccumulated methyl mercury, which is dangerous to certain adults, children, and developing fetuses. Three general principles guided the development of GLI: (1) to incorporate the best science available to protect the Great Lakes Basin ecosystem; (2) to promote consistency in standards and implementation procedures in Great Lakes states’ water quality standards while allowing appropriate flexibility; and (3) to reflect the unique nature of the Great Lakes Basin ecosystem by establishing special provisions for toxic substances, such as BCCs. Although improved consistency in Great Lakes states’ water quality standards and NPDES programs was a primary goal of GLI, implementing and supplemental regulations published by EPA provided flexibility to states in adopting and implementing GLI provisions in several areas. These regulations included relief from GLI provisions for point source dischargers through the use of existing NPDES program provisions such as variances, mixing zones, and compliance schedules. For example, provisions in GLI allow the states to grant dischargers variances for up to 5 years from GLI water quality standards, which are the basis of a water quality based effluent limitation included in NPDES permits. According to GLI, variances are to apply to individual dischargers requesting permits and apply only to the pollutant or pollutants specified in the variance. GLI has limited potential to incrementally improve water quality in the Great Lakes Basin because first, it primarily focuses on point sources, which are not the major source of certain toxic pollutants that currently affect the Great Lakes Basin. 
Moreover, once GLI was implemented, few NPDES permits included limits for BCCs because the pollutants were not present in discharges, and many of these BCCs were already regulated or banned before the GLI guidance was issued. Finally, for mercury, which is the BCC most frequently controlled in NPDES permits, GLI provisions provide flexible implementation procedures, including variances, that under certain circumstances are used by states to allow dischargers relief from the more stringent water quality standards, which may be either technically or economically unattainable. A primary focus of GLI is to establish consistent water quality standards within the Great Lakes Basin, which apply to all sources of pollutants but mainly to point sources. When the NPDES program was established 33 years ago, point sources of pollution were the major cause of poor water water quality in the Great Lakes Basin and were the program's focus. In implementing this program, it was recognized that controlling point sources was an important means of reducing pollutants discharged into waterways by requiring permits that specified allowable levels of pollutants. Since the introduction of the NPDES program there have been significant water quality improvements in the Great Lakes Basin. Currently, however, nonpoint sources of certain toxic pollutants are a significant threat to overall water quality in the Great Lakes Basin and other areas within the United States and Canada. Nonpoint sources of pollutants often affect overall water quality through runoff from agricultural processes or releases into the air from industrial facilities, which are then deposited into the Great Lakes. For example, major sources of mercury released into the air include coal-fired power plants, industrial boilers, and waste incinerators that burn materials containing mercury. Much, if not most, of the mercury entering the Great Lakes is from atmospheric deposition.
EPA Great Lakes National Program Office officials stated that air deposition is likely responsible for more than 80 percent of mercury loadings into the Great Lakes. Currently, nonpoint sources of pollution are more difficult to regulate than point sources because it is more difficult to determine the specific sources of pollutants. The dynamic nature of these various sources of pollution is illustrated below. Several state and environmental officials commented that while GLI resulted in states becoming more aware of the need to attain water quality standards for BCCs from point sources, it did not specifically address the larger problem of nonpoint sources of pollution. For example, Minnesota officials stated that they do not anticipate any water quality improvements from GLI for mercury, the most prevalent BCC in the Lake Superior Basin, because GLI does not specifically address nonpoint sources, such as atmospheric deposition. A 2004 state study estimated that 99 percent of mercury in Minnesota lakes and rivers comes from atmospheric deposition. The study concluded that although 30 percent of mercury atmospheric deposition in Minnesota is the result of natural cycling of mercury, 70 percent is the result of human activities, such as the release of trace concentrations that are naturally present in the coal used by power plants, and in the mining and processing of taconite ore, which is used to produce iron and steel. An estimated 10 percent of the mercury deposited in Minnesota comes from emissions within the state; these in-state sources are shown in figure 4. While the focus of GLI is on point sources, the importance of controlling nonpoint sources of pollution to improve overall water quality in the Great Lakes is recognized in GLI guidance.
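The Minnesota study's percentages can be combined arithmetically to show how small the in-state, human-caused share of lake mercury is relative to total loadings. This is a back-of-the-envelope sketch using only the figures reported above; the variable names and the normalization are illustrative.

```python
# Illustrative arithmetic combining the Minnesota study's reported
# percentages; total mercury loading is normalized to 1 for clarity.

total_lake_mercury = 1.0                 # normalize total loading to 1
atmospheric = 0.99 * total_lake_mercury  # 99% arrives via atmospheric deposition
human_caused = 0.70 * atmospheric        # 70% of deposition from human activities
natural = 0.30 * atmospheric             # 30% from natural cycling of mercury
in_state = 0.10 * atmospheric            # ~10% of deposition originates in Minnesota

print(f"human-caused share of lake mercury: {human_caused:.2f}")
print(f"in-state share of lake mercury:     {in_state:.3f}")
```

The arithmetic suggests that only about a tenth of the mercury reaching Minnesota waters comes from sources the state itself could regulate, which is consistent with officials' view that GLI's point source focus cannot address the problem.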
The guidance states that once GLI is implemented by the states, water quality criteria for pollutants and other provisions that are included in the guidance would be applied to nonpoint sources. However, according to the guidance, to be implemented, nonpoint source provisions would need to be enforced through the states’ own regulatory programs. GLI also promotes the use of total maximum daily loads (TMDL) as the best approach for equitably addressing both point and nonpoint sources. TMDLs for the Great Lakes are also addressed in the Great Lakes Strategy 2002, which was developed by the U.S. Policy Committee for the Great Lakes. The strategy has an objective that TMDLs for each of the Great Lakes and impaired tributaries will be completed by 2013; but according to EPA officials, TMDLs for BCCs have not been established for any of the Great Lakes, and only two TMDLs for BCCs have been completed for tributaries. While GLI identified many toxic pollutants, few NPDES permits currently limit the discharge of these pollutants, particularly BCCs, because they are either not present in discharge water or the pollutants are already restricted or banned. BCCs are still present in some facilities’ discharges and are regulated by NPDES permits, but while there are nearly 5,000 permits for facilities in the Great Lakes Basin, there are only about 250 discharge limits for BCCs, according to Great Lakes states’ officials. Five of the eight states reported that they had discharge limits for BCCs in the Great Lakes Basin. Further, not only are there relatively few BCC discharge limits in permits, but most, 185, are for mercury—with Michigan issuing the most discharge limits of the five states. The number of BCC discharge limits by state and pollutant is shown in table 1. Several of the pollutants addressed by GLI had their use restricted or banned by EPA in the 1970s and 1980s and therefore are not used by facilities or found in their discharges.
Of the 22 BCCs covered by GLI, at least 12 are either banned or are no longer produced in the United States. Some of the banned BCCs, such as toxaphene and dieldrin, are pesticides and insecticides that are likely to be present in the Great Lakes Basin water bodies as contaminated sediments from prior agricultural runoff rather than municipal and industrial point source discharges. Other BCCs, such as lindane, are no longer produced in the United States, while others, such as mirex and hexachlorocyclohexane, are no longer produced or used in the United States. See appendix II for BCCs identified in GLI and whether they have been banned, restricted, or are still in use. While the preceding factors limit GLI’s potential to improve overall water quality in the Great Lakes, its effective implementation is still important because the virtual elimination of toxic pollutants in the Great Lakes Basin remains a goal for the United States and Canada. Controlling point source pollution is still needed to meet this objective. Although point source discharges of toxic pollutants are not as widespread as nonpoint sources, point source discharges may create localized “hot spots” of elevated concentrations of BCCs. These areas can have potentially adverse effects on aquatic life, wildlife, and humans. For example, while the major sources of mercury are nonpoint sources, it is still the most prevalent BCC found in point source discharges overall in the Great Lakes, and heavy concentrations of mercury in these hot spots may result in its bioaccumulation in fish to levels that are dangerous to both humans and wildlife that consume them. Achieving GLI’s objective to have consistent water quality standards for controlling point sources of toxic pollutants may prove difficult, however, because of flexible implementation procedures that allow discharge of pollutants at levels greater than GLI water quality standards.
Many NPDES permits for facilities in the Great Lakes Basin allow the discharge of mercury at levels greater than the GLI water quality standard. Flexible implementation procedures such as variances are widely used to allow dischargers to exceed the strict GLI mercury water quality standard of 1.3 nanograms per liter of water (ng/L). GLI allows states to grant variances for complying with the mercury and other water quality standards under certain circumstances, such as if the imposition of water quality standards would result in substantial and widespread harmful economic and social impact. Variances are applicable only to the permit holder requesting the variance for up to 5 years and are only available for dischargers that were in existence as of March 23, 1997. New facilities are not eligible for variances and must comply with the water quality standard for mercury established under GLI. Officials in two states—Minnesota and Michigan—expressed concerns that new industrial facilities that discharge mercury may not locate in the state because of their inability to comply with the mercury standard. The use of variances for mercury became a more critical concern when new methods to measure the pollutant were approved by EPA in 1999, allowing mercury to be measured at a quantification level of 0.5 ng/L, below the GLI water quality standard of 1.3 ng/L. This method was 400 times more sensitive than the one previously used by EPA and allowed the very low GLI limits to be quantified for the first time, causing potentially widespread problems for Great Lakes Basin dischargers that discovered for the first time that they were exceeding the mercury water quality criteria, according to state NPDES program officials. Using the more sensitive method, many more facilities were found to have levels of mercury in their effluent that exceeded water quality standards. 
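The effect of the 1999 method on what could actually be documented can be sketched as follows. The quantification levels come from the figures above (0.5 ng/L for the new method, roughly 400 times coarser for the old one); the sample concentration and the helper function are hypothetical illustrations, not EPA procedures.

```python
# A minimal sketch of why the 1999 analytical method mattered: an
# exceedance can only be documented if the sample concentration can be
# quantified by the method in use. Sample value is hypothetical.

GLI_MERCURY_CRITERION = 1.3          # ng/L, GLI water quality standard
NEW_QUANTIFICATION = 0.5             # ng/L, 1999 EPA-approved method
OLD_QUANTIFICATION = NEW_QUANTIFICATION * 400  # ~200 ng/L, prior method

def measurable_exceedance(sample_ng_per_l, quantification_level):
    """Return True/False if an exceedance can be assessed, or None if
    the sample is below the method's quantification level."""
    if sample_ng_per_l < quantification_level:
        return None  # below quantification: exceedance cannot be shown
    return sample_ng_per_l > GLI_MERCURY_CRITERION

sample = 5.0  # ng/L: nearly 4x the criterion, yet invisible to the old method
print(measurable_exceedance(sample, OLD_QUANTIFICATION))  # None
print(measurable_exceedance(sample, NEW_QUANTIFICATION))  # True
```

This is why many facilities "discovered for the first time" that they were exceeding the criterion: their discharges had not changed, but the measurement floor dropped below the standard.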
State and EPA officials also determined that no economically feasible treatment technologies existed to reduce mercury to the lower level, and states were unwilling to issue permits that placed facilities in noncompliance. Michigan officials stated that they knew of only one permitted facility that was able to comply with the lower standard. As a result, states issued variances under their GLI regulations, which provide the most efficient short-term relief for “ubiquitous” pollutants, and EPA encouraged states to consider variances for multiple dischargers on a watershed basis, where appropriate. EPA wanted to provide the states appropriate flexibility in adopting and implementing GLI’s requirements, while also maintaining a minimum level of consistency. To facilitate granting variances for numerous facilities exceeding the mercury standard, three states—Indiana, Ohio, and Michigan—adopted procedures that expedited and simplified the variance application and granting process. While variances are widely used under GLI, mixing zones and compliance schedules are also options that states may use under GLI. Mixing zones are areas around a facility’s discharge pipe where pollutants are mixed with cleaner receiving waters to dilute their concentration. Within the mixing zone, concentrations of toxic pollutants, such as mercury, are generally allowed to exceed water quality criteria as long as standards are met at the boundary of the mixing zone. Several Great Lakes states no longer allow the use of mixing zones for BCCs in their permits, and GLI authorization for their use by all existing BCC dischargers expires in November 2010. Mixing zones, as with variances, are not authorized for new dischargers. Compliance schedules are another option and grant dischargers a grace period of up to 5 years before they must comply with certain new or more restrictive permit limits.
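The mixing zone concept can be illustrated with a simplified complete-mix mass balance: effluent above the criterion can still meet the standard at the mixing zone boundary after dilution in the receiving water. Real mixing zone analyses account for geometry, flow variability, and incomplete mixing, so this is only a sketch, and all flows and concentrations below are hypothetical.

```python
# Simplified complete-mix mass balance for a mixing zone boundary.
# Hypothetical numbers; real analyses are considerably more complex.

def boundary_concentration(c_effluent, q_effluent, c_stream, q_stream):
    """Flow-weighted concentration after complete mixing.
    Concentrations in consistent units (e.g., ng/L); flows likewise."""
    return (c_effluent * q_effluent + c_stream * q_stream) / (q_effluent + q_stream)

criterion = 1.3   # ng/L (the GLI mercury water quality standard)
mixed = boundary_concentration(
    c_effluent=10.0, q_effluent=1.0,   # discharge: 10 ng/L at 1 unit of flow
    c_stream=0.5,   q_stream=99.0)     # receiving water: 0.5 ng/L, 99 units
print(mixed, mixed <= criterion)       # 0.595 True
```

The calculation shows why mixing zones offer relief only where ample cleaner dilution flow exists, and why they are being phased out for BCCs, whose harm comes from bioaccumulation rather than instantaneous concentration.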
Similar to mixing zones, compliance schedules are also not available to new dischargers in the Great Lakes Basin and are only available for existing permits reissued or modified on or after March 23, 1997. According to state officials, Minnesota uses compliance schedules for existing dischargers to issue permits for facilities that have mercury levels above GLI water quality criteria. These schedules extend no later than March 2007, and then the GLI water quality standard of 1.3 ng/L must be met, unless a variance is granted, according to a state official. By 1998, the Great Lakes states had largely completed adopting GLI provisions in their regulatory programs by incorporating GLI standards in their environmental regulations and NPDES permit programs. Upon reviewing state regulations, however, EPA found that several states had either failed to adopt some GLI provisions or adopted provisions that were inconsistent with GLI guidance. As a result, EPA promulgated regulations applying certain GLI provisions to some states, but issues surrounding the implementation of these provisions, particularly in Wisconsin, have not been fully resolved. Further, while GLI provisions have been adopted in most state programs, a significant obstacle exists to achieving GLI’s intended goals, in that many BCCs targeted by GLI cannot be measured at the low level of GLI water quality criteria because sufficiently sensitive measurement methods do not exist. Without the ability to measure to the water quality criteria, it is difficult to accurately determine whether there is a need for a pollutant permit limit for a facility’s discharge. GLI provisions have generally been incorporated into state regulations and NPDES programs, but this did not occur within the statutory time frame; and, as a result, two lawsuits were filed against EPA to implement the requirements of the Great Lakes Critical Programs Act of 1990.
This act, which amended the Clean Water Act, required the Great Lakes states to adopt standards, policies, and procedures consistent with GLI within 2 years of its publication. The act further required EPA to issue GLI standards by the end of that 2-year period for any state that had failed to do so. EPA, however, did not issue GLI standards by the required date for those states that had failed to develop standards. Consequently, in July 1997, the National Wildlife Federation filed a lawsuit to force EPA to take action. In response, EPA negotiated a consent decree providing that it must make GLI provisions effective in any state that failed to make a submission by February 1998. EPA was never forced to take action, however, because all of the Great Lakes states adopted GLI standards into their regulations and submitted them to EPA for approval by the February deadline. For example, in July 1997, Michigan modified its administrative rules for water quality standards and added implementing procedures to the state’s administrative rules. Other states adopted GLI into their regulations for the Great Lakes Basin area of their states, and they later adopted aspects of the GLI provisions, or all of them, for the entire state. For example, according to state officials, when GLI was originally adopted by Ohio, most of its provisions only applied to the Lake Erie Basin, but in 2002, Ohio adopted GLI aquatic life criteria statewide. Further, Ohio applied GLI criteria for human health only to the Lake Erie Basin and based human health criteria for the remainder of the state on EPA national guidance. However, according to Ohio environmental officials, the two health criteria have been converging since the adoption of GLI. 
In addition to the requirements of the Great Lakes Critical Programs Act and the consent decree provisions, EPA’s GLI regulations bound the agency to publish a notice approving the submission within 90 days or to notify the state that all or part of its submission was disapproved and to identify changes required for EPA’s approval. Because EPA did not take the required actions on every state’s submission, in November 1999, the National Wildlife Federation and the Lake Michigan Federation filed a lawsuit to force EPA to take action on all Great Lakes states’ GLI submissions. EPA negotiated another consent decree providing that EPA would take the required actions by July 31, 2000, for six states—Illinois, Indiana, Michigan, Minnesota, Ohio, and Pennsylvania—and by September 29, 2000, and October 31, 2000, for New York and Wisconsin, respectively. EPA ultimately issued its final determinations for Michigan, Ohio, Indiana, Minnesota, Pennsylvania, and Illinois in August 2000. Determinations for New York and Wisconsin followed in October and November 2000, respectively. Although a few exceptions were identified, EPA determined that all the Great Lakes states had generally adopted requirements consistent with GLI; however, certain matters relating to the state submissions remained unresolved. While EPA determined that all the Great Lakes states had generally adopted requirements consistent with GLI, it disapproved certain elements of six states’ submissions as less protective than GLI. EPA promulgated final rules applying the relevant GLI provisions to the disapproved elements. For example, EPA disapproved four states’ rules relating to determining the need for permit limits on the aggregate toxicity of a facility’s discharge—termed whole effluent toxicity (WET) reasonable potential—because they were deemed inconsistent with GLI provisions.
In determining whether the states adopted policies, procedures, and standards consistent with GLI, EPA evaluated whether the states’ provisions provided at least as stringent a level of environmental protection as the corresponding provisions of the guidance. In 12 instances, EPA determined that state provisions were not as stringent or were absent. EPA then promulgated final rules specifying which state provisions it was disapproving as being inconsistent with GLI and applying the relevant GLI provisions. If the state later adopted requirements that EPA approved as being consistent with the GLI provisions, then EPA indicated that it would amend its regulations so that they would no longer apply for the state. The individual provisions disapproved by EPA vary from state to state, although the WET provisions were disapproved for four of the six states with disapproved elements. For Michigan and Ohio, the WET reasonable potential procedure was the only GLI provision that was disapproved. For Indiana, EPA disapproved its WET reasonable potential procedure and two additional provisions. Specifically disapproved were Indiana’s criteria for granting of variances from water quality standards and provisions preventing the inclusion of discharge limits in permits when a facility has applied for a variance. Illinois’ sole disapproved provision related to TMDL development while New York’s disapproved provisions related to chronic aquatic life criteria and mercury criterion for the protection of wildlife. GLI provisions disapproved by EPA are summarized in table 2. The Great Lakes states now have requirements, consistent with GLI, to follow that are either fully incorporated into their rules or that have been promulgated by EPA. 
However, in Wisconsin, the GLI provisions promulgated by EPA have not been implemented because state officials believe provisions that are not explicitly supported by Wisconsin law cannot be implemented and because material disagreements exist between state officials and EPA over the GLI provisions. This situation has resulted in delays in issuing renewals of some NPDES permits or issuing permits under state provisions that are inconsistent with GLI, according to state officials. Of the four requirements EPA found inconsistent for Wisconsin, one significant disagreement involved certain technical and scientific details relating to the consideration of intake pollutants and another involved the determination of WET reasonable potential under GLI. For the WET determination, Wisconsin Department of Natural Resources officials stated that the GLI requirements are a misapplication of statistical procedures and overly burdensome. Because of these differences in determining WET reasonable potential, Wisconsin uses both state and GLI procedures. If the Wisconsin procedures result in the need for a WET limit, but the GLI procedures do not, then the permit is issued with the WET limit. However, if GLI procedures result in the need for a WET limit, but the state procedures do not, the permit is backlogged until a solution can be negotiated. As a possible resolution to this issue, EPA has recently provided the state with a small grant to reevaluate its WET procedure and identify possible changes that would be as protective as GLI and acceptable to Wisconsin officials. While the state has not implemented WET reasonable potential provisions that are consistent with GLI, this has affected only a relatively small number of permits in the Great Lakes Basin.
The disagreement involving Wisconsin’s intake pollutant provisions, which are inconsistent with GLI, has a potentially greater impact; according to state officials, the state does not have the resources to use the more complex GLI approach. The GLI provisions for intake pollutants are important because, according to state officials, the most prevalent BCC, mercury, exists at levels exceeding its water quality criteria throughout the Great Lakes Basin. GLI provisions address the condition where a water body contains “background” levels of a pollutant that exceed the water quality criteria for that pollutant. Specifically, provisions address the discharge of pollutants that are taken in through a facility’s source or intake water and are then returned to the same water body. GLI allows facilities to discharge the same mass and concentration of pollutants that are present in their intake water—a concept of “no net addition”—provided the discharge is to the same body of water and certain other conditions are met. EPA considers this practice to be environmentally protective and consistent with the requirements of the Clean Water Act when a pollutant is simply moved from one part of a water body to another that it would have reached regardless of its use by a facility. However, EPA determined that Wisconsin’s procedures allow pollutant discharges at background levels, regardless of whether the pollutant originated from the same body of water, a different body of water, or the facility generated the pollutant itself. Further, EPA found that the state’s procedures would allow granting of a permit without discharge limits in situations where one would be required by GLI.
EPA therefore determined that the state’s procedure was inconsistent with GLI because it would allow facilities to discharge pollutants that were not previously in the water body at levels greater than the applicable water quality criteria, which EPA believed was inconsistent with the fundamental principles of GLI permitting procedures. Although the procedures were disapproved, state officials continue to disagree with EPA’s determination. The disagreement has remained unresolved since 2000, and EPA’s rule applying the GLI provisions to Wisconsin has not been followed by the state. EPA Region 5 officials stated that they have had some contacts with Wisconsin officials, but these contacts have not resolved the differences. The introduction of GLI in the Great Lakes states has produced several benefits. GLI introduced new standards and methodologies that are based on the best science available for protecting wildlife, deriving numeric criteria for additional pollutants, developing techniques to provide additional protection for mixtures of toxic pollutants, and determining the bioaccumulative properties of individual pollutants. GLI also formalized a set of practices and procedures for states to use in administering their NPDES permit programs and resolved legal challenges to provisions similar to GLI in at least one state. Through its emphasis on BCCs, GLI played a large role in stimulating efforts to address these particularly harmful and problematic toxic chemicals. GLI’s impact on state water quality programs has also extended beyond the Great Lakes Basin, as a number of states have adopted GLI standards and procedures statewide. Also, according to EPA officials, parts of GLI have been used nationally and in other states, including implementation methods in California, wildlife criteria in New Jersey, and bioaccumulation factors in EPA’s revised national guidance for deriving human health water quality criteria.
While GLI has provided benefits, developing the ability to measure pollutants at GLI water quality criteria levels remains a challenge to fully achieving GLI goals in the Great Lakes Basin. Several GLI pollutants cannot be measured near their water quality criteria, and without this ability it is difficult to determine whether a discharge limit is needed and to assess compliance. For example, if a pollutant has a water quality criterion of 4 ng/L but can only be measured at 40 ng/L, it cannot be determined whether the pollutant is exceeding the criterion unless it is at or above the measurement level, which is 10 times greater than the criterion. Therefore, the ability to accurately and reliably measure pollutant concentrations is vital to the successful implementation of GLI water quality standards. Michigan and Ohio officials identified 23 GLI pollutants where the water quality criterion is lower than the level at which the pollutant’s concentration in water can be reliably measured. In addition, for Ohio, 11 of the 22 BCCs that are the central focus of GLI cannot be measured to the level of their water quality criteria. These include two of the more prevalent BCCs—PCBs and dioxin. Currently, using EPA approved methods, PCBs can be detected only at levels around 65,000 times greater than the levels established by their water quality criteria. Minnesota officials stated that, if methods existed to measure PCBs at low levels, it might be revealed that PCBs are as much of a problem as mercury. At the time GLI was developed, it was envisioned that more sensitive analytical methods would eventually be developed to allow measurement of pollutant concentrations at or below the level established by GLI water quality criteria, which would allow for the implementation of enforceable permit limits based on GLI criteria.
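The measurement gap described above can be expressed as a small decision rule: when the quantification level sits above the criterion, any result below the quantification level is indeterminate. The numbers mirror the 4 ng/L versus 40 ng/L example in the text; the function itself is a hypothetical sketch, not a regulatory procedure.

```python
# Sketch of the compliance-assessment gap when a pollutant's water
# quality criterion lies below the method's quantification level.

def assess(sample, criterion, quantification_level):
    """Classify a sample result given the criterion and the lowest
    concentration the analytical method can reliably quantify."""
    if sample < quantification_level:
        # Cannot be quantified; with the criterion below this level,
        # neither compliance nor exceedance can be demonstrated.
        return "indeterminate"
    return "exceeds criterion" if sample > criterion else "meets criterion"

# Criterion 4 ng/L, measurable only down to 40 ng/L (10x the criterion):
print(assess(12.0, criterion=4.0, quantification_level=40.0))  # indeterminate
print(assess(60.0, criterion=4.0, quantification_level=40.0))  # exceeds criterion
```

For PCBs, where the detection level is roughly 65,000 times the criterion, essentially every realistic result falls into the indeterminate band, which is why pollutant minimization programs are used in place of enforceable numeric limits.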
Until this could be realized, EPA included a provision in GLI requiring dischargers to implement a pollutant minimization program (PMP) to increase the likelihood that the discharger is reducing all potential sources of a pollutant to get as close as possible to the water quality criteria. A PMP sets forth a series of actions by the discharger to improve water quality when the pollutant concentration cannot be measured down to the water quality criteria. The Great Lakes states’ experience with mercury illustrates the impact that having sufficiently sensitive measurement methods can have on identifying pollutant discharges from point sources. Until 1999, methods to measure mercury at low levels were generally not available. Few mercury permit limits existed, and the EPA-approved measurement methods then available were about 400 times less sensitive than the method currently used. Then, in 1999, EPA issued a newly approved analytical method with the capability to reliably measure mercury concentrations down to 0.5 ng/L, well below the lowest GLI mercury water quality criterion of 1.3 ng/L. This development had a significant impact on discharging facilities and permitting authorities, as the more sensitive measurement methods disclosed a more pervasive problem of high mercury levels in Great Lakes Basin waterbodies than previously recognized. Likewise, the new measurement methods showed that many facilities had mercury levels in their discharges exceeding water quality criteria, and, for the first time, permits could include enforceable discharge limits based on these low criteria. The result was a significant increase in the number of permits needing mercury limits and monitoring requirements. The enhanced measurement capability also resulted in the development of statewide mercury strategies, including variances, to assist facilities in implementing the new measurement methods and eventually attaining the GLI water quality criteria. 
In conjunction with the use of variances for mercury, EPA encouraged the use of PMPs so that facilities could reduce potential sources of mercury and thus move closer to meeting the GLI water quality standards. While the development of more sensitive methods for measuring other BCCs may not have as significant an impact as it did with mercury, such a development would allow a more meaningful comparison of pollutant levels with GLI water quality criteria. When GLI was developed, EPA recognized that the relatively low water quality criteria levels for many pollutants would result in instances where limits were set below levels that could be reliably measured. Water quality criteria levels were based on the best science available for protecting wildlife, aquatic species, and human health, whether or not methods were available for measuring pollutants at those levels. While EPA officials involved in developing GLI believed that measurement methods would eventually be available, developing EPA-approved methods can be a time-consuming and costly process. EPA officials involved in the development of measurement methods explained that the development process is based on needs and priorities as well as development costs and resources. EPA is currently developing a more sensitive analytical method for measuring PCBs, but EPA officials believe it will take 4 to 5 more years before it can be used because of the nature of the agency’s approval process and potential legal challenges. One class of pollutant that has not yet been included as a BCC under GLI is polybrominated diphenyl ethers (PBDEs), flame retardants containing toxic chemicals with bioaccumulative characteristics. The agency has allocated $60,000 to develop an analytical method for this class of pollutant. EPA officials did not know when a method for this class of pollutant would be approved but expected to have a better idea at the end of 2005. 
At that point, if results are promising and funding is available, EPA would validate the method. To ensure the eight Great Lakes states implement GLI consistently, EPA stated in GLI that it would undertake certain activities, including issuing a mercury permitting strategy and developing and operating a Clearinghouse for the sharing of information by states to facilitate the development and implementation of GLI water quality standards. EPA began work on the mercury strategy but abandoned the effort because of a perceived lack of interest and other agency priorities. Further, EPA has yet to fully develop the Clearinghouse. Additionally, because EPA has not collected sufficient data, the agency cannot determine whether GLI is reducing pollutant discharges into the Great Lakes, whether GLI is improving water quality, or whether overall progress is being made toward achieving GLI goals. To promote a uniform and consistent approach to the problems posed by mercury from point sources, EPA stated in GLI that it was committed to issuing a mercury permitting strategy for use by the Great Lakes states no later than 2 years after GLI’s publication. Although EPA believed that there was sufficient flexibility in GLI to handle the unique problems posed by mercury, such as variances and TMDLs, it intended to develop a mercury permitting strategy to provide a holistic, comprehensive approach by the states for addressing this pollutant. In June 1997, EPA published a draft of this strategy for public comment. The strategy described the flexibility available in developing requirements for controls on the discharge of mercury. However, the strategy was not implemented because, according to EPA officials, few substantive comments were submitted on the draft strategy, and agency resources were directed to other GLI activities. Three states—New York, Michigan, and Wisconsin—that provided comments generally supporting the effort each provided additional observations. 
For example, New York noted that the strategy offered only administrative solutions rather than tangible technical solutions to the mercury problem. Wisconsin suggested that the strategy conformed to the basic framework and principles of a previously developed state strategy and therefore thought it unnecessary to substitute EPA’s strategy for its own. In lieu of a formal strategy, EPA participated in meetings with state officials and has approved mercury permitting strategies submitted by some of the Great Lakes states. However, in the absence of an EPA strategy on implementing water quality standards for mercury, most of the Great Lakes states developed their own approaches to ensuring that facilities meet the water quality criteria established in GLI, but these approaches have been inconsistent and create the potential for states to have different mercury discharge requirements. A major goal of GLI was to ensure that the water quality standards of the Great Lakes states were consistent within this shared ecosystem; however, the mercury permitting approaches adopted by the Great Lakes states contained different requirements for mercury. For example, limits in Ohio were set at 12 ng/L based on state standards existing before adoption of GLI, and limits established in Michigan were initially set at 30 ng/L, primarily based on data from the state of Maine. EPA officials stated that while disparities exist, the overall limits are being lowered. Further, differences in states’ strategies for reducing mercury from point sources have emerged in states’ use of variances for existing facilities. Each state followed its own approach for mercury based on its needs and a consideration of the approaches taken by other Great Lakes states. While Ohio, Michigan, and Indiana based their mercury strategies on the use of streamlined processes for obtaining mercury variances, each state’s approach varies in significant ways. 
For example, Michigan uses a mercury permitting strategy under which all existing facilities in the state are granted a variance in their NPDES permits if there is reasonable potential for the mercury standard to be exceeded. The variance exempts a facility from meeting the GLI water quality standard of 1.3 ng/L and establishes this water quality standard as a goal for a PMP. The variance establishes a universal discharge limit, based on all the facilities in the state, rather than on an individual facility’s current discharge level or the level it could achieve. Michigan chose this approach after the new measurement method was approved in 1999, substantially increasing sensitivity for mercury in water, and most facilities found they could not meet the GLI water quality standard. As a result, Michigan established an interim discharge level of 30 ng/L, based on what could be achieved by the majority of the facilities in the state, and dischargers are considered to be in compliance with the mercury limit if they do not exceed the level in their permit and are implementing a PMP. Michigan has recently lowered this discharge level to 10 ng/L for permits issued or renewed in 2005. Conversely, Ohio’s mercury strategy requires dischargers to apply for a variance and submit detailed studies and action plans to identify and eliminate known sources of mercury. According to state officials, Ohio’s mercury permitting strategy allows dischargers to operate for 19 months using the new mercury measurement method to determine their discharge levels and evaluate whether they are able to comply with the water quality standard. If the discharger can comply with the GLI water quality standard, the limit is included in its permit. If the discharger cannot comply, it may request a variance. A variance establishes a monthly permit limit, based on the level currently achievable for that individual facility, and includes a required PMP. 
An annual permit limit of 12 ng/L is included as an annual discharge requirement for all facilities with a variance. According to state officials, Indiana’s NPDES permits for major facilities may contain monitoring requirements for mercury, and some will contain effluent limits that must be achieved after a 3- to 5-year compliance schedule. Additionally, Indiana developed a streamlined mercury variance rule. This rule establishes a process for dischargers to obtain temporary effluent limits, based on the level of mercury currently in their effluent, and requires dischargers to develop and implement a PMP in conjunction with a mercury variance. Other states have developed different mercury permitting approaches. Minnesota includes a discharge limit in permits, based on the standard of 1.3 ng/L and implemented through a compliance schedule allowing the facility up to 5 years to meet the limit. According to state officials, if dischargers are unable to meet the limit at the expiration of the compliance schedule, they will be required to apply for a variance on an individual basis. State officials also reported that Minnesota recently developed a draft statewide TMDL for mercury as a response to the mercury problem. Wisconsin has not granted variances, but it has granted PMPs for about 20 facilities that are unable to comply with the mercury standard. According to a Wisconsin official, the state considers granting PMPs without a limit to be, in essence, a variance. However, it is referred to as an "alternative mercury limitation," and the state official explained that, if it were an official variance, the discharge limit would actually be in the permit, and the variance would be a part of that limit. New York and Pennsylvania only recently began using the more sensitive mercury testing method and therefore have yet to address how facilities will be granted variances. 
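The Ohio-style flow described above (monitor with the sensitive method; include the GLI limit if the facility can comply; otherwise grant a variance with a facility-specific limit and a required PMP) can be sketched in a few lines. This is an illustrative simplification, not EPA or state code, and the function and field names are hypothetical:

```python
def permit_decision(measured_ng_l: float, criterion_ng_l: float = 1.3) -> dict:
    """Simplified sketch of an Ohio-style mercury permitting decision.

    If monitoring shows the facility can meet the GLI criterion, the
    criterion becomes an enforceable permit limit; otherwise a variance
    sets the limit at the level currently achievable by that facility
    and requires a pollutant minimization program (PMP).
    """
    if measured_ng_l <= criterion_ng_l:
        return {"limit_ng_per_L": criterion_ng_l, "variance": False, "pmp": False}
    # Cannot comply: the variance limit is based on the facility's own
    # currently achievable level, and a PMP is required.
    return {"limit_ng_per_L": measured_ng_l, "variance": True, "pmp": True}

print(permit_decision(0.9))   # complies: enforceable 1.3 ng/L limit
print(permit_decision(8.0))   # variance at 8.0 ng/L with required PMP
```

Michigan's approach differs chiefly in the second branch: instead of a facility-specific variance limit, all facilities under variance share a universal interim level (30 ng/L, later 10 ng/L).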
To promote a more consistent and shared approach to developing water quality standards among the Great Lakes states, EPA stated in GLI that Region 5 would develop a GLI Clearinghouse. As envisioned in GLI, this Clearinghouse would be a database containing all the information on the criteria and data used by the Great Lakes states in developing water quality standards. The Clearinghouse was to be developed in cooperation with EPA Headquarters, Regions 2 and 3, and the Great Lakes states. As envisioned, data included in the Clearinghouse could be quickly shared among the states to assist them in developing or updating numeric water quality criteria for toxic chemicals for aquatic life, wildlife, and human health. It could also be used to share data on any new pollutants that might be designated as BCCs. When EPA developed GLI, it assumed that more chemicals would emerge as BCCs in the future and require development of additional water quality standards. GLI allows the Great Lakes states to designate additional chemicals for BCC controls without EPA sponsoring a public review and comment process. EPA was concerned that inconsistencies could arise among states when they identified future BCCs and believed the Clearinghouse would minimize this possibility. As envisioned in GLI, EPA Region 5 would operate the Clearinghouse, and if new information indicated a pollutant was a potential BCC, this information would be reviewed by EPA and the states and placed in the Clearinghouse to alert all the other Great Lakes states. Once alerted, states could then notify the public of any revisions to their water quality standards or permit requirements. The development of the Clearinghouse did not proceed as envisioned in the GLI. The Clearinghouse development effort was initiated in 1996, and EPA began entering data into the database at that time. However, the database was not available for use by the states until recently because of other EPA priorities. 
Meanwhile, states developed their own water quality criteria for some GLI pollutants without centralized access to information from other states, likely resulting in longer development times and the potential for inconsistencies among states. According to Minnesota state officials, without a GLI Clearinghouse, developing numeric criteria has been a problem, since information on toxic chemicals or criteria is not readily available from other Great Lakes states. Currently, Minnesota is not close to developing criteria for all GLI pollutants. Officials stated that the availability of the Clearinghouse will help them in developing these criteria. Ohio officials expressed disappointment that EPA had not developed the Clearinghouse so many years after the guidance was issued, given its importance as a resource for developing water quality criteria. EPA renewed its efforts to complete the development of the Clearinghouse in late 2004. In early 2005, EPA Region 5 officials held conference calls with officials from the eight Great Lakes states, resulting in an agreed approach for jointly populating and maintaining the Clearinghouse. It is unclear, however, whether the Clearinghouse was jointly developed as planned with the active participation of EPA Regions 2 and 3, headquarters, and the eight Great Lakes states. As of April 2005, the Clearinghouse was still in the testing stage, and, according to EPA Region 5 officials, by July 2005 all states had access to its information. However, the states are currently unable to make additions or modifications to the data in the Clearinghouse. States were also providing comments to EPA Region 5 on the Clearinghouse’s operation, and EPA planned to make modifications based on these comments. EPA has yet to determine the most efficient approach for maintaining and updating information in the database. 
Until the database is fully operational and utilized, however, EPA cannot be assured that the Great Lakes states have adequate information for developing and updating consistent water quality standards. While monitoring the impact of GLI on water quality and pollutant loadings may be difficult and is not required by the Critical Programs Act or GLI, it is important to determine whether progress is being made toward GLI goals and the virtual elimination of toxic substances in the Great Lakes Basin. Currently, the effect of GLI in improving water quality and reducing loadings of toxic pollutants is unclear because EPA has been unable to assess GLI’s impact with existing data sources and has not gathered additional information to monitor progress on plans to reduce future loadings. EPA’s primary data source for the NPDES permits program is the Permit Compliance System (PCS), an automated system used for tracking compliance with individual permits. Information is entered into the system by states administering the program, and the system tracks when a permit is issued and expires, how much a facility is allowed to discharge, and what a facility has discharged. The system is useful for identifying noncompliance with GLI-based effluent limits by major NPDES dischargers through quarterly noncompliance reports. However, the system is inadequate for determining whether GLI has reduced pollutant loadings. EPA Region 5 officials attempted to use PCS to estimate trends in point source loadings for specific pollutants in the Great Lakes Basin, but frequent errors occurred because of system limitations. These errors resulted from missing or inaccurate data, which prevented a clear estimate of pollutant loadings by facilities. For example, discharge quantities for some pollutants were reported as zero in instances when monitoring was not required, resulting in understated total discharges. 
In addition, PCS data are primarily for major facilities, so calculated pollutant loadings do not reflect the sizeable universe of minor facilities. Inconsistencies in PCS also arise from the way state discharge monitoring report data are entered into the system. Because of these data limitations, EPA’s attempt to identify trends in point source loadings did not produce firm conclusions; rather, it produced only speculation as to why actual loadings increased or decreased in certain states. In addition, loading data comparing the years 1999 through 2000 to the years 2000 through 2001 were considered to cover too short a time frame for comparative analysis, since most of the permits had not been modified or reissued to reflect the new GLI standards during these periods. Further hampering this effort was a lack of baseline data for loadings before GLI, which prevented comparisons between pollutant loadings before and after GLI implementation. The overall limitations of PCS to support the NPDES program were first recognized by EPA as an agency weakness in 1999. While EPA has attempted to modernize the system, the costs and time to complete the project have escalated significantly, as reported by the EPA Office of Inspector General. As of June 2005, the modernization project had not been completed. Officials from EPA Region 5 made two other attempts to determine GLI’s impact on Great Lakes water quality. One attempt involved using Toxics Release Inventory (TRI) data. However, EPA officials stated that for a number of reasons TRI did not lend itself to assessing the changes in water quality attributable to GLI. For example, TRI does not include information from publicly owned treatment works (POTW). Based on this effort, EPA concluded that any improvements in water quality resulting from GLI could not be isolated from the many other initiatives undertaken to improve water quality in the Great Lakes Basin. 
A second effort is currently under way and involves comparing a sample of individual permits before and after GLI implementation to determine its impact on permit limits. However, this effort has yet to yield preliminary results. Further, even when this effort is completed, EPA will only be able to draw limited conclusions about how certain permit requirements have changed, and it may incorrectly assume that the changes were a result of implementing GLI. This latest effort will not provide ongoing monitoring of the impact of GLI, and EPA officials stated that in order to do a good analysis of GLI, all relevant data would have to be stored in a central database for analysis. Currently, different types of information are stored in a variety of places. In addition to attempts by EPA Region 5 to determine GLI’s impact, as part of its oversight of the NPDES program, regional staff review a sample of major NPDES permits issued by the six Great Lakes states in the region. The criteria for selecting permits for review vary from year to year and are typically based on issues that concern EPA staff. One factor in the selection of permits is whether the facility discharges within the Great Lakes Basin, thus requiring compliance with GLI. EPA officials stated that permits are reviewed in accordance with applicable federal rules and policies, including GLI implementation procedures. For selected permits issued by the state of Michigan, EPA specifically reviews the implementation of GLI requirements. For the other states, compliance reviews addressing GLI requirements are being phased in and will take significant time to fully implement, according to EPA officials. EPA’s reviews have not included a determination of whether GLI is being implemented consistently among states; rather, they focus on issues of compliance. Finally, EPA is not gathering information on how the implementation of PMPs or other GLI provisions is reducing pollutant discharges in the basin. 
EPA officials in Region 5 stated that GLI was intended to make the standards and goals of the Great Lakes states more consistent and that implementing an elaborate monitoring scheme was not its intent. Without some type of monitoring, however, it is difficult to determine whether the standards and goals are having the desired environmental effect and whether GLI is being implemented consistently. This is particularly important because the use of flexible implementation procedures, such as variances and PMPs, adds uncertainty as to when facilities' discharge levels will ultimately attain GLI water quality standards. For PMPs, EPA Region 5 and the states cooperatively developed mercury PMP guidance for POTWs. This guidance was finalized in November 2004 and provides information on what elements should be in PMPs, including reporting of progress by the facility to the state in achieving PMP goals. The reported information, however, is not reviewed by EPA, and, therefore, the agency cannot determine what overall progress is being achieved. When EPA reviews a state-issued permit under a compliance review, the agency checks only whether PMP requirements are recorded appropriately in the permit; it does not determine whether progress is being made to reduce pollutants under PMPs. EPA Region 5 officials stated that they could get a better understanding of GLI implementation if PMP data were collected and analyzed. Region 5 has not yet initiated a regional review process for these programs, but it will be developing a strategy to do so in its NPDES Program Branch. This strategy would involve working with the states on review criteria and compliance determination issues. Region 5 officials stated that their efforts cover the six states in their region. They do not have responsibility to gather information on PMPs or other activities regarding GLI implementation for New York or Pennsylvania, which are in EPA Regions 2 and 3, respectively. 
While GLI has limited potential to improve overall water quality in the Great Lakes Basin because of its focus on point source pollution, it is important that GLI’s goals be achieved because they assist in the virtual elimination of toxic pollutants called for in the GLWQA. Several factors, however, have undermined progress toward achieving GLI’s goal of implementing consistent water quality standards. First, EPA has taken steps to implement GLI by ensuring that states adopt GLI standards or by issuing federal rules in the absence of state standards but has yet to resolve long-standing issues with the state of Wisconsin regarding the state’s adoption and implementation of GLI provisions. Second, EPA chose not to issue a mercury permitting strategy that it committed to do in GLI, and subsequently mercury was addressed in NPDES permits in different ways. Third, EPA’s efforts to complete the development of the GLI Clearinghouse have only recently been renewed, reflecting a lethargic approach to implementing actions it committed to in GLI. Finally, while EPA has made efforts to assess GLI’s impact on water quality, we believe additional efforts are needed to obtain information on the progress in implementing GLI and on reducing pollutant discharges from point sources in the Great Lakes Basin. In particular, information is needed to gauge dischargers’ progress in using PMPs to address pollutants that are exceeding GLI standards. 
To better ensure the full and consistent implementation of the Great Lakes Initiative and improve measures for monitoring progress toward achieving GLI’s goals, we are recommending that the EPA Administrator direct EPA Region 5, in coordination with Regions 2 and 3, to take the following three actions: issue a permitting strategy that ensures a more consistent approach to controlling mercury by the states; ensure the GLI Clearinghouse is fully developed, maintained, and made available to the Great Lakes states to assist them in developing water quality standards for pollutants covered by GLI; and gather and track information that can be used to assess the progress of implementing GLI and the impact it has on reducing pollutant discharges from point sources in the Great Lakes Basin. In particular, EPA should consider collecting better information on the impact of discharger programs to minimize pollutants that are exceeding GLI standards. In addition, we recommend that the EPA Administrator direct EPA Region 5 to take the following action: increase efforts to resolve the disagreements with the State of Wisconsin over the implementation of provisions to ensure the equitable and timely implementation of GLI among all Great Lakes states. GAO provided EPA with a draft of this report for its review and comment. The agency generally agreed with the findings and recommendations in the report but stated that our draft report had overlooked significant results or benefits of GLI, such as establishing a consistent and scientifically sound method to derive point source permit limits for mixtures of toxicants. We acknowledge the many benefits of GLI in our report; however, our review focused on the potential impact of GLI on water quality, the implementation of GLI, and the steps taken by EPA to ensure consistent implementation and to assess progress toward achieving GLI goals. 
EPA also stated that while our report recognizes that many of the Great Lakes water quality problems are due to nonpoint sources, the benefits from GLI point source implementation procedures are not fully recognized in the report. Further, EPA stated that it was never expected that GLI would address nonpoint source discharges and that it is not authorized to develop and implement programs for nonpoint discharges. However, our report recognizes the importance of controlling point source pollution and notes that, under the GLWQA of 1978, the United States and Canada agreed to a policy of prohibiting harmful pollutants in toxic amounts and virtually eliminating the discharge of such pollutants. GLI was an effort by the United States to further control these substances. Moreover, as we note above, our review focused on the potential impact of GLI on water quality, and we therefore note as a factual matter in our report that nonpoint sources are not addressed. Regarding the differences in the Great Lakes states’ approaches to mercury and our recommendation for EPA to develop a mercury permitting strategy, the agency stated that some differences exist in mercury requirements for individual facilities. However, EPA did not believe these differences represented an unacceptable level of inconsistency and believed that state approaches were similar. Further, EPA compared pre-GLI standards to post-GLI standards to illustrate the consistency in addressing mercury. While consistent standards are an expected outcome of GLI, the guidance does not ensure consistent implementation, particularly given the use of variances and PMPs by states in lieu of compliance with the stringent GLI water quality standards. 
EPA Region 5 has issued guidance for consistency in the development of PMPs by the states for publicly owned treatment works, but states are not required to follow the guidance, and the regional guidance does not apply to the two Great Lakes states that are outside the geographic boundaries of Region 5. EPA further stated that mercury variances are temporary measures allowing time to transition to the stringent GLI standards. However, facilities with NPDES permits can apply to have a variance renewed with a permit renewal; therefore, variances can be approved by the states for a 5-year period, which may be in addition to a previous 5-year variance. It is also not evident that time frames exist for when facilities are to meet these stringent GLI standards. EPA stated that a mercury permitting strategy would not improve consistency and that, rather than focusing on a strategy, it would work with the states and provide assistance on the most effective approaches for reducing mercury loadings by point source dischargers. The agency, however, committed itself in the GLI to developing a strategy. An overall goal of GLI is to have consistency among the Great Lakes states, and mercury is clearly the most important pollutant regulated in NPDES permits. Regarding our recommendation on the GLI Clearinghouse, EPA stated that the Clearinghouse has a vital role to play in GLI implementation. In early 2005, Region 5 and the eight Great Lakes states reached agreement on populating and maintaining the Clearinghouse. After further information updates and revisions by EPA, the states will review the Clearinghouse for accuracy and thoroughness, and then it will be functional, according to EPA. 
Regarding our recommendation on the need to gather and track information to assess the implementation of GLI, EPA stated that it will be working with the states to develop PMP oversight tools and will be tracking the permits issued for mercury requirements and biosolids data regarding trends in mercury levels. For resolving its differences with the state of Wisconsin regarding GLI, EPA stated that Region 5 is working with the state to resolve outstanding issues. Further, the state is evaluating its whole effluent toxicity reasonable potential procedures, and then EPA will work with the state to ensure that its procedures are at least as protective as EPA’s. EPA also provided specific comments on the draft report, and we have made changes in our report to reflect many of these comments. The full text of EPA’s comments is included in appendix III. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to appropriate Congressional Committees, the EPA Administrator, and various other federal and state departments and agencies. We also will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions, please call me at (202) 512-3841. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. To determine GLI’s focus and its potential to affect water quality in the Great Lakes Basin, we analyzed the published final rule on the Great Lakes Initiative (GLI), including its methodologies, policies, and procedures. 
Specifically, we reviewed the flexible implementation procedures allowed under GLI, such as those allowed for mercury, the most common bioaccumulative chemical of concern (BCC) regulated in permits for point sources of pollution. We also obtained opinions on GLI’s impact from officials representing environmental organizations that were involved in the formulation of GLI, such as the Lake Michigan Federation and the Great Lakes Water Quality Coalition. We also gathered and analyzed available data on the major sources of toxic pollutants in the Great Lakes Basin from water quality permit officials in the Environmental Protection Agency’s (EPA) Region 5 and state environmental agency officials in each of the Great Lakes states—Illinois, Indiana, Ohio, Michigan, Minnesota, New York, Pennsylvania, and Wisconsin. Specifically, for each state agency, we obtained information from state National Pollutant Discharge Elimination System (NPDES) permit databases regarding the location and number of NPDES permits covered under GLI in each state, including those permits that included BCCs. We questioned officials knowledgeable about the data and the systems that produced them and determined the data were sufficiently reliable for the purposes of this report. In two instances where we noticed inconsistencies in the data, we verified the corrections with state officials. To determine the status of GLI’s adoption by the states, we analyzed the Clean Water Act, as amended by the Great Lakes Critical Programs Act of 1990, and its requirements for the Great Lakes states to adopt standards, policies, and procedures consistent with GLI. We also gathered and analyzed documentation from EPA on its approval process for states’ submissions of their standards, policies, and procedures and whether they reflected GLI requirements, and we interviewed EPA Region 5 and Great Lakes states’ officials on any unresolved matters regarding EPA’s rulings on state submissions. 
To identify any challenges that might exist to achieving GLI’s intended goals, we reviewed the water quality criteria established for pollutants in the GLI, particularly BCCs, and interviewed EPA Region 5 and state officials to determine how many pollutants covered by GLI did not yet have methods and water quality criteria developed. We also collected and analyzed data from officials of EPA’s Office of Science and Technology to determine EPA’s current efforts in developing new methods for BCCs. To identify the steps EPA has taken for ensuring the full and consistent implementation of GLI, we reviewed the GLI to see what actions EPA had committed itself to taking. We obtained information from EPA Region 5 and EPA Headquarters on the status of these activities, such as the establishment of a database clearinghouse and a mercury permitting strategy. We collected and analyzed opinions from several of the eight Great Lakes states on the need for these GLI requirements and any consequences resulting from delays in their implementation. To determine the steps EPA has taken for assessing progress toward achieving GLI’s goals, we interviewed EPA Region 5 officials on the region’s processes for determining progress made under GLI in improvements to water quality, including the agency’s use of available databases in this exercise, and its monitoring of the states’ implementation of GLI. We performed our work from October 2004 to June 2005 in accordance with generally accepted government auditing standards. John Stephenson (202) 512-3841 ([email protected]) In addition to the individual named above, Kevin Averyt, Greg Carroll, John Delicath, John Wanska, and Amy Webbink made key contributions to this report.
The virtual elimination of toxic pollutants in the Great Lakes is a goal shared by the United States and Canada. While some progress has been made, pollution levels remain unacceptably high. The Great Lakes Initiative (GLI) requires stringent water quality standards for many pollutants in discharges regulated by states administering National Pollutant Discharge Elimination System (NPDES) permit programs. As requested, this report examines the (1) GLI's focus and potential impact on water quality in the Great Lakes Basin, (2) status of GLI's adoption by the states and any challenges to achieving intended goals, and (3) steps taken by the Environmental Protection Agency (EPA) for ensuring full and consistent implementation of GLI and for assessing progress toward achieving its goals. GLI has limited potential to improve overall water quality in the Great Lakes Basin because it primarily focuses on regulated point sources of pollution, while nonpoint sources, such as air deposition and agricultural runoff, are greater sources of pollution. GLI's potential impact is further limited because it allows the use of flexible implementation procedures, such as variances, whereby facilities can discharge pollutants at levels exceeding stringent GLI water quality standards. Finally, many of the chemical pollutants regulated by GLI have already been restricted or banned by EPA and have a limited presence in point source discharges. By 1998, the eight Great Lakes states had largely adopted GLI water quality standards and implementation procedures in their environmental regulations and NPDES programs. However, EPA determined that some states had failed to adopt some GLI provisions or had adopted provisions that were inconsistent with GLI, and EPA promulgated rules imposing GLI standards. Wisconsin officials, however, believe that the state cannot implement standards that are not explicitly supported by state law, and disagreements with EPA over the rules remain unresolved. 
As a result, GLI has not been fully adopted or implemented in the state. Finally, a major challenge to fully achieving GLI's goals remains because methods for measuring many pollutants at the low levels established in GLI do not exist. Consequently, some pollutants cannot be regulated at these levels. EPA has not ensured consistent GLI implementation by the states, nor has the agency taken adequate steps toward measuring progress. For example, EPA did not issue a mercury permitting strategy to promote consistent approaches to the problems posed by mercury, as it stated it would in GLI. In the absence of a strategy, states developed permits for mercury that vary from state to state. EPA's attempts to assess GLI's impact have been limited because the data and information needed to determine dischargers' progress in reducing pollutants are inadequate or have not been gathered.
Although states and localities have traditionally held the major responsibility for preventing and controlling crime in the United States, a significant aspect of the federal government’s role has been its provision of financial and other assistance to state and local law enforcement agencies. Grants are the largest funding source of crime technology assistance to state and local law enforcement agencies. Of the three agencies we reviewed, only Justice provided grants for crime technology purposes. These grants fall into one of two categories: single-purpose and multipurpose. Single-purpose grants were clearly or completely for crime technology assistance. For example, the purpose of the State Identification Systems Grant Program, which was authorized by the Antiterrorism and Effective Death Penalty Act of 1996 (P.L. 104-132), is to enable states to develop computerized identification systems that are compatible and integrated with certain FBI databases. Multipurpose grants can be used by state and local law enforcement agencies for a number of purposes including, but not limited to, crime technology. For example, the Anti-Drug Abuse Act of 1986 (P.L. 99-570) established formula and discretionary grants to state and local law enforcement agencies for addressing crime related to illegal drugs and other purposes. The Anti-Drug Abuse Act of 1988 (P.L. 100-690) expanded the grant programs and named them the Edward Byrne Memorial State and Local Law Enforcement Assistance Programs. Byrne programs are to be used to improve criminal justice systems, to enhance anti-drug programs, and for other purposes. More recently, the Violent Crime Control and Law Enforcement Act of 1994 (P.L. 103-322) authorized $30.2 billion for law enforcement and crime prevention measures. 
In addition to authorizing additional funding for Byrne programs through fiscal year 2000, the 1994 Act established what is known as the Office of Community Oriented Policing Services within Justice and authorized $8.8 billion in related funding. Also, the Omnibus Consolidated Rescissions and Appropriations Act of 1996 (P.L. 104-134) provided funding for the Local Law Enforcement Block Grants Program, which provides funding to state and local governments to reduce crime and improve public safety. Nongrant federal assistance—such as providing state and local law enforcement with access to support services and systems—may be specifically authorized by legislation. For example, use of the Treasury Department’s Federal Law Enforcement Training Center by units of state and local government is authorized by P.L. 90-351, as amended. There is no formal or standardized definition of “crime technology assistance” used throughout the federal government. Therefore, for purposes of this review, we developed a working definition and discussed it with Justice, Treasury, and ONDCP officials, who indicated that the definition was appropriate. Also, because there is no overall requirement for agency accounting systems to track crime technology assistance, we largely relied upon agency estimates. We did not fully or independently verify the accuracy and reliability of the funding data provided by agency officials. However, to help ensure data quality, among other steps, we (1) obtained information on and reviewed the processes used by agency officials to calculate the estimated amounts of crime technology assistance and (2) compared agency responses to primary source documents and attempted to reconcile any differences with agency officials. Also, as mentioned above, the funding totals presented in this report are conservative, especially regarding multipurpose grants. 
That is, because agency accounting systems did not specifically track crime technology assistance, we included only amounts that could be reasonably estimated by agency officials or by our reviews of agency information. Appendix I presents more details about our objectives, scope, and methodology. We requested comments on a draft of this report from the Attorney General, the Secretary of the Treasury, and the Director of ONDCP. Justice’s, Treasury’s, and ONDCP’s oral and written comments are discussed near the end of this letter. We performed our work from August 1998 to April 1999 in accordance with generally accepted government auditing standards. As table 1 shows, for the three federal agencies we reviewed, single-purpose grants ($255.1 million) and multipurpose grants ($746.3 million) were the major types of crime technology assistance provided to state and local law enforcement agencies during fiscal years 1996 through 1998. We identified 13 relevant single-purpose grant programs. Of these, one program—the National Criminal History Improvement Program—represented about 58 percent of the $255.1 million total. This program, funded at about $147.2 million for fiscal years 1996 through 1998, wholly involved crime technology assistance in that its purpose is to help states upgrade the quality and completeness of criminal history records and to increase compatibility with and access to FBI criminal information databases. The other 12 single-purpose grant programs represented about 42 percent ($107.9 million) of the $255.1 million total and included programs such as a DNA Laboratory Improvement Program and the National Sex Offender Registry. We identified 10 multipurpose grant programs that had some purposes involving crime technology assistance to state and local law enforcement agencies. 
Of these 10, the following 2 programs accounted for about 88 percent of the estimated $746.3 million total: Making Officer Redeployment Effective Grants ($466.1 million): This program, administered by the Office of Community Oriented Policing Services, allows law enforcement agencies to purchase equipment and technology to help expand the time available for community policing by current law enforcement officers. Byrne Formula Grant Program ($188.0 million): Three purpose areas in this program—purpose areas 15(a), 15(b), and 25—specifically involved crime technology assistance to state and local law enforcement agencies during fiscal years 1996 through 1998. Purpose area 15(a) is for improving drug-control technology; purpose area 15(b) is for criminal justice information systems (including a 5-percent set-aside for improving criminal justice records); and purpose area 25 is for developing DNA analysis capabilities. The other eight multipurpose grant programs represented 12 percent ($92.2 million) of the estimated $746.3 million total funding through multipurpose grants for crime technology assistance. Among many other uses, grant recipients used these funds to develop (1) information management systems for drug court programs and (2) integrated computer systems for tracking domestic violence cases. Appendix II provides more details about Justice’s single-purpose and multipurpose grants that provided crime technology assistance to state and local law enforcement agencies during fiscal years 1996 through 1998. The second largest category of estimated crime technology assistance, as table 1 shows, was access to support services and systems provided by Justice and Treasury. Of these two agencies, Justice was the more significant provider, with an estimated $146.6 million in assistance to state and local law enforcement agencies, or about 90 percent of the $162.5 million total during fiscal years 1996 through 1998. 
For these 3 fiscal years, about 62 percent of Justice’s $146.6 million in assistance was attributable to 4 of the 21 relevant support services or systems we identified: Regional Information Sharing Systems ($50.3 million): Funded by the Bureau of Justice Assistance, six regional criminal intelligence centers provide state and local law enforcement with access to an Intelligence Database Pointer, a National Gang Database, a secure intranet, and other support services. Combined DNA Index System ($16.7 million): The FBI’s index system contains DNA records of persons convicted of crimes. State and local crime laboratories can use the system to store and match DNA records. Law Enforcement On-Line ($12.6 million): Managed by the FBI, this intranet links the law enforcement community throughout the United States and supports broad, immediate dissemination of information. National Crime Information Center ($11.1 million): Managed by the FBI, this is the nation’s most extensive criminal justice information system. The system’s largest file, the Interstate Identification Index, provides access to millions of criminal history information records in state systems. Justice’s other 17 support services and systems represented 38 percent ($55.9 million) of the agency’s $146.6 million total assistance in this category. Two examples are (1) the Drug Enforcement Administration’s National Drug Pointer Index, which has an automated response capability to determine if a case suspect is under active investigation by a participating law enforcement agency and (2) the National Institute of Justice’s National Law Enforcement and Corrections Technology Centers, which offer technology and information assistance to law enforcement and corrections agencies by introducing promising technologies and providing technology training. 
Appendix III presents more detailed information about Justice’s support services and systems that provided crime technology assistance to state and local law enforcement agencies during fiscal years 1996 through 1998. Treasury’s support services and systems provided $15.9 million of assistance during this period, as table 1 shows. This assistance involved a total of 29 support services and systems. Of these, 14 were provided by one Treasury component—the Bureau of Alcohol, Tobacco and Firearms (ATF), which accounted for $13.4 million (84 percent) of Treasury’s $15.9 million total assistance in this category. Further, as indicated next, 3 of ATF’s 14 services and systems accounted for about two-thirds of the ATF’s support to state and local law enforcement: Arson/Explosives Incident System ($3.5 million): State and local agencies are able to query the system for information about component parts, stolen explosives, and device placement. Accelerant Detection Analysis ($2.7 million): This service involves the laboratory analysis of fire debris to detect and identify flammable liquids potentially used as accelerants in an incendiary fire. Firearms Tracing System ($2.5 million): The National Tracing Center traces firearms (from the manufacturer to the retail purchaser) for law enforcement agencies. Treasury’s other 15 relevant support services and systems accounted for the remaining 16 percent ($2.5 million) of the $15.9 million total assistance. Two examples are (1) the Financial Crimes Enforcement Network’s Project Gateway, which helps state and local law enforcement agencies combat money laundering and other financial crimes and (2) the Secret Service’s Questioned Document Branch, which maintains a forensic information system for analyzing handwriting samples. 
Appendix IV presents more detailed information about Treasury’s support services and systems that provided crime technology assistance to state and local law enforcement agencies during fiscal years 1996 through 1998. Based on information from Justice, Treasury, and ONDCP officials, we specifically identified only one established, relevant in-kind transfer program—i.e., an ONDCP technology transfer program that began in fiscal year 1998. According to ONDCP officials, the program involved 18 projects or systems that fit our definition of crime technology assistance, but only 15 of these were transferred to state and local law enforcement agencies in fiscal year 1998. For that year, as table 1 shows, federal funding for the transferred projects or systems totaled $13 million. Examples of transferred technologies include (1) a vapor tracer device for detecting and identifying small quantities of narcotics and explosives and (2) a secure, miniaturized multichannel audio device for inconspicuous use during covert operations. Appendix V provides further details about ONDCP’s technology transfer program. Justice and Treasury officials told us that their agencies routinely make excess equipment available to other governmental and nongovernmental users through the General Services Administration. However, the officials said they had no readily available information regarding how much of the excessed equipment was crime technology related and was subsequently received by state and local law enforcement agencies. Similarly, General Services Administration officials told us that their agency’s accounting systems could not provide this type of information. Treasury officials noted that there have been a few isolated or nonroutine instances of direct transfers, as when the Secret Service directly excessed some used automated data processing and communications equipment to state and local law enforcement agencies in 1998. 
We did not include these isolated or nonroutine transfers in the funding amounts presented in this report. Officials from Justice’s Office of Justice Programs and Office of Audit Liaison met with us on May 13, 1999, and provided the following comments on a draft of this report: In their written comments, Office of Justice Programs officials provided several technical comments and clarifications, mostly centered on funding amounts for certain National Institute of Justice programs presented in table II.1 in appendix II. As appropriate, we made adjustments to table II.1 and also added some clarifying language to our scope and methodology (app. I). The Director, Office of Audit Liaison, told us that the draft report had been reviewed by officials in other relevant Justice components (including INTERPOL–U.S. National Central Bureau and the Marshals Service) and the Office of Community Oriented Policing Services, and that these officials had no comments. Also, in written comments, the Drug Enforcement Administration, the FBI, and the Immigration and Naturalization Service offered technical comments and clarifications, which have been incorporated in this report where appropriate. Treasury officials met with us on May 12, 1999, and provided oral comments. More specifically: ATF, Customs Service, Federal Law Enforcement Training Center, and Financial Crimes Enforcement Network officials expressed agreement with the information presented in the report. Secret Service officials had technical comments and clarifications, which have been incorporated in this report where appropriate. A Treasury Office of Finance and Administration official indicated that, although not in attendance at the May 12 meeting, IRS officials had reviewed the draft report and had no comments. On May 13, 1999, the Director of ONDCP’s Counterdrug Technology Assessment Center met with us and expressed agreement with the information presented in the report. 
To provide additional perspective, he suggested that appendix V reflect the fact that, as of December 1998, the 15 relevant ONDCP projects or systems involved a total of 202 recipient state and local law enforcement agencies. We added this information to the appendix. As arranged with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days after the date of this letter. We are sending copies of this report to Senator Orrin G. Hatch, Chairman, and Senator Patrick J. Leahy, Ranking Minority Member, Senate Committee on the Judiciary; Representative Henry J. Hyde, Chairman, and Representative John Conyers, Jr., Ranking Minority Member, House Committee on the Judiciary; The Honorable Janet Reno, Attorney General; The Honorable Robert E. Rubin, Secretary of the Treasury; The Honorable Barry R. McCaffrey, Director of ONDCP; and The Honorable Jacob Lew, Director, Office of Management and Budget. Copies will also be made available to others upon request. The major contributors to this report are listed in appendix VI. If you or your staff have any questions about this report, please contact me at (202) 512-8777. Senator Mike DeWine requested that we identify crime technology assistance provided by the federal government to state and local law enforcement agencies. Specifically, for fiscal years 1996 through 1998, Senator DeWine requested that we identify the types and amounts of such assistance provided by the Departments of Justice and the Treasury and the Office of National Drug Control Policy (ONDCP). As agreed with the requester, we categorized the assistance into three types: (1) direct funding or grants; (2) access to support services and systems, such as automated criminal history records and forensics laboratories; and (3) in-kind (no-cost) transfers of equipment or other assets. 
Initially, we conducted a literature search to determine whether there was a commonly accepted definition of “crime technology assistance.” Since our search did not yield a definition, we developed our own by reviewing (1) a then-pending bill (S. 2022) related to crime technology assistance, which Senator DeWine introduced during the second session of the 105th Congress and which has since been enacted into law, and its legislative history; (2) his request letter to us and subsequent discussions with his staff; (3) the General Services Administration’s Catalog of Federal Domestic Assistance, a reference source of federal assistance programs, including crime control programs; and (4) Congressional Research Service reports on federal crime control assistance. Accordingly, our definition of crime technology assistance was the following: “Any technology-related assistance provided to state and local law enforcement agencies, including those of Indian tribes, for establishing and/or improving (1) criminal justice history and/or information systems and specialized support services or (2) the availability of and capabilities to access such services and systems related to identification, information, communications, and forensics.” We discussed this definition with Justice, Treasury, and ONDCP officials, who indicated that the definition was appropriate and would be helpful in their efforts to compile the requested information. To determine the types and amounts of crime technology-related assistance for fiscal years 1996 through 1998, we reviewed (1) excerpts from the Budget of the United States Government, Appendices and Analytical Perspectives for fiscal years 1998, 1999, and 2000, (2) agencies’ fiscal year budget requests, (3) appropriations acts, and (4) budget execution reports provided by the Office of Management and Budget. 
Also, as noted above, we reviewed the General Services Administration’s Catalog of Federal Domestic Assistance (annual editions for 1996 through 1998) and Congressional Research Service reports on federal crime control assistance. We contacted the Justice, Treasury, and ONDCP components most likely to provide crime technology-related assistance. Within Justice, we contacted the Drug Enforcement Administration (DEA); the Federal Bureau of Investigation (FBI); the Immigration and Naturalization Service; the INTERPOL–U.S. National Central Bureau; the U.S. Marshals Service; the Office of Community Oriented Policing Services; and the Office of Justice Programs and its components (i.e., the Bureau of Justice Assistance, the Bureau of Justice Statistics, the Drug Courts Program Office, the National Institute of Justice, the Office of Juvenile Justice and Delinquency Prevention, and the Violence Against Women Grants Office). Within Treasury, we contacted the Bureau of Alcohol, Tobacco and Firearms (ATF); the Customs Service; the Federal Law Enforcement Training Center; the Financial Crimes Enforcement Network; the Internal Revenue Service (IRS); and the Secret Service. Within ONDCP, we contacted the Director of the Counterdrug Technology Assessment Center. From Justice, Treasury, and ONDCP, we obtained and reviewed relevant program descriptions and related explanatory materials, such as catalogs and program plans. Specifically, we reviewed (1) descriptions of grant programs and support services and systems and (2) corresponding funding data prepared by agency officials for this review. Generally, in this report, our references to state and local law enforcement agencies may include recipients of federal assistance in (1) the 50 states; (2) the District of Columbia; (3) the Commonwealth of Puerto Rico; and (4) where applicable, the U.S. territories of American Samoa and Guam, the Commonwealth of the Northern Mariana Islands, and the U.S. Virgin Islands. 
During our review of grant programs, we identified many crime technology projects that were funded by federal grants distributed to entities other than state and local law enforcement agencies. These recipients included federal agencies, private firms, colleges and universities, and others. Specifically, under the Local Law Enforcement Block Grants Program, 1 percent of authorized appropriations ($20 million per year) was for law enforcement technologies. According to a National Institute of Justice official, this 1-percent set-aside supported 153 projects during fiscal years 1996 through 1998; and, of this total, 145 projects involved grants in which no state or local law enforcement agency participated. Therefore, we did not include grant funding for these 145 projects in the data tables presented in appendix II. However, the other eight projects directly involved state or local law enforcement agencies. For these eight projects, we included grant funding (a total of $2,325,600) in table II.1. We determined that, of the three agencies we reviewed, only Justice provided direct grant funding to state and local law enforcement agencies for crime technology assistance. Within Justice, the Office of Community Oriented Policing Services and the Office of Justice Programs administered the majority of grant programs that included crime technology assistance. We divided crime technology assistance grants into two categories--single-purpose grants and multipurpose grants. We defined single-purpose grants as those that clearly or completely fell within the definition of crime technology assistance. For single-purpose grants, Justice was able to identify applicable grants and specifically quantify funding amounts (see table II.1 in app. II). As table II.1 shows, the National Institute of Justice is one of Justice’s components that provided crime technology assistance. 
The Institute’s role includes allocating funds for certain research and development projects that may directly or indirectly involve crime technology assistance to state and local law enforcement agencies. For purposes of our review and table II.1, we included the Institute’s research and development project costs for only those projects wherein the Institute was able to identify that a state or local law enforcement agency was directly involved or partnered with private firms, companies, universities, or federal agencies. In commenting on a draft of this report, National Institute of Justice officials noted the following: Such partnerships are difficult to identify and quantify unless the state or local agency is a subgrantee, which is something that is not electronically tracked by the Office of Justice Programs or the National Institute of Justice. Also, there are inherent difficulties in using research and development project costs to quantify direct benefits to state and local law enforcement agencies. For instance, partnering arrangements can result in a diffusion of funds among various entities, in contrast to grants made directly to a state or local law enforcement agency. Also, direct benefits may not be immediately realized given that research and development efforts may involve extended time frames and/or may not result in practical implementation. In further commenting on a draft of this report, National Institute of Justice officials emphasized that, in providing funding input for table II.1, they focused on crime technology assistance that involved information systems or information-sharing systems. The officials noted that the Institute is involved in other crime technology-related projects--such as concealed weapons detection systems and less-than-lethal technologies--that were not included in the funding figures provided to us. We defined multipurpose grants as those that could be used for two or more purposes, including crime technology assistance. 
Since Justice did not have a system to track crime technology assistance, it was unable to determine the exact amount of crime technology assistance provided to state and local law enforcement agencies through multipurpose grants. However, in many cases, Justice officials were able to estimate the portions or amounts of multipurpose grants that involved technology assistance, as indicated in the following sections and reflected in table II.2 in appendix II. Byrne Formula Grants may be used for a total of 26 purpose areas, 8 of which could involve crime technology. According to the Bureau of Justice Assistance, three of these eight purpose areas clearly consisted of crime technology-related activities: Purpose area 15(a) was for improving drug-control technology. Purpose area 15(b) was for improving criminal justice information systems. Purpose area 25 was for developing DNA analysis capabilities. However, for the other five potentially relevant purpose areas, Justice was unable to estimate the amounts of crime technology assistance provided by the Byrne Formula Grants. Therefore, regarding these five purpose areas, we conducted a file review of Byrne Formula Grants to determine the amount of crime technology assistance provided to the 10 states that received the most Byrne Formula Grants funds during fiscal year 1998. The 10 states (California, Florida, Georgia, Illinois, Michigan, New Jersey, New York, Ohio, Pennsylvania, and Texas) received nearly half of all Byrne Formula Grants funds during fiscal year 1998. Our findings concerning these 10 states are not statistically projectable to the total universe of recipients. According to the Office of Justice Programs, one of seven purpose areas of the Local Law Enforcement Block Grants Program involves crime technology assistance to state and local law enforcement agencies. 
This purpose area is entitled “equipment and technology” and includes both traditional law enforcement equipment (such as guns, vests, and batons) and crime technology. During fiscal years 1996 through 1998, the Office of Justice Programs obligated approximately $628 million for equipment and technology. Since the Office of Justice Programs did not specifically track how funding was allocated in the equipment and technology purpose area, it was unable to determine how much of the approximately $628 million obligated for fiscal years 1996 through 1998 was for traditional law enforcement equipment and how much was for crime technology. Although the Office of Justice Programs does not specifically track crime technology assistance provided through the Local Law Enforcement Block Grants Program, it asked grantees to complete reports describing how they expended equipment and technology funds. Grantees completing the reports classified their equipment and technology expenditures into 10 subcategories, 5 of which were most likely to include expenditures for crime technology, according to the Office of Justice Programs. However, since completion of the expenditure reports was not mandatory, at the time of our review the Office of Justice Programs had received these reports from only 60 percent of the 1996 grantees, about 10 percent of the 1997 grantees, and none of the 1998 grantees. The grantee expenditure reports showed that approximately $63 million was obligated for equipment and technology during fiscal years 1996 and 1997; and, of this amount, about $33 million (or 52 percent) was for the 5 crime technology-related categories. The Drug Courts Program Office awarded three categories of grants under the Drug Court Discretionary Grant Program: planning, implementation, and enhancement. 
According to the Drug Courts Program Office, based on experience to date, the “enhancement” category is the one most likely to involve crime technology, especially through grants for management information systems. The Office of Justice Programs conducted a word search of Drug Court Discretionary Grants for “crime technology.” The word search identified six Drug Court grants that involved crime technology. Office of Justice Programs staff examined the six grants to determine how much funding the grantees planned to allocate specifically for crime technology purposes. Funding for these grants, three in fiscal year 1997 and three in fiscal year 1998, is depicted in table II.2 in appendix II. The Office of Community Oriented Policing Services had four programs that included funding for crime technology assistance: (1) Making Officer Redeployment Effective (MORE), (2) Advancing Community Policing, (3) Problem-Solving Partnerships, and (4) Community Policing to Combat Domestic Violence. Under MORE, grant funds could be used for hiring civilian personnel or obtaining crime technology. However, for purposes of our review, agency officials specifically excluded personnel funding and identified the funding obligated for crime technology during fiscal years 1996 through 1998. Justice’s Violence Against Women Grants Office administered two multipurpose grant programs that could involve funding for crime technology: (1) Formula Grants for Law Enforcement and Prosecution, also known as the STOP program, and (2) Discretionary Grants to Encourage Arrest Policies. The Office of Justice Programs conducted a word search for “crime technology” that identified 4 STOP grants and 62 Grants to Encourage Arrest Policies during fiscal years 1996 through 1998. The Office of Justice Programs reviewed these grants to determine how much grantees planned to specifically allocate for crime technology purposes.
For applicable support services and systems in Justice and Treasury, we asked responsible officials to provide us with funding data on usage by state and local law enforcement agencies. That is, for criminal history databases, forensics laboratories, and other crime technology-related services and systems, we wanted to know what portions of the federal operating costs were used to support state and local law enforcement. In response, Justice and Treasury officials provided us with estimates for each applicable service and system. (See apps. III and IV.) Table III.1 in appendix III presents the 21 relevant Justice support services and systems that we identified. For 15 of these services and systems--that is, those provided by DEA, FBI, the Immigration and Naturalization Service, and the INTERPOL-U.S. National Central Bureau--Justice officials told us that in most cases they calculated funding estimates by prorating total operating costs (excluding federal salaries and other personnel costs) between federal and state/local law enforcement agencies based on the number of queries or other usage measures. Justice’s other six relevant support services and systems are under three components--the Bureau of Justice Assistance, the Bureau of Justice Statistics, and the National Institute of Justice--that are not directly involved in conducting criminal investigations. According to Justice officials, these services and systems were established explicitly to benefit state and local law enforcement agencies. As such, the officials said that the funding figures provided to us represent total, nonprorated operating costs (excluding personnel costs), even though federal law enforcement agencies can be participants, as is the case with the Regional Information Sharing Systems program. Table IV.1 in appendix IV presents the 29 relevant Treasury support services and systems that we identified.
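The proration approach that agency officials described--allocating a system’s operating costs between federal and state/local users according to query or transaction counts--can be illustrated with a short calculation. The dollar and query figures below are hypothetical and are not figures from this report.

```python
def prorate(total_operating_cost, state_local_queries, total_queries):
    """Allocate operating costs (excluding personnel costs) to state and
    local agencies in proportion to their share of total system queries."""
    return total_operating_cost * state_local_queries / total_queries

# Hypothetical system: $2.0 million in annual operating costs,
# with 60,000 of 150,000 total queries made by state and local agencies.
state_local_share = prorate(2_000_000, 60_000, 150_000)
print(int(state_local_share))  # 800000
```

The same arithmetic applies to other usage measures, such as the transaction counts the Customs Service used for the Treasury Enforcement Communications System.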
Personnel costs were not included in any of the funding figures provided us by the Treasury Department agencies. ATF provided 14 of the 29 support services and systems to state and local law enforcement. For 10 of these 14 support services and systems, ATF prorated the obligated costs of crime technology assistance to state and local law enforcement agencies. For the remaining four ATF support services and systems, obligated costs were not prorated because all of these costs supported state and local law enforcement. The Customs Service provided one relevant support service and system, the Treasury Enforcement Communications System, to state and local law enforcement agencies. Customs prorated its expenditures based on the number of state and local law enforcement transactions relative to the total number of system transactions for each fiscal year from 1996 through 1998. The Federal Law Enforcement Training Center provided 2 of Treasury’s 29 support services and systems to state and local law enforcement agencies. The Center calculated the actual costs of the two training programs provided. The Financial Crimes Enforcement Network provided one support service and system--the Gateway System--to state and local law enforcement agencies. Since the Gateway system was designed to help state and local law enforcement agencies pursue criminal investigations, all obligations during fiscal years 1996 through 1998 applied to state and local law enforcement. IRS’ National Forensic Laboratory supports criminal investigations related to tax and financial statutes. However, the laboratory has few resources to support state and local law enforcement and does so only under exceptional circumstances. Although no precise records existed regarding assistance to state and local law enforcement agencies, an IRS official estimated that during fiscal years 1996 through 1998, the laboratory incurred annual costs of less than $1,000. 
The Secret Service provided 10 of Treasury’s 29 support services and systems. The Secret Service prorated costs for eight of its support services and systems and provided us full or actual costs for the other two (Identification Branch Training and Operation Safe Kids). Also, it should be noted that the support services and systems listed in appendixes III and IV do not include certain multiagency intelligence organizations, such as the El Paso Intelligence Center, which was established to collect, process, and disseminate intelligence information concerning illicit drug and currency movement, alien smuggling, weapons trafficking, and related activity. The Center’s 15 federal members include various Justice (e.g., DEA and FBI), Treasury (e.g., Customs Service and IRS), and Department of Transportation (e.g., Coast Guard and the Federal Aviation Administration) components; the Departments of the Interior and State; and liaison agencies such as the Central Intelligence Agency and the Department of Defense. In addition, all 50 states are associate members. The Center accepts queries from federal and state member agencies. For their respective organizations, we asked officials at Justice, Treasury, and ONDCP to identify any in-kind transfer programs that provided crime technology assistance to state and local law enforcement agencies during fiscal years 1996 through 1998. Responses from the three agencies indicated that only ONDCP had an established, relevant in-kind transfer program (see app. V). Justice and Treasury officials told us that their agencies generally do not directly transfer excess equipment--whether for crime technology or any other purpose--to state and local law enforcement agencies. However, Treasury officials noted that there have been a few isolated or nonroutine instances whereby some equipment has been given directly to a state or local law enforcement agency. 
For example, the officials noted that, during fiscal year 1998:

- The Secret Service directly excessed about $535,000 worth of automated data processing and communications equipment (based on original acquisition cost) to state and local law enforcement agencies.
- The Financial Crimes Enforcement Network directly excessed 186 pieces of computer equipment to state and local law enforcement agencies. The original acquisition cost of this equipment was about $367,000.
- The Secret Service directly excessed two polygraph instruments (original acquisition costs totaled about $8,900); one instrument was given to the St. Joseph (MO) Police Department and the other to the Chicago (IL) Police Department.

Also, Justice and Treasury officials told us their agencies routinely make excess equipment available to other users through the General Services Administration, which is to make dispositions in the following priority order: (1) distribution to other federal agencies, (2) donation to state and local agencies, (3) sale to the general public, and (4) donation to nonprofit organizations through a state agency for surplus property. The Justice and Treasury officials said they had no readily available information regarding how much of the excessed equipment was crime technology related and was subsequently received by state and local law enforcement agencies. Similarly, General Services Administration officials told us that their accounting systems could not provide this type of information. Generally, we relied on funding information that agency officials provided to us. Since agency accounting systems typically did not track crime technology assistance, agency officials used a number of methods to estimate crime technology assistance. For example, the Office of Justice Programs had its program managers calculate how much crime technology assistance was in the multipurpose grant programs.
We did not fully or independently verify the accuracy and reliability of the funding data provided by agency officials. However, to help ensure the overall quality of the funding data, we

- compared agency responses to primary source documents and attempted to reconcile any differences with agency officials;
- obtained information on and reviewed the processes used by agency officials to calculate the estimated amounts of crime technology assistance; and
- compiled the program and funding information we obtained into matrices and asked agency officials to verify the accuracy of this information.

Finally, we note that the funding figures presented in this report are conservative, particularly regarding multipurpose grants, such as Byrne Formula Grants and Local Law Enforcement Block Grants, because we included only amounts that could be reasonably estimated by agency officials or by our reviews of agency information. This appendix presents information about the types and amounts of crime technology-related grants provided by the Department of Justice to state and local law enforcement agencies during fiscal years 1996 through 1998. Generally, as reflected in the following two sections, we categorized applicable grants as being either single-purpose or multipurpose:

- Single-purpose grants are clearly for crime technology purposes--which can include any number of a wide array of activities under the definition of “crime technology assistance” provided in appendix I.
- Multipurpose grants distributed funds that could be used by state and local law enforcement agencies for a variety of purposes including, but not limited to, crime technology assistance. Justice officials provided us with estimates of the funding amounts related to crime technology assistance.

The last section in this appendix provides brief descriptions of applicable grant programs.
Table II.1 shows the types and amounts of grants provided by Justice components to state and local law enforcement to be used expressly for crime technology-related purposes during fiscal years 1996 through 1998. Grants for these purposes totaled about $255.1 million for the 3-year period.

Table II.1: Estimated Obligations of Department of Justice Single-Purpose Grants for Crime Technology Provided to State and Local Law Enforcement Agencies, Fiscal Years 1996 Through 1998 (dollars in thousands; amounts reported by fiscal year for 1996, 1997, and 1998)

Bureau of Justice Assistance
- Drugfire
- Local Law Enforcement Block Grants Program: technical assistance and training allocation
- National White Collar Crime Center
- State Identification Systems Grants Program
- Subtotal

Bureau of Justice Statistics
- National Criminal History Improvement Program
- National Sex Offender Registry
- State Justice Statistics Program for Statistical Analysis Centers
- Subtotal

National Institute of Justice
- Counterterrorism Technology Program
- Crime Act 1-Percent Set-Aside (funded by Local Law Enforcement Block Grants Program)
- Forensic DNA Laboratory Improvement Program
- Science and Technology Programs (funds from the Office of Community Oriented Policing Services)
- Southwest Border States Anti-Drug Information System
- Subtotal

Office of Justice Programs
- Improved Training and Technical Automation Grants $117,873.6

Note: The DNA Laboratory Improvement Program is a joint program with the FBI.

Table II.2 shows estimates of the funding amounts involving crime technology assistance in multipurpose grants for fiscal years 1996 through 1998. As shown, the funding totaled about $746.3 million for the 3 years. However, for some of the grant programs, including the Byrne Formula Grant Program and the Local Law Enforcement Block Grants Program, the amounts shown in the table do not represent all of the applicable funding involving crime technology. Rather, as indicated below, the amounts are only a partial accounting.
State and local law enforcement agencies can use Byrne Formula Grant funds for a total of 26 purpose areas. Sample uses include demand-reduction education programs and improving corrections systems. Eight of the 26 purpose areas could involve crime technology. According to the Bureau of Justice Assistance, three of these eight purpose areas clearly consist of crime technology-related activities (see app. I for more information). The Bureau of Justice Assistance provided us with funding amounts for these three purpose areas for all recipients, which totaled about $183.0 million for the 3 fiscal years, as presented in table II.2. However, because its accounting systems do not specifically track crime technology assistance as a separate category, the Bureau of Justice Assistance was unable to provide us with an estimate for the other five purpose areas. Thus, for these five purpose areas, we reviewed grant files at the Bureau of Justice Assistance. Due to time constraints, we limited our review to the 10 states that received the largest amounts of Byrne Formula Grant funding in fiscal year 1998. Collectively, these 10 states received 48 percent of total Byrne Formula Grant funding in fiscal year 1998. For these 10 states, we reviewed grant files for fiscal years 1996, 1997, and 1998. The relevant funding for these five purpose areas (about $5.0 million total for the 3 years) is also included in table II.2. An analysis of grant files for the other 40 states, the District of Columbia, and U.S. territories would likely result in a higher funding total. As shown in table II.2, the funding figures for the Local Law Enforcement Block Grants Program totaled about $32.7 million and are based on data from about 60 percent of fiscal year 1996 grantees and about 10 percent of fiscal year 1997 grantees. No data were available for fiscal year 1998. As indicated below, a more thorough accounting would likely result in a higher funding total for this grant program.
According to the Bureau of Justice Assistance, only one purpose area of the Local Law Enforcement Block Grants Program involves crime technology assistance--that is, the purpose area called “equipment and technology.” However, by the Bureau’s definition, the “equipment” portion of this purpose area does not constitute crime technology assistance. Nonetheless, in response to our request, a Bureau of Justice Assistance official said that the Bureau’s accounting systems could not differentiate this portion of the purpose area from the “technology” portion. Alternatively, the official provided us with the following information:

- The equipment and technology purpose area has 10 subcategories. Of these, five subcategories--i.e., those pertaining to systems improvements (data linkage, criminal history records, etc.)--are most likely to include expenditures related to crime technology.
- For these five subcategories, actual expenditure data have been reported by some grantees for fiscal years 1996 and 1997. Specifically, based on reports from about 60 percent of fiscal year 1996 grantees, actual expenditures by these grantees for the five subcategories totaled about $28.9 million in 1996. Based on reports from about 10 percent of fiscal year 1997 grantees, actual expenditures by these grantees for the five subcategories totaled about $3.7 million. There has been no reporting by fiscal year 1998 grantees.
- Thus, for the five subcategories, the reported actual expenditures at the time of our review totaled about $32.7 million, as reflected in table II.2.
- From these same 1996 and 1997 grantees, the total expenditures reported for all 10 subcategories at the time of our review were about $63.4 million.
- The reported actual expenditures for the five crime technology-related subcategories ($32.7 million) represent about 50 percent of the total reported actual expenditures for the “equipment and technology” purpose area ($63.4 million).
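The share cited in the last item above can be recomputed directly from the reported expenditure totals; the figures below are taken from the text, and the calculation is shown only to make the ratio explicit.

```python
# Reported actual grantee expenditures, fiscal years 1996-1997 (millions of dollars).
crime_technology_subcategories = 32.7  # five crime technology-related subcategories
all_subcategories = 63.4               # all 10 "equipment and technology" subcategories

share_pct = round(100 * crime_technology_subcategories / all_subcategories, 1)
print(share_pct)  # 51.6, i.e., roughly half
```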
During fiscal years 1996 through 1998, funds obligated for this purpose area totaled about $628 million. However, the 50-percent figure cannot be used to statistically project how much of this total involved crime technology assistance during fiscal years 1996 through 1998.

Table II.2 (continued)

Office of Community Oriented Policing Services
- Advancing Community Policing Grants
- Community Policing to Combat Domestic Violence Grants
- Making Officer Redeployment Effective Grants
- Problem-Solving Partnerships
- Subtotal

Violence Against Women Grants Office
- Grants to Encourage Arrest Policies
- STOP Violence Against Women Formula Grants
- Subtotal

Total: $746,338.3

Notes: The amounts shown represent funding for three purpose areas: purpose area 15(a) is for improving drug-control technology; purpose area 15(b) is for criminal justice information systems (including a 5-percent set-aside for improving criminal justice records); and purpose area 25 is for developing DNA analysis capabilities. The figures shown are based on our review of grant files for 10 states that received the largest amount of Byrne Formula Grant funding. Funding for these 10 states represented approximately 48 percent of total Byrne funding for fiscal year 1998.

Brief descriptions of Justice’s single-purpose and multipurpose grant programs are given next. Following are brief descriptions of the Justice single-purpose grants listed in table II.1. The grant programs are grouped by the applicable Justice component. The Bureau of Justice Assistance provided the following grants for crime technology assistance to state and local law enforcement agencies. Drugfire: This program consists of a “reimbursable agreement” from the FBI under which the Bureau of Justice Assistance administers grants to state and local law enforcement to purchase Drugfire software, equipment, and training. Drugfire is a database system providing the ability to exchange and compare images of fired ammunition casings and bullets.
The database is capable of connecting shootings based on a comparison and matching of firearms-related evidence. Local Law Enforcement Block Grants - Technical Assistance and Training Allocation: This allocation supports investigative personnel in using surveillance equipment and information systems applications. The program also provides for technology training. National White Collar Crime Center: The Center provides limited “case funding” assistance to states and localities to track and investigate white- collar crimes. The focus is on improving information-sharing capabilities in multijurisdictional investigations. State Identification Systems Grants Program: These are formula grants to states to develop computerized identification systems integrated with FBI’s national identification databases. The FBI reimburses the Bureau of Justice Assistance to administer programs to integrate state systems with the national DNA database (CODIS), the National Crime Information Center, and the Integrated Automated Fingerprint Identification System. The Bureau of Justice Statistics provided the following grants for crime technology assistance to state and local law enforcement agencies. National Criminal History Improvement Program: This program helps states upgrade the quality and completeness of criminal records and provides increased compatibility with and access to national crime information databases. A priority is to ensure that state criminal history records are complete and ready for access through the National Instant Criminal Background Check System. National Sex Offender Registry: This is a component of the National Criminal History Improvement Program but is separately funded. States use funds to identify, collect, and disseminate information on sexual offenders within their jurisdictions. Funds are also available to enhance state access to the FBI’s sex offender database. 
State Justice Statistics Program for Statistical Analysis Centers: This program is designed to provide financial support to supplement state funding of a central state criminal justice statistics capability to carry out data collection and research that can benefit both the state and the nation. Grants are awarded to state statistics centers for data collection and analysis relating to identifiable themes, including technology-based research focusing on the analysis and use of machine-readable criminal history record data for tracking case-processing decisions, evaluation of record systems management, or studies related to the use of records to limit or control firearms acquisition by ineligible individuals. The National Institute of Justice provided the following grants for crime technology assistance to state and local law enforcement agencies. Counterterrorism Technology Program: This program assists state and local law enforcement by developing technologies to combat terrorism and improve public safety. The funding amounts presented in table II.1 are for a project (the “InfoTech” project) to develop a technology to allow law enforcement agencies to share information using their existing systems and networks. Crime Act 1-Percent Set-Aside: Under the Local Law Enforcement Block Grants Program, 1 percent ($20 million) per year--of the $2 billion annual authorizations for fiscal years 1996 through 1998--was set aside for use by the National Institute of Justice for new law enforcement technologies. According to National Institute of Justice officials, this 1-percent set-aside supported 153 projects during the 3 fiscal years; and, of this total, 8 projects directly benefited state or local law enforcement agencies. Funding for these eight projects (about $2,325,600) is reflected in table II.1.
Forensic DNA Laboratory Improvement Program: This program is for improving the quality and availability of DNA analysis for law enforcement identification, such as by expanding on-line capabilities with the national DNA database. Science and Technology Programs: This broad grant category supports research, development, and evaluation of approaches, techniques, and systems to improve the criminal justice system. Funded by a transfer of funds from the Office of Community Oriented Policing Services, state and local law enforcement grantees test and implement crime-fighting technologies that serve a community policing function. Southwest Border States Anti-Drug Information System: These grants can be used to construct or improve state databases, build interface capabilities with the overall information system, and help procure hardware and consulting services. Improved Training and Technical Automation Grants: This is a general technology assistance program focusing on communications and information integration. Grant purposes include improving communications systems, establishing or improving ballistics identification programs, increasing access to automated fingerprint identification systems, and improving computerized collection of criminal records. This program transferred funds for the FBI’s Law Enforcement On-Line program in fiscal year 1996, which are reflected in table III.1. Following are brief descriptions of Justice’s multipurpose grants listed in table II.2. Multipurpose grants have numerous criminal justice purposes established by law. State and local law enforcement can use the respective funds for a number of activities, including crime technology. The grant programs are grouped by the applicable Justice component. Byrne Discretionary Grants: These grants are authorized to be awarded to state and local law enforcement, as well as private entities, for crime control and violence prevention activities. 
The grant program focuses specifically on education and training for criminal justice personnel, technical assistance, multijurisdictional projects (e.g., state records integration), and program demonstrations. Grants also support research and development projects. Byrne Formula Grants: States receive federal funds to improve the functioning of criminal justice systems. Amounts are based on a set percentage (0.25 percent) per state, with the remaining funds allocated based on state population. Eligible uses of funds are categorized by 26 program purpose areas. Of these, eight contain or could encompass crime technology assistance. Three of these purpose areas--criminal justice information systems, drug-control technology, and DNA analysis--are allocated solely for crime technology assistance. Included in the criminal justice information systems purpose area is the 5 percent of Byrne funds that states must allocate toward improving criminal justice records. In the remaining five purpose areas, a portion of funds could potentially be spent on crime technology. Local Law Enforcement Block Grants Program: General purpose law enforcement grants are distributed directly to localities. The amounts are determined by the number of violent crimes reported in the jurisdiction. The program has seven broad purpose areas. Of those, one contains a specific funding stream for procuring equipment and technology. Funds cannot be used to purchase or lease tanks, fixed-wing aircraft, or other large items without direct law enforcement uses. Drug Court Discretionary Grant Program: This program provides financial and technical assistance to states and localities to develop and implement drug treatment courts that use a mix of treatment, testing, incentives, and sanctions to remove nonviolent offenders from the cycle of substance abuse and crime. Grant recipients can use funds to support the development of information management systems and accompanying software.
Data sharing among drug courts is a primary focus. Advancing Community Policing: These grants assist state and local law enforcement in further developing community policing infrastructures. Grants can be used to purchase technology and equipment, statistical and crime-mapping software, and training services. Also, grants can be used to help law enforcement agencies overcome organizational obstacles and to establish demonstration centers that model current community policing methods. Community Policing to Combat Domestic Violence: These grants fund innovative community policing efforts to curb domestic violence by developing partnerships between law enforcement agencies and community organizations. Making Officer Redeployment Effective Grants: This program serves a broad purpose of increasing the deployment of law enforcement officers devoted to community policing by expanding available officer time without hiring new officers. Grants can be used to purchase equipment and technology to free up community policing resources. Grants fund up to 75 percent of the cost of equipment and technology, with a 25-percent local match. Problem-Solving Partnerships: These grants fund problem-solving partnerships between police agencies and community organizations to address persistent crime problems, such as drug dealing and other public disorder problems. Grants can be used for technology training and procurement of equipment. Grants to Encourage Arrest Policies: This grant program encourages states and localities to increase law enforcement attention to domestic abuse. Grants can support development of integrated computer tracking systems as well as provide training for police to improve tracking of domestic violence cases. STOP Violence Against Women Formula Grants: These formula grants are for creating a “coordinated, integrated” strategy involving all elements of the criminal justice system to respond to violent crimes against women. 
Broad program purposes include training for law enforcement and developing and implementing “services” to effectively address violent crimes against women. Department of Justice components provided crime technology assistance in the form of access to and use of specialized support services and systems. Table III.1 shows the specific types of assistance provided by the Bureau of Justice Assistance, Bureau of Justice Statistics, DEA, FBI, the Immigration and Naturalization Service, the INTERPOL-U.S. National Central Bureau, and the National Institute of Justice. As shown, regarding support services and systems, Justice’s crime technology assistance to state and local law enforcement agencies for fiscal years 1996 through 1998 totaled about $146.6 million. Marshals Service officials told us that their organization did not have support services and systems that provided crime technology assistance to state and local law enforcement. The last section of this appendix provides brief descriptions of the support services and systems presented in table III.1. Following are brief descriptions of Justice’s crime technology assistance programs listed in table III.1. The descriptions are based on information that the Justice components provided to us. The services and systems are grouped by the component providing them. National Cybercrime Training Partnership: To address the changing role of computers and the Internet in the commission of crimes, the National Cybercrime Training Partnership supports all levels of law enforcement with “cyber-tools, research, and development.” The Partnership is developing a nationwide communications network to serve law enforcement by providing secure “interconnectivity” over the Internet. The Partnership also focuses on training by, among other things, (1) developing a cadre of instructors capable of training law enforcement and (2) distributing curricula through nontraditional modes, such as CD-ROMs.
In addition, the Partnership plans to serve as a clearinghouse for information and experts available to law enforcement. National White Collar Crime Center: Headquartered in Richmond, VA, the center maintains a national support system for state and local law enforcement to facilitate multijurisdictional investigations of white-collar and economic crimes. The center operates a training and research institute that serves as a national resource for fighting economic crime. Regional Information Sharing Systems: Six regional criminal intelligence centers focus on multijurisdictional criminal activities. Each center operates in a mutually exclusive geographic area, a division designed to more effectively support investigation and prosecution of regional crimes. State and local law enforcement support services and systems include an Intelligence Pointer Database, a National Gang Database (under development), and a secure intranet. All six centers are electronically connected, and state and local members (about 87 percent of total membership as of December 31, 1997) are provided with access to the secure intranet, which facilitates secure E-mail transmissions and access to other databases. The centers also sponsor technical training conferences. Member jurisdictions can be assessed a nominal annual fee that varies by center. National Incident-Based Reporting System: This system represents the next generation of crime data from federal, state, and local law enforcement agencies. Designed to replace the Uniform Crime Reporting Program initiated by the FBI in 1930, the development of the National Incident-Based Reporting System represents a joint effort of the Bureau of Justice Statistics and the FBI to encourage the presentation of higher quality data on a wider variety of crimes. The Bureau of Justice Statistics has played a significant role in fostering participation and developing techniques to assist jurisdictions in conforming to program requirements. 
Also, the Bureau of Justice Statistics funds the operation of a dedicated website and the formulation of model analytic strategies. DEA provided the following crime technology assistance to state and local law enforcement agencies. Forensic laboratory analysis, District of Columbia Metropolitan Police: One of eight forensic laboratories located throughout the United States, the DEA’s Mid-Atlantic Laboratory provides forensic support to the District of Columbia Metropolitan Police. National Drug Pointer Index: This system provides participating law enforcement agencies with an automated response capability to determine if a case suspect is under active investigation by any other participating agency. If a match occurs, the system provides point-of-contact information for each identified record. Simultaneously, the system notifies each record owner that point-of-contact information has been released to the entry maker for that particular target. If the search finds no matching records, a negative response is returned. Training: DEA provided three training courses that were applicable to our definition of crime technology assistance. The Clandestine Laboratory Unit and the Specialized Training Unit both provided training on topics such as how to use technological devices to assess risks presented by chemicals found in laboratories. DEA also provided briefings to state and local law enforcement personnel on the National Drug Pointer Index and its benefits. The FBI Laboratory, which is one of the largest and most comprehensive forensic laboratories in the world, and the FBI’s Criminal Justice Information Services Division provided various support services and systems to state and local law enforcement agencies. Combined DNA Index System: An index containing DNA records from persons convicted of crimes. State and local crime laboratories are able to store and match DNA records. 
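The match-and-notify flow described earlier for DEA's National Drug Pointer Index can be illustrated with a small sketch. The class names and record fields below are hypothetical illustrations of the concept, not the actual system's design.

```python
from dataclasses import dataclass

@dataclass
class PointerRecord:
    # Hypothetical entry: a suspect identifier plus the owning
    # agency's point-of-contact information.
    suspect_id: str
    agency: str
    contact: str

class PointerIndex:
    """Sketch of a deconfliction pointer index: a query returns
    point-of-contact information for each matching record, and each
    record owner is told who received that information."""

    def __init__(self):
        self._records = []

    def add(self, record):
        self._records.append(record)

    def query(self, suspect_id, requester):
        matches = [r for r in self._records if r.suspect_id == suspect_id]
        # Each matched record's owner is notified that its contact
        # info was released to the requesting agency.
        notifications = [(r.agency, requester) for r in matches]
        return matches, notifications
```

A query with no matching records returns an empty list, mirroring the system's negative response.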
Computer Analysis Response Team: Technical assistance regarding computer technology and computer forensics is provided to federal as well as state and local law enforcement agencies. Criminal Justice Information Wide Area Network: This network supports the electronic capture, submission, processing, matching, storage, and retrieval of both criminal and civil fingerprints received by the FBI for use in the Integrated Automated Fingerprint Identification System environment. Express: An explosives reference database, Express provides and correlates information from bombing crime scenes and undetonated explosive devices. Fingerprint Identification Program: An identification process that scans fingerprint cards and captures fingerprint features for classification and selects candidates for future manual comparison. Identification Automated System: A system containing the criminal history records of persons arrested for the first time and reported to the FBI since July 1974, as well as selected manual records that have been converted to the automated system. Law Enforcement On-Line: As the intranet for the U.S. law enforcement community, Law Enforcement On-Line links all levels of law enforcement throughout the United States and supports broad, immediate dissemination of information. Learning programs through electronic sources are also delivered to local, state, and federal law enforcement through this intranet. National Crime Information Center: This is the nation’s most extensive computerized criminal justice information system. It consists of a central computer at FBI headquarters, dedicated telecommunications lines, and a coordinated network of federal and state criminal justice information systems. The center provides users with access to files on wanted persons, stolen vehicles, and missing persons. The system’s largest file, the Interstate Identification Index, provides access to millions of criminal history information records contained in state systems. 
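The Interstate Identification Index arrangement described above, a central index that points to criminal history records held in state systems rather than storing them centrally, can be sketched as a two-step lookup. The state names and records below are invented for illustration.

```python
# Hypothetical state record holdings; in this pattern the central
# index does not store the histories itself, only which states do.
state_systems = {
    "VA": {"ID-42": "VA criminal history record"},
    "OH": {"ID-42": "OH criminal history record", "ID-7": "OH record"},
}

# Central index: subject identifier -> states known to hold a record.
central_index = {"ID-42": ["VA", "OH"], "ID-7": ["OH"]}

def iii_lookup(subject_id):
    """Resolve a subject through the central index, then retrieve
    the underlying records from each state's own system."""
    return [state_systems[s][subject_id]
            for s in central_index.get(subject_id, [])]
```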
National Integrated Ballistics Information Network: A joint effort between the FBI and the Bureau of Alcohol, Tobacco and Firearms, this national database computer system allows laboratories across the country to exchange and compare images of fired ammunition casings. The images of microscopic marks that result after a gun is fired are stored in this database. Training: The FBI Laboratory and the Criminal Justice Information Services Division’s Education/Training Services Unit provide training to the law enforcement community. Forensic laboratory training includes a variety of topics such as management of forensic and technical services; identification and comparison of latent fingerprints; explosives detection; postblast bombing investigations; and responding to and resolving the scientific, forensic, and technical elements of incidents involving chemical, biological, and nuclear materials. The Criminal Justice Information Services Division provides training on, among other things, the National Crime Information Center, Uniform Crime Reports, and the Integrated Automated Fingerprint Identification System. The Law Enforcement Support Center is accessible 24 hours a day to the criminal justice community. The center provides state and local law enforcement agencies with the ability to exchange information on the immigration status of foreign-born suspects under arrest or investigation. Information requests are submitted through the National Law Enforcement Telecommunications System. Upon receipt of an inquiry, the center searches INS and other criminal databases and transmits the findings to the requester. Also, the center provides an important source of information to each state conducting firearms purchaser inquiries stemming from the Brady Handgun Violence Prevention Act (Brady Act). 
State firearms points of contact may query the center on prospective gun purchasers and receive a “proceed” or “deny” recommendation using the disqualifying criteria mandated by the Brady Act. INTERPOL-U.S. National Central Bureau is a resource for state and local law enforcement agencies to conduct international investigations. The bureau is electronically connected through a secure network to the national police agencies of 177 INTERPOL member countries and the INTERPOL General Secretariat (headquarters) in Lyons, France. Also, the bureau communicates with state offices that have liaisons responsible for contacting foreign police. Coordination between the liaison offices and member countries is maintained through the National Law Enforcement Telecommunications System. National Law Enforcement and Corrections Technology Centers: The five regional centers and one national center identify and evaluate available technologies to determine law enforcement suitability, facilitate public/private partnerships to develop new technologies, and provide technology and information assistance to law enforcement and corrections agencies by introducing promising new technologies and providing technology training. According to a National Institute of Justice official, while the centers do not directly transfer crime technology, they serve as (1) brokers between crime technology manufacturers and state and local jurisdictions and (2) an information resource for law enforcement agencies. Southwest Border States Anti-Drug Information System: A secure law enforcement information sharing system that connects intelligence databases of four southwest border states (Arizona, California, New Mexico, and Texas); the three Regional Information Sharing Systems centers in that area; and the El Paso Intelligence Center. This system provides for secure E-mail transmissions and includes a preestablished query system.
The system allows all participants to query the databases of all other participants and is composed of an administrative web server that offers key electronic services, such as providing agency contact information and system usage statistics. Department of the Treasury components provided crime technology assistance to state and local law enforcement agencies in the form of access to, and use of, specialized support services and systems, such as computerized databases and forensics laboratories. Treasury components also provided training on the use of technology-related equipment to state and local law enforcement agencies. Table IV.1 shows the specific types of crime technology assistance provided by the Bureau of Alcohol, Tobacco and Firearms; the Customs Service; the Federal Law Enforcement Training Center; the Financial Crimes Enforcement Network; IRS; and the Secret Service. As shown, Treasury’s crime technology assistance for fiscal years 1996 to 1998 totaled about $15.9 million. The last section in this appendix provides brief descriptions of applicable support services and systems. Following are brief descriptions of the Treasury’s crime technology assistance programs listed in table IV.1. The descriptions are based on information that the Treasury components provided to us. The services and systems are grouped by the component providing them. ATF provided the following types of crime technology assistance to state and local law enforcement agencies: Accelerant Detection Analysis: The laboratory analysis of fire debris to detect and identify flammable liquids potentially used as accelerants in an incendiary fire. ATF has offered on-site training in fire debris analysis to analysts from state and local laboratories and is currently assisting in fire debris analysis training provided by the National Fire Safety Training Center. Advanced Serial Case Management: A case management and lead-tracking database designed to accept large volumes of data. 
The database analyzes information to assist investigators in identifying trends, patterns, and investigative leads for major federal or state incidents. Arson CD-ROM: An interactive training tool, the CD-ROM/virtual reality training program is designed to elevate the overall investigative competency levels of all fire investigators in the United States and establish a consistent standard for fire investigation. Arson/Explosives Incident System: A clearinghouse for data collected from several agencies. The system produces statistical and investigative data that can be used real-time by arson and bomb investigators. State and local agencies can query the system for things such as component parts, stolen explosives, and device placement. Consolidated Gang Database: A database to track various types of information about outlaw motorcycle gangs, street and ethnic gangs, and antigovernment groups involved in criminal activity. Data are compiled from ATF investigative reports and state and local law enforcement agencies. Dipole Study: The Dipole Study is intended to assist state, local, and federal explosives investigators and building designers. For courtroom presentation purposes, the study produced a software package that will allow investigators to support their theory of an explosion and explain blast damage and fragmentary damage. Electronic Facial Identification Technique: A specialized software program that allows operators to create composite sketches of suspected perpetrators or unidentified persons who surface as suspects or potential witnesses during an investigation. Explosive Forensics: Examination of debris collected at the scene of an explosion or of suspected explosive material obtained from recovery or undercover purchase. Federal Firearms License System: The primary objective of this system is to produce a relational database of firearms and explosives licensing information. This system allows users to search for licensing information. 
On a semiannual basis, information is provided to state and local police departments identifying the federal firearms licensees in their geographical area of responsibility. Firearms Tracing System: The National Tracing Center traces firearms for federal, state, local, and foreign law enforcement agencies. The firearms are traced from the manufacturer to the retail purchaser. Integrated Ballistics Identification System: A system that allows firearms technicians to digitize and automatically correlate and compare bullet and shell casing signatures. The equipment quickly provides investigators with leads to solve greater numbers of crimes. National (Arson and Explosives) Repository: A database, the repository contains information regarding arson incidents and the actual and suspected criminal misuse of explosives throughout the United States. The information will be available for statistical analysis and research, investigative leads, and intelligence. National Response Team: The National Response Team was established to assist state and local police and fire departments in investigating large- scale fires and explosions. Youth Crime Gun Interdiction Initiative: This initiative is a focused component of the federal effort to combat firearms trafficking. Working with state and local law enforcement agencies, the tracing of crime guns provides leads to interdict the trafficking of firearms to youths and juveniles. Among other things, this initiative makes ATF’s firearms trace capabilities and data more accessible to state and local law enforcement agencies. The Treasury Enforcement Communications System has three programs that provide crime technology-related assistance to state and local law enforcement agencies: Diplomatic Licensing and Registration Program: Users may query vehicle registration and drivers license information for persons and vehicles licensed by the Department of State. 
Aircraft Registration and Tracking: Users may query information about aircraft registered with the Federal Aviation Administration. Bank Secrecy Act Program: Provides information to state agencies responsible for enforcing state money laundering statutes. The Federal Law Enforcement Training Center provides two training programs involving use of crime technology-related equipment: Advanced Airborne Counterdrug Operations Training Program: This program provides students with the opportunity to use technical equipment in darkness. Airborne Counterdrug Operations Training Program: This program teaches students how to use equipment such as global positioning hand-held devices and thermal imaging systems. Project Gateway: Established to facilitate the exchange of Bank Secrecy Act information with state and local law enforcement agencies. Gateway incorporates custom-designed software to provide designated state coordinators with rapid and direct on-line electronic access to Bank Secrecy Act records, including suspicious activity reports. National Forensic Laboratory: The laboratory primarily supports IRS criminal investigations involving violations of federal tax law and related financial crimes. According to IRS officials, a limited or negligible amount of support (less than $1,000 per year) is provided to state and local law enforcement. However, when provided, forensic support or assistance may include (1) document and handwriting analyses, (2) polygraph examinations, and (3) audio/video surveillance tape enhancements. The Secret Service provided the following types of crime technology assistance to state and local law enforcement agencies: Audio/Image Enhancement: Audio enhancements include 911 calls, telephone answering machine recordings, court recordings, and gunfire analysis. Image enhancements include images obtained from surveillance cameras that recorded robberies of stores, banks, and ATMs, as well as devices that recorded homicides in a variety of locations. 
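As one illustration of the kind of pattern screening that Bank Secrecy Act data supports, the sketch below flags possible structuring, that is, repeated cash deposits just under the $10,000 currency-transaction-reporting threshold. The rule and thresholds are a simplified invention for illustration, not FinCEN's or Project Gateway's actual analytics.

```python
def flag_structuring(deposits, threshold=10000, min_count=3):
    """Flag accounts with min_count or more cash deposits that fall
    just under the reporting threshold (within 10 percent of it).

    deposits: list of (account, amount) tuples.
    Returns the set of flagged accounts."""
    near_threshold = {}
    for account, amount in deposits:
        if threshold * 0.9 <= amount < threshold:
            near_threshold[account] = near_threshold.get(account, 0) + 1
    return {a for a, n in near_threshold.items() if n >= min_count}
```

Deposits at or above the threshold are reportable on their own and are not counted by this rule.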
Automated Fingerprint Identification System: This hybrid network of state and local digitized fingerprint databases provides in excess of 20 million fingerprint records to be searched by the Secret Service’s Identification Branch. Cellular Tracking Project: Equipment is used to track cellular telephones. The system is capable of identifying the suspect’s location. This system is made available to federal, state, and local law enforcement agencies upon request and, according to Secret Service officials, has proven to be very successful in cases where state and local law enforcement officials have requested this Secret Service equipment and expertise in murder, carjacking, and kidnapping cases. Computer Forensic Support: Forensic examination of electronic evidence is provided through the Electronic Crimes Special Agent Program. Special agents, among other things, assist state and local law enforcement agencies in the examination of computers, computer systems, electronic communication systems, telecommunication systems and devices, electronic organizers, scanners, and other devices manufactured to intercept or duplicate telecommunications services. Electronic Crimes Branch Support: This branch supports law enforcement investigations involving computers, computer systems intrusions, electronic communication systems, and telecommunication systems and devices. Among the branch’s duties are the oversight of all telecommunications and computer fraud cases; establishment and operation of a forensic laboratory to process electronically stored data and telecommunications devices; and coordination of training initiatives for field office investigators, state and local police, and industry representatives. Encryption: Assistance is provided to state and local law enforcement agencies in decoding encrypted computer files. 
Identification Branch Training: This branch provides training to state and local law enforcement agencies on appropriate methods for operating computerized fingerprinting systems. Operation Safe Kids: This program is sponsored by the Secret Service at the request of state and local law enforcement agencies. Operation Safe Kids uses digital cameras and fingerprint scanning technology to provide parents with a printed document that contains a photograph and thumbprints of their child. Polygraph examinations: The Secret Service’s Polygraph Branch conducts polygraph examinations regarding criminal activities ranging from embezzlement to child molestation and homicide. Questioned Document Branch: Four branch sections (document examination, instrumental analysis, document authentication, and automated recognition) provide forensic science support. The branch maintains three databases that can support state and local law enforcement: 1. The Forensic Information System for Handwriting is a software system where known and unknown writing samples are scanned, digitized, and subjected to mathematical algorithms against authors maintained in the database. 2. The Watermark Collection database contains over 23,000 watermarks enabling the identification of partial as well as complete watermarks. This database is updated several times a year as the paper industry submits new and updated watermarks. 3. The Ink Collection database contains over 7,500 inks dating back to the 1920s. Chemical analysis of these inks allows for differentiation and identification of the first date a particular entry could have been made based on the first commercial availability date of the writing ink. This database is updated yearly through submissions from the ink industry. ONDCP’s Counterdrug Technology Assessment Center (CTAC) was established to serve as the central counterdrug enforcement research and development organization of the U.S. government. 
CTAC was established by the National Defense Authorization Act for Fiscal Year 1991 (P.L. 101-510), which amended the Anti-Drug Abuse Act of 1988 and placed CTAC under the operating authority of the Director of ONDCP and required that CTAC be headed by a Chief Scientist of Counterdrug Technology. CTAC’s mission is to advance technologies that support the national drug control goals by improving the effectiveness of law enforcement, drug interdiction, and substance abuse treatment research. As part of its mission, CTAC is to identify and define the scientific and technological needs of federal, state, and local drug enforcement agencies. According to ONDCP officials, in fiscal year 1998, CTAC implemented the Technology Transfer Pilot Program to assist state and local law enforcement agencies. According to ONDCP officials, for fiscal year 1998, the Technology Transfer Pilot Program involved 18 projects or systems that fit our definition of crime technology assistance. However, only 15 of these projects or systems were transferred or provided to state and local law enforcement agencies. According to the CTAC Director, as of December 1998, these 15 projects or systems involved a total of 202 recipient state and local law enforcement agencies. As table V.1 shows, the fiscal year 1998 obligations for these 15 projects or systems totaled $13 million. The Technology Transfer Pilot Program matched CTAC-sponsored systems with state or local law enforcement agency requirements and arranged for the transfer of those systems. In order to participate in the program, state and local law enforcement agencies submitted letters and a completed ONDCP questionnaire. When awarded, the recipients received a technical team, training, and technical assistance that was directly related to the technology product. The U.S.
Army Electronic Proving Ground at Fort Huachuca, AZ, assisted CTAC by managing the Technology Transfer Pilot Program and providing engineering expertise in communications and electronics. The following are descriptions of the 15 CTAC crime technology assistance systems (see table V.1) that were transferred to state and local law enforcement agencies in fiscal year 1998. Air-Ground Surveillance Management System: This technology provides the ability to track and locate both field units (friendly assets) and suspects (targets) using a variety of remote sensors. Tracking and other information is graphically displayed and archived on a moving map display at the base station. Body Worn: A miniaturized audio device (body wire). The secure multichannel transmitter, with voice privacy and low probability of detection capabilities, can be worn inconspicuously during covert operations. Borderline With Voicebox System: A telephone intercept monitoring and recording system. The system digitally records telephone conversations, faxes, and computer data, plus any short notes typed by the monitor/operator. The recordings are then available for review and transcription and use in investigations. Data Locator System: A software package that provides secure exchange of electronic mail, database input and extraction, and police intelligence analysis information over a standard internet connection. Drugwipe: A surface residue test kit that identifies trace amounts of cannabis, cocaine, opiates, and amphetamines. Evidence of narcotic materials is identified by color change. GLADYS: A computer-based system that uses telephone company billing records to analyze cellular phone traffic. Mini-Buster Contraband Detector: A portable contraband detection kit. A “Buster” detector indicates differences in density encountered when moved across a surface.
The kit includes an ultrasonic range finder that measures distances up to 90 feet within 1-inch accuracy for detection of false walls or bulkheads. The flexible fiber optic scope contains a portable light source for remote viewing inside inaccessible spaces such as fuel tanks. Money Laundering Software: A software package used to detect suspicious financial transactions. The software identifies underlying patterns and trends associated with money laundering, including suspicious financial transactions within complex data sources, such as state and bank activities involving money transfers and currency exchanges. Signcutter: A system that tracks and locates vehicles using a tracking unit based on Global Positioning System technology. The unit may be used to track law enforcement vehicles or covertly track a “suspect” vehicle. Small Look: A miniaturized video surveillance system consisting of a miniature, solid-state electronic camera system. It captures, processes, and stores hundreds of digital picture images in nonvolatile memory. Tactical Speech Collection and Analysis System: A voice identification system that can store up to 25 voice samples on the system’s hard drive. Thermal Imagers: An infrared imaging surveillance system that provides night vision capabilities. Real-time video pictures are generated in all lighting conditions when the unit senses heat. Vapor Tracer: A hand-held detection system, this device is capable of detecting and identifying extremely small quantities of narcotics and explosives. This system works by drawing a sample of vapor into the detector where it is heated, ionized, and identified. Video Stabilization System: A surveillance video enhancement system. The system is used to eliminate jitter and camera motion, typically associated with surveillance video. Wireless Interoperability System: A computer-based interagency radio communications switching system. 
Computer-aided switching technology is used to connect numerous law enforcement agencies to a central radio system console for the purpose of improving interagency communications during counternarcotic investigations. Although available in fiscal year 1998, three other crime technology-related systems sponsored by ONDCP were not transferred to state and local law enforcement agencies: Secure Messaging and Investigative Information Transmission System: A system using a secure web server to upload, search, and distribute images using wireless transmission between field and police headquarters. Suspect Pointer Index Network: A relational database application to be used for the entry, retention, and analysis of multimedia data, such as images and text, supporting counterdrug operations, general case investigations, and crime analysis requirements. Tactical Video Communication System: An analog communication system for transmitting live video and audio from a forward area back to a command post.
Jan B. Montgomery, Assistant General Counsel
Geoffrey R. Hamilton, Senior Attorney
Pursuant to a congressional request, GAO reviewed the crime technology assistance provided by the federal government to state and local law enforcement agencies for fiscal years 1996 through 1998, focusing on the types and amounts of assistance provided by the Department of Justice (DOJ), the Department of the Treasury, and the Office of National Drug Control Policy (ONDCP). GAO noted that: (1) identifiable crime technology assistance provided by DOJ, Treasury, and ONDCP to state and local law enforcement agencies during fiscal years 1996 through 1998 totaled an estimated $1.2 billion; (2) this total is conservative because--given that these federal agencies are not required to, and do not, specifically track crime technology assistance separately in their accounting systems--GAO included only amounts that could be identified or reasonably estimated by agency officials or GAO; (3) this estimate was particularly conservative regarding multipurpose grants, which by design can be used for a variety of purposes; (4) further, regarding applicable support services and systems, GAO did not include personnel costs; (5) the large majority of identified crime technology assistance to state and local law enforcement agencies--$1.0 billion or about 85 percent of the estimated $1.2 billion total during fiscal years 1996 through 1998--was grants, all of which were administered by DOJ; (6) the three largest crime technology assistance grants during the 3 fiscal years were the: (a) Office of Community Oriented Policing Services' Making Officer Redeployment Effective grants ($466.1 million); (b) Bureau of Justice Assistance's Byrne Formula Grants ($188.0 million); and (c) the Bureau of Justice Statistics' National Criminal History Improvement Program ($147.2 million); (7) support services and systems was estimated to be the second largest category of crime technology assistance provided to state and local law enforcement agencies; (8) in this category, DOJ was the major provider,
with an estimated $146.6 million in assistance compared to Treasury's $15.9 million; (9) regarding in-kind transfers, responses from the three agencies GAO reviewed indicated that only ONDCP had an established, relevant program; and (10) ONDCP's technology transfer program totaled an estimated $13.0 million in assistance during fiscal year 1998, the first year of the program's existence.
States and localities have developed and largely funded their own indigent defense systems. To do so, they have generally adopted one or more of the following methods for providing indigent defense—employing full or part-time public defenders to handle the bulk of cases requiring counsel; entering into contracts with private attorneys, often after a bidding contest, to provide counsel; or developing a list, or “panel,” of private attorneys who accept a predetermined fixed rate and from which the court appoints as defense counsel when needed. Further, depending on the state, funding for the indigent defense system is provided by the state, localities within the state, or a combination of state and local funding. According to DOJ, as of its most recent census of public defender offices issued in 2007, 22 states have established statewide—and state funded—public defender agencies to provide indigent defense in which a central office oversees the operations, policies, and practices of all public defender offices located in the state. In another 27 states, local jurisdictions—largely counties—are responsible for providing and, in whole or in part, funding, indigent defense services. The remaining state funds 100 percent of its indigent defense services, which are provided by assigned counsel, but does not have city, county, or state public defender offices. Unlike states, tribes—which retain limited, inherent sovereignty—are not bound by restraints placed upon the federal or state governments through the Bill of Rights or other amendments to the U.S. Constitution, including the Sixth Amendment’s right to counsel provision. However, the Indian Civil Rights Act of 1968 (ICRA), as amended, limits the extent to which tribes may exercise their powers of self-government by imposing conditions on tribal governments similar to those found in the Bill of Rights to the U.S. Constitution. 
For example, ICRA extends the protections of free speech, free exercise of religion, and due process and equal protection under tribal laws. Among other protections afforded under ICRA, tribes must also afford a defendant the right to be represented by counsel at his or her own expense, and, as amended, the right to be provided counsel at the tribe’s expense if a sentence of imprisonment for more than 1 year is sought. DOJ and, in the case of tribes, DOI are the primary federal agencies that play a role in supporting indigent defense. First, DOJ, as the agency responsible for ensuring the fair and impartial administration of justice for all Americans, works to provide support to all participants in the justice system. Further, in a June 2010 speech before a nonprofit organization dedicated to protecting the rights of individuals in North Carolina, the Attorney General identified a crisis in the criminal defense system, and stated the department’s commitment to focusing on indigent defense issues and developing and implementing solutions. Within DOJ, two components provide services that could support indigent defense providers: the Access to Justice Initiative (ATJ) and the Office of Justice Programs (OJP). Established in March 2010 to address criminal and civil access to justice issues, ATJ is charged with helping the justice system efficiently deliver outcomes that are fair and accessible to all, irrespective of wealth and status. ATJ staff work within DOJ, across federal agencies, and with state, local, and tribal justice system stakeholders to increase access to counsel and legal assistance and to improve the justice delivery systems that serve people who are unable to afford lawyers. According to DOJ, ATJ comprises seven staff and, in fiscal year 2011, had a budget of $1.27 million. ATJ staff focused their efforts on indigent defense as well as a range of pressing criminal and civil access to justice issues, including foreclosure and veterans’ affairs.
OJP works in partnership with the federal, state, local, and tribal justice communities—which include indigent defense providers—to identify the most pressing crime-related challenges confronting the justice system; to provide training, coordination, and innovative strategies and approaches for addressing these challenges; and to provide grant funding for implementing these strategies. Within OJP, several bureaus provide research, technical assistance, and funding that could support indigent defense providers. Specifically, the National Institute of Justice (NIJ) seeks to provide objective, independent, evidence-based knowledge and tools to meet criminal justice challenges, particularly at the state and local levels. Among other things, NIJ funds research and development, assesses programs and policies, and publicizes its findings. In addition, the Bureau of Justice Statistics (BJS) serves as DOJ’s primary statistical agency, collecting, analyzing, publishing, and disseminating information on criminal justice systems. Finally, both the Bureau of Justice Assistance (BJA) and the Office of Juvenile Justice and Delinquency Prevention (OJJDP) provide training, technical assistance, and grant funding designed to enhance and support the criminal and juvenile justice systems, respectively. Such funding may be awarded through formula grants, which are awarded on a noncompetitive basis generally using statutorily defined calculations, or discretionary grants, for which applicants generally compete for funding. Second, within DOI, BIA is responsible for supporting tribes in their efforts to ensure public safety and administer justice as well as to provide related services directly to, or through contracts or compacts with, federally-recognized tribes. These services include law enforcement, detention, and tribal court programs. In addition, the Division of Tribal Justice Support for Courts within BIA’s Office of Justice Services works with tribes to establish and maintain tribal judicial systems. 
This includes conducting assessments of tribal courts and providing training and technical assistance on a range of topics, including establishing or updating law and order codes. Further, tribal courts may receive funding through BIA’s Tribal Priority Allocations (TPA). All federally-recognized tribes are eligible to receive TPA funds—either through contracts or compacts—for operating tribal programs and, in general, these funds are available for use to provide basic tribal services, such as social services, child welfare, natural resources management, and tribal courts. DOJ and BIA provide funding, training, and technical assistance that could support indigent defense, which may help to address challenges that public defenders face. Specifically, public defender offices or agencies that responded to our survey most frequently reported that obtaining adequate funding (75 of 106, or 71 percent) and providing appropriate compensation for their attorneys (77 of 107, or 72 percent) were extremely or very challenging to the ability of their office or agency to provide indigent defense services. For instance, one survey respondent explained that their office’s best attorneys leave to pursue positions offering higher compensation. DOJ makes funding available through formula and discretionary grants that could be used for indigent defense. Specifically, we identified 13 grant programs that DOJ administered from fiscal years 2005 through 2010 that recipients could use for this purpose. Three of these programs—the John R. Justice Program (JRJ), Capital Case Litigation Initiative (CCLI), and Juvenile Indigent Defense National Clearinghouse Grant (JIDNC)—required recipients to allocate or use funding for indigent defense, either because of its authorizing statute or requirements that DOJ set in its grant solicitation. 
In addition, a fourth program—the Wrongful Conviction Review Program (WCR)—limits eligibility for funding to nonprofit organizations, as well as public defender offices, that represent convicted defendants (who are, according to DOJ, indigent) in claims of innocence. As a result, for the purposes of our review, we consider the WCR grant to require that funding be used for indigent defense. See table 1 for a description of the DOJ grant programs that require that funding be used for indigent defense. DOJ also administered nine grant programs from fiscal years 2005 through 2010 that recipients could choose to allocate or award to indigent defense, but were not required to do so. In five of these nine programs—the JAG, JABG, Tribal Juvenile Accountability Discretionary Grant (TJADG), Byrne Competitive Grant Program, and the Tribal Civil and Criminal Legal Assistance Grant (TCCLA)—DOJ identified indigent defense as a priority or specific purpose of the grant. It did so by identifying indigent defense either as a purpose area, a stated priority in its grant solicitation, a specific category in the grant, or as a national initiative. According to DOJ, it established indigent defense as a priority to encourage spending in this area. See tables 2 and 3 for a description of the DOJ grant programs that do not require funding to be used for indigent defense, although recipients can use funds for such purposes. In addition, we determined that BIA funding could be used for indigent defense. According to BIA officials, if tribes choose to use BIA funding for this purpose, this funding would come from the tribes’ Tribal Courts TPA, which are distributed pursuant to contracts or compacts. Through these contracts and compacts, tribes, rather than BIA, determine the best use of their funds. 
BIA does not specify requirements for spending levels on particular tribal court services, including indigent defense, because the nature of tribal sovereignty precludes BIA from placing requirements on how tribes spend their TPA funding. Tribes allocated a total of approximately $22 million through their Tribal Courts TPA in each fiscal year from 2005 through 2010. Further, within DOJ, BJA announced a new solicitation in April 2012 that focuses on helping indigent defense systems adhere to principles established by the American Bar Association (ABA) for public defense delivery systems. These principles, approved in 2002, were created as a practical guide for those creating and funding new, or improving existing, public defense delivery systems. The principles include the fundamental criteria necessary to design a system that provides effective, efficient, high-quality, ethical legal representation for indigent defendants in which defenders have no conflicts of interest, such as representing two defendants in the same case. According to DOJ, BJA will award $1.4 million of new discretionary grant funding to support projects that help make achievement of these principles a reality. According to officials from BJA and ATJ involved in developing the solicitation, BJA and ATJ staff worked closely together to develop the grant, and also conducted outreach to indigent defense advocates to determine the type of assistance that would benefit public defenders. The officials explained that the grant will be flexible enough that a diverse group of public defender offices will be eligible to apply for funding because it will allow both less developed and more developed offices to identify areas for improvement in adhering to the ABA principles. In addition to funding, DOJ, BIA, and the Administrative Office of the U.S. 
Courts (AOUSC) provide training and technical assistance to indigent defense providers. For instance, BJA’s National Training and Technical Assistance Center (NTTAC) accepts requests for and provides training and technical assistance to state, local, and tribal criminal justice stakeholders. DOJ has also awarded funding to the ABA to convene a focus group of 18 successful reformers from across the country to develop strategies for reforming indigent defense systems. In its January 2012 report to DOJ, the group suggested measures DOJ could take to improve indigent defense, including providing funding for programs that bring training and resources to regions that are most in need, among other things. In addition, BJA has a cooperative agreement with American University to provide technical assistance to criminal courts, including indigent defense providers. For instance, American University conducted a workshop on improving the criminal case process with the Texas Indigent Defense Board. Furthermore, DOJ (through ATJ and the U.S. Attorney’s Offices), BIA, and AOUSC’s Office of Defender Services have partnered to develop the Tribal Court Trial Advocacy Training Program, which will consist of a series of trainings for tribal court personnel, including defenders, prosecutors, and judges. The first such training occurred in August 2011, and the second in March 2012. According to DOJ and BIA officials, an additional six trainings are planned through January 2013. A BIA official responsible for conducting the trainings stated that the trainings have resulted in court personnel coming together to create an improved tribal justice system. Recipients of the four grant programs requiring funds to be used in whole or in part for indigent defense from fiscal years 2005 through 2010 allocated or planned to use $13.3 million out of $21.2 million—or 63 percent—of available funds for state, local, and tribal indigent defense. 
For instance, 19 of 33 grantees of the Capital Case Litigation Initiative used $3.3 million to provide training to indigent defense attorneys who handle death penalty cases. See appendix II for additional information about the allocation and use of these grants for indigent defense. However, two-thirds or more of the survey respondents who were recipients of the DOJ formula grants for which indigent defense was not a required use or Tribal Courts TPA distributions reported that they did not allocate funding for indigent defense, partly because of other competing priorities, such as law enforcement needs. As shown in figure 1, survey respondents for JAG State Administering Agencies (SAA)—the designated agencies in each state that establish funding priorities and coordinate JAG funds among state and local justice initiatives—and survey respondents for Tribal Courts TPA distributions more frequently reported allocating funds for indigent defense than JJDP, JABG, or local and tribal JAG survey respondents. (See appendix III for additional information on the percentages of respondents who reported allocating nondiscretionary funding for indigent defense.) Similarly, as displayed in figure 2, our analysis of discretionary grants showed that no more than 25 percent was awarded for indigent defense from fiscal years 2005 through 2010. For example, the percentage of grants that was awarded fully or in part for indigent defense ranged from 1.3 percent for the Justice and Mental Health Collaboration program to 25 percent for the TCCLA grant. (See appendix V for additional information on the number of discretionary grants awarded for indigent defense.) Those recipients who chose to allocate or use funding for indigent defense generally reported providing a small amount of funding for indigent defense relative to their total awards. 
Specifically, the award amounts reported by JAG, JABG, JJDP, and Tribal Courts TPA recipients who allocated funding to indigent defense ranged from 2 percent of the total award (in the JJDP program) to 14 percent (in the JABG program). Similarly, in our review of discretionary grants, awards for indigent defense were generally small relative to total awards, ranging from at most 0.4 percent of the total award (in the Justice and Mental Health Collaboration and Drug Court Discretionary Grant programs) to at most 8.1 percent (in the Tribal Court Assistance Program). Figure 9 shows indigent defense allocations as a percentage of these survey respondents’ total awards, in current dollars unadjusted for inflation, while figure 10 shows discretionary awards for indigent defense as a percentage of total awards, in current dollars. See appendix IV for additional details about these allocations and awards by year. Recipients most frequently reported using indigent defense funding for personnel and training, which may help to address challenges that public defenders face. More specifically, public defenders that responded to our survey most frequently reported that financial challenges very greatly or greatly impacted their ability to increase compensation for people working in the indigent defense system, hire additional attorneys, travel to or register for external training, and hire clerical support or investigators. Further, indigent defense providers we spoke with during a panel discussion at the National Legal Aid and Defender Association’s annual conference confirmed this position, stating that critical funding needs for public defender programs included personnel—both attorneys and support staff—as well as training. 
JABG and JJDP recipients who had allocated funding for indigent defense most commonly reported using this funding for training and personnel, and our review of discretionary grants found that indigent defense funding was generally used for these same purposes. Figures 11 and 12 illustrate the most frequently reported uses of the grants. Similarly, selected recipients of JAG and BIA Tribal Courts TPA distributions with whom we spoke most commonly reported using funding for personnel. In terms of personnel, grantees funded both attorneys and support staff, including social workers, investigators, or substance abuse and treatment specialists. Support staff help to conduct investigations, process clients as they come in for assistance, or address needs clients have beyond their court case, such as challenges with substance abuse, mental health, employment, or housing. For instance, one JAG grantee reported that funds were used to hire an attorney in a county public defender office to represent veterans in the criminal court systems. The attorney represents the veterans at the county’s veterans’ court and also conducts significant outreach to treatment providers in the county to help ensure veterans can obtain any additional treatment they may need. In terms of training, grantees funded activities that included instruction on juvenile law and technology. Moreover, one grantee—The Bronx Defenders—received a Byrne Competitive grant to provide technical assistance to other public defender organizations on the public defense model they use, known as holistic defense (see sidebar). For grant programs that require funding be used at least in part for indigent defense, DOJ collects data on whether recipients have allocated funding for indigent defense and the allocation amounts, which allows DOJ to determine if funding was used in accordance with grant requirements. For instance, in the John R. 
Justice Program, which funds student loan repayments for public defenders and prosecutors, DOJ collects data on the number and amount of loan repayments made to state and local public defenders. In addition, for the Capital Case Litigation Initiative, DOJ can determine the amounts allocated for indigent defense because the grant funds must be allocated equally between prosecution and defense. In addition, DOJ collected data on indigent defense funding for the Byrne Competitive and TCCLA grants when funding for indigent defense was a priority. Specifically, in 2009, when hiring public defenders was a national initiative for the Byrne Competitive grant, DOJ collected data on awards by national initiative and, thus, collected data on grant funding awarded for indigent defense under this initiative. Similarly, for the TCCLA grant, DOJ collects data on the grant category under which awards are made and, therefore, can identify funding awarded under the grant category related to indigent defense. Further, DOJ has developed mechanisms to collect data on whether JAG recipients have allocated funding for indigent defense and the amount allocated. First, so that DOJ can determine the number of grantees that are using funds for a particular purpose, when applicants apply for JAG funding, DOJ allows them to identify which of the more than 150 “project identifiers” best describe the proposed activities for which they plan to use the funding. According to BJA officials, DOJ created an indigent defense “project identifier” in fiscal year 2011 to better track indigent defense spending, given that indigent defense is listed as a priority in the fiscal years 2010 and 2011 JAG solicitations. Further, in its fiscal year 2011 solicitation, DOJ required that JAG applicants identify up to 5 project identifiers to catalogue their allocations. 
According to DOJ grant program officials, they established this requirement because they wanted to be able to track grantees’ uses of the funds and respond to questions from Congress and others about these uses. DOJ officials stated that they limited the requirement to 5 project identifiers to help ensure that the identifiers selected were most representative of the projects being funded. BJA also noted that, during its application review, BJA staff have the option to select additional project identifiers that would assist in the description and tracking of projects being funded. Second, as part of its efforts to revise JAG performance measures, which were made partly in response to our review, BJA has drafted a performance measure for the amount of funding spent on defense. Moreover, DOJ recently improved its efforts to collect data on the extent to which JAG grantees have allocated funds for indigent defense, but DOJ does not collect data on whether JABG or TJADG grantees have allocated funding for this purpose. According to an OJP official responsible for the grant system, all project identifiers are available to any OJP grantees, including JAG as well as JABG and TJADG grantees. However, unlike the JAG program, JABG and TJADG grantees are not required to identify project identifiers to describe proposed project activities when applying for funding; therefore grantees may choose not to use them. Moreover, 3 of the 5 JABG survey respondents who reported allocating funding for indigent defense with whom we spoke reported that they were unaware of an indigent defense project identifier, and an additional respondent reported being aware, but unlikely to use it. DOJ officials responsible for the JABG program explained that they collect data on grantees’ allocation of funds for the JABG purpose area that includes hiring court-appointed defenders, but the data are not detailed enough to identify allocations specifically for indigent defense. 
Moreover, the officials noted that this purpose area is only 1 out of 17 JABG purpose areas. Similarly, the purpose area that includes hiring court-appointed defenders is only 1 out of 17 TJADG purpose areas, and TJADG data we obtained from DOJ did not identify whether funding was awarded for indigent defense. In addition, JABG and TJADG applicants have not been required to identify project identifiers because, according to an official from OJJDP, the office that administers the programs, OJJDP was not aware that the project identifiers available to JAG grantees could also be used by OJJDP staff and grantees. We have previously reported that agencies should collect sufficiently complete, accurate, and consistent data to measure performance and support decision making at various organizational levels (GAO/GGD-96-118). Given that the Attorney General has identified a crisis in criminal defense and committed the department to focusing on indigent defense issues and developing and implementing solutions, collecting data on whether grantees have allocated or awarded funding for indigent defense could help DOJ better assess whether funding is supporting this commitment. According to the Office of Management and Budget (OMB), performance measurement indicates what a program is accomplishing and whether results are being achieved. For all DOJ grant programs that either require funding be used for indigent defense or identify it as a priority, DOJ has or is developing indigent defense-related performance measures and requires grantees to provide data to inform these measures. For example, for the Byrne Competitive, TCCLA, TJADG, and JABG programs—where DOJ has prioritized indigent defense-related funding—DOJ requires grantees to report indigent defense-related measures such as the number of public defenders hired. 
In the JAG program, DOJ is developing a measure—the number of cases defended—in its revisions to its online performance measures. For a list of performance measures DOJ uses to assess the impact of funding used for indigent defense-related activities, see appendix VI. OMB guidance outlines four types of performance measures agencies may use to assess program impact: those that describe the level of activity that will be provided over a period of time (output measures), those that describe the intended result of carrying out the program (outcome measures), those that indicate how well a procedure, process, or operation is working (process measures), and those that describe the resources used to produce outputs and outcomes (input measures). While each type of measure provides information that can help assess the impact of the program, OMB also states that appropriate performance goals should, among other things, focus on outcomes, but use outputs when necessary. In addition, OMB strongly encourages the use of outcomes because they are more meaningful to the public than outputs. As we have previously reported, developing output measures is a step toward developing outcome measures, and an important initial step in measuring progress. However, we have also previously reported that leading organizations promote accountability by establishing results-oriented outcome goals and corresponding performance measures by which to gauge progress toward attaining these goals. We found that all nine of the DOJ grant programs that required or prioritized funding for indigent defense included output measures that described the level of grant activity. In addition, seven of the nine grant programs included outcome-oriented performance measures that described the intended results of the program. 
For example, for the Juvenile Indigent Defense National Clearinghouse Grant, DOJ developed the outcome measure “percentage of people exhibiting increased knowledge of the program area,” which demonstrates a clear linkage to the program goals to improve juvenile indigent defense, to build the capacity of the juvenile indigent defense bar, and to promote the zealous and effective advocacy for juvenile indigent defendants. See appendix VI for our more detailed analysis of the measures used. The John R. Justice and JAG programs do not include indigent defense-related outcome-oriented performance measures that gauge impacts or results. However, DOJ requires states that receive John R. Justice funding to submit, at the conclusion of the grant, an assessment of the program’s impact on retention of prosecutors and defenders in the state, which would allow DOJ to assess whether the program is achieving intended results. DOJ officials explained that they required grantees to submit this assessment because the DOJ Inspector General is required in the statute establishing the John R. Justice program to report on the program’s impact on the retention of prosecutors and public defenders. As a result, DOJ decided to require these assessments from recipients in order to provide this information to the Inspector General, if requested. In addition, according to DOJ officials, they have not developed indigent defense-related outcome-oriented performance measures for the JAG program because the ways in which JAG funds can be used vary significantly across and within the seven purpose areas, making the development of outcome-oriented measures that could capture the intended results of the program difficult. Even among grantees that allocated funding for indigent defense, the purposes varied significantly. 
For instance, one JAG grantee with whom we spoke reported using indigent defense-related JAG funds to update the case management system at a public defender’s office, for which an outcome measure could be the increase in the efficiency with which cases are handled, while another used the funds to pay for an attorney, for which an outcome measure could be the decrease in the number of cases each attorney handles. Further, OMB has acknowledged that developing performance measures for programs that, like JAG, address multiple objectives and support a broad range of activities can be challenging. For programs that focus funds on specific purpose areas, as JAG does, OMB states that agencies can address the challenge by articulating national goals, and then working with state and local entities to identify specific objectives and measures linked to the national goals that the grantee will address. However, because indigent defense is not a JAG purpose area, such a solution would not result in indigent defense-related measures. In addition, DOJ officials stated that asking grantees to develop measures would place an additional reporting burden on the grantees. DOJ officials also stated that while they do not have indigent defense-related outcome-oriented measures in the JAG program, they do ask grantees to report on what they have accomplished with their grants. Like an outcome measure, this could allow DOJ to assess the results of the program. With its indigent defense-related performance measures, DOJ can assess the impact of a grantee’s use of funds, such as whether the funding resulted in an increase in the number of defenders hired. However, its assessments are not intended to evaluate the effectiveness of a grantee’s, or any indigent defense provider’s, programs or services, such as whether the defender’s ability, training, and experience match the complexity of the case, which is one of ABA’s principles for public defense delivery systems. 
Instead, as we have previously reported, evaluations may be used to assess a program’s effectiveness, identify how to improve performance, or guide resource allocation. Moreover, evaluation can play a key role in strategic planning and in program management, providing feedback on both program design and execution. For example, respondents to our survey of public defender offices and agencies reported using evaluations for purposes such as enacting system improvements (8), supporting funding requests (8), and addressing caseload issues (4), among others. Of the 118 public defender offices or agencies that responded to our survey, 9 provided us with copies of evaluations that they had conducted of their office or agency or that another entity conducted, such as a consultant or oversight body. For example, one evaluation—conducted by the oversight committee for a local jurisdiction’s indigent defense services—collected data and used it to assess compliance with local indigent defense standards. The evaluation considered professional independence; attorney qualifications; training; supervision; workload; performance evaluation and discipline; support services; case management and quality control; and reporting. Appendix VII provides additional details on these evaluations. Sixty-two percent (68 of 109) of public defender offices or agencies that responded to our survey reported that no evaluation had been conducted of their office or agency. Respondents who reported reasons for not conducting an evaluation most frequently cited lack of personnel (46 percent, 29 of 63) and lack of expertise and/or the need for technical assistance (43 percent, 27 of 63) as the reasons. Moreover, respondents identified challenges to collecting data on factors that affect their ability to provide indigent defense services—information that could be used to conduct an evaluation. 
For example, 50 percent (59 of 118) reported that the amount of face-to-face time a public defender spends with a client—a potential indicator of effectiveness—is difficult or burdensome to collect, and data on client satisfaction are also costly to collect (18 percent, 21 of 118), difficult to measure (47 percent, 56 of 118), and imprecise (32 percent, 38 of 118). However, respondents also reported currently collecting data on factors that DOJ and indigent defense stakeholders report could affect the quality of indigent defense services. For example, according to BJA, managing defender workloads is important to ensuring that the administration of justice is fair and equitable, and quality of service may be impacted when public defenders are forced to manage too many clients with inadequate resources. According to respondents, data currently being collected includes both average caseload per public defender (86 percent, 96 of 111), and the number of active cases per public defender (84 percent, 94 of 112). Respondents also reported collecting data on average salary or hourly rate of public defenders (76 percent, 83 of 109)—a factor that indigent defense stakeholders have identified as relevant to the ability to attract and retain qualified attorneys. Appendix VIII provides additional information on the extent to which public defender offices or agencies reported collecting data that could be used to conduct an evaluation and the associated challenges or limitations of the data. DOJ has mechanisms that could help to address some of the evaluation challenges that indigent defense offices and agencies reported, including a lack of expertise or the need for technical assistance. For instance, BJA provides technical assistance with evaluation through its Center for Program Evaluation and Performance Measurement website, and makes technical assistance available to SAAs and BJA applicants, among others. 
Further, DOJ reported that it has funded more narrowly scoped, nongeneralizable case studies intended to help inform a broader study of indigent defense and provide insights, as well as available resources, for criminal justice stakeholders. For instance, from fiscal years 2005 through 2010, the period of our review, the National Institute of Justice (NIJ) funded one study that described how outcomes differed when murder defendants were represented by public defenders versus court-appointed private attorneys in one city. The study found that, in the city evaluated, there were significant differences between the two groups on several dimensions. Specifically, defendants represented by the public defender office had shorter average sentences, were less likely to receive a life sentence, had less expected time served, and were more likely to plead guilty. NIJ also hosted a 2010 symposium on indigent defense (see http://nij.gov/nij/topics/courts/indigent-defense/2010-symposium/welcome.htm for information about the symposium); the 2010 symposium was an update to a 1999 National Symposium on Indigent Defense, after which a report was released (see http://www.sado.org/fees/icjs.pdf for this report). Further, NIJ and ATJ convened a workshop to examine domestic and international best practices for indigent defense and to develop an agenda on criminal indigent defense research. In addition to providing suggestions for future research, the workshop produced a report containing 40 recommendations to ATJ and NIJ. Among the major themes highlighted in the report was participant support for evidence-based research on indigent defense, including evaluation of successful domestic and international practices. DOJ officials stated that this report was completed consistent with NIJ’s practice of documenting conference proceedings and gathering information from stakeholders. 
Further, DOJ has taken steps to identify characteristics of model programs, including awarding grant funding to the National Criminal Justice Association to identify innovative use of JAG funds to support indigent defense, among other criminal justice areas. Moreover, in addition to the studies it has conducted or funded in the past, in February 2012, NIJ issued a solicitation—Social Science Research on Indigent Defense—that seeks applications for research on the fundamental issues surrounding access to legal services and the need for quality representation at the state and local level. Proposed topics include three areas: juvenile and adult defendants’ waiver of their right to counsel and the importance of defense team members in indigent defense cases—issues indigent defense stakeholders identify as impediments to effective representation—as well as other research focused on important issues surrounding indigent defense. The solicitation—which will provide up to $1 million for research projects—will fund a rigorous, scientific study that will identify barriers defendants commonly face in securing effective representation. DOJ officials stated that they developed this solicitation after conducting a review of existing indigent defense research, which they used to identify the areas of research they believed to be most important. Given that public defenders in our survey reported the need for assistance in conducting evaluations, NIJ’s study could help provide these defenders with the information, framework, and tools to conduct such evaluations, and identify factors that may affect the provision of indigent defense services. Identifying a crisis in the nation’s criminal defense system, DOJ has stated its commitment to focusing on indigent defense issues and developing and implementing solutions. 
Moreover, both DOJ and BIA have undertaken efforts to assist state, local, and tribal indigent defense providers in overcoming barriers to providing effective indigent defense services. However, consistent with OJP’s commitment to identify and address the most pressing challenges confronting the justice system and BIA’s authority to support the development, enhancement, and continuing operation of tribal justice systems, they could do more to meet the needs of indigent defense providers. Specifically, by increasing awareness among JAG, JABG, and JJDP grantees, as well as indigent defense providers, that funding is available for indigent defense, DOJ could be in a better position to ensure that eligible grantees are aware that they can access federal funding to help address their needs. In addition, by increasing awareness among recipients of Tribal Courts TPA distributions that funding can be used for indigent defense, BIA could better help tribes enhance all aspects of their criminal justice system. DOJ collected data on the amount of funding allocated for indigent defense for the Byrne Competitive and TCCLA grants when funding for indigent defense was a priority, and has developed mechanisms to do so in the JAG program, but does not consistently do so in the JABG and TJADG programs, where indigent defense is also a priority. Since DOJ seeks to focus on indigent defense issues and develop solutions, taking steps to collect data on allocations for indigent defense would position DOJ to better assess if it is meeting its commitment to indigent defense and help inform future funding priorities. 
To ensure that OJP is best positioned to identify and address critical needs in the indigent defense community, determine whether it has met its commitment to indigent defense, and improve accountability in grants administration, we recommend that the Assistant Attorney General of OJP take the following three actions: take steps to increase JAG, JABG, and JJDP grantees’ awareness that funding can be allocated for indigent defense; inform indigent defense providers about grants for which they are eligible to apply; and take steps to collect data on allocations and spending for indigent defense in the JABG and TJADG programs. To ensure that the Office of Justice Services is best positioned to support the development, enhancement, and continuing operation of tribal justice systems, we recommend that the Director of the Bureau of Indian Affairs take actions to increase awareness among recipients of Tribal Court TPA distributions that funding can be allocated for indigent defense. We provided a draft of this report for review and comment to DOJ and DOI. In addition, we provided relevant sections of the report to The Bronx Defenders and the Administrative Office of the U.S. Courts (AOUSC). DOI did not provide official written comments to include in our report. However, in an email received April 23, 2012, the DOI liaison stated that DOI concurred with our recommendation. We received written comments from DOJ, which are reproduced in full in appendix IX. In its written comments, DOJ concurred with the recommendations in this report. DOJ, The Bronx Defenders, and AOUSC also provided technical comments which we incorporated throughout the report as appropriate. DOJ identified several actions that OJP will take to implement the recommendations related to increasing JAG, JABG, and JJDP grantees’ awareness that funding can be allocated for indigent defense and informing indigent defense providers about grants for which they are eligible to apply. 
These actions include updating its "Frequently Asked Questions" document for grantees; communicating this information to grantees through email, technical assistance websites, and during national meetings; and working with national organizations, such as ABA and NLADA, to disseminate information on available funding to indigent defense providers through conferences, meetings, emails, newsletters, and publications. Increasing grantee awareness that funding can be allocated for indigent defense could help DOJ better ensure that it meets its commitment to supporting indigent defense. OJP's proposed steps, if implemented across eligible grant programs, should address the intent of our recommendations. With regard to the recommendation that OJP take steps to collect data on allocations and spending for indigent defense in the JABG and TJADG programs, we originally included language in the recommendation that described examples of actions OJP could take to collect such data. Specifically, we stated that such actions could include increasing JABG and TJADG applicants' awareness of the indigent defense project identifier, to ensure more consistent use of the identifiers and allow DOJ to collect data on allocations of the grants to indigent defense, and requiring JABG and TJADG grantees to select project identifiers. After sending the draft report to DOJ for comment, officials from OJP and ATJ stated that they plan to work together to determine internally the best way to collect data on allocations and spending for indigent defense in the JABG and TJADG programs, which could include the actions we identified in the original recommendation or other measures. Thus, they requested that we remove the language that described examples of how OJP could collect this data. We agreed that DOJ was best positioned to determine how to implement the recommendation and accordingly modified it by removing the language that described such examples.
OJP stated that, by September 30, 2012, OJJDP will determine the mechanism by which data on allocations and spending for indigent defense in the JABG and TJADG programs can best be collected. Collecting such data would position DOJ to better assess if it is meeting its commitment to indigent defense. We are sending copies of this report to the appropriate congressional committees. We are also sending copies to the Attorney General, the Secretary of the Interior, and the Director of the AOUSC. In addition, this report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8777 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix X. We addressed the following questions as a part of our review: 1. What type of support, if any, have the Department of Justice (DOJ) and Bureau of Indian Affairs (BIA) provided for state, local, and tribal indigent defense? 2. For fiscal years 2005 through 2010, to what extent was eligible DOJ and BIA funding allocated and awarded for indigent defense, what factors affected decisions to allocate and award funding for this purpose, and what actions have DOJ and BIA taken, if any, to address these factors? 3. When fiscal year 2005 through 2010 federal funding was allocated or awarded for indigent defense, how did it compare to the total allocations or awards made, and how did recipients use the funding? 4. To what extent does DOJ collect data on indigent defense funding when the grant program specifies that funds be allocated or awarded for this purpose or highlights it as a priority? 5.
When a grant program specifies that funds be spent for indigent defense or highlights it as a priority, to what extent can DOJ assess the impacts of this grant funding, and to what extent have there been evaluations of indigent defense programs and has DOJ supported these evaluation efforts? To determine what DOJ grant programs and BIA funding could be used to support state, local, and tribal indigent defense from fiscal years 2005 through 2010, we reviewed the Catalog of Federal Domestic Assistance, DOJ's website, and BIA's annual budget justifications. In addition, we spoke with public defenders and state and local government offices in selected states to determine whether there were additional grants they had applied for or received related to indigent defense that we had not already identified. We selected these states based on geographical location, the extent to which state and local government offices had received federal funding, and the structure of the state's indigent defense system. We also met with agency officials in DOJ's Office of Justice Programs (OJP), who are responsible for administering the programs, and BIA's Office of Justice Services, who provide support to tribal courts, to discuss the federal funding programs in more detail. Once our determinations were made, we sent OJP officials a list of funding programs to be included in our review, and asked for confirmation of this list. The DOJ grants included in our review were the Edward Byrne Memorial Justice Assistance Grant (JAG) Program; the Juvenile Accountability Block Grant (JABG); the Juvenile Justice and Delinquency Prevention Title II (JJDP); John R.
Justice Program; Byrne Competitive Program; Capital Case Litigation Initiative; Wrongful Conviction Review Program; Tribal Civil and Criminal Legal Assistance Program; Tribal Courts Assistance Program; Juvenile Indigent Defense Clearinghouse Grant; Tribal Juvenile Accountability Discretionary Grant; Juvenile Justice and Mental Health Collaboration Grant; and Adult Drug Court Discretionary Grant. The BIA funding included in our review was the Tribal Courts tribal priority allocation (TPA) distributions. We obtained records of all recipients of these grants from DOJ and by reviewing BIA's budget documentation. Further, we interviewed knowledgeable agency officials about the source of the grant data and the controls in place to maintain the integrity of the data and determined that the data were sufficiently reliable for our purposes. In addition, to determine what other assistance DOJ and BIA made available to support indigent defense, we interviewed DOJ and BIA officials responsible for training and technical assistance to identify assistance other than funding that the agencies provide to support indigent defense. To determine the extent to which state, local, and tribal governments allocated federal funding for indigent defense, the factors that influenced their decisions, and the amounts allocated, we conducted separate web-based surveys of all recipients of fiscal year 2005 through 2010 DOJ formula grants that could be allocated for indigent defense—the JAG, JJDP, and JABG grants—and tribal governments that received BIA Tribal Courts TPA distributions from fiscal years 2005 through 2010. To develop the survey questionnaires, we reviewed existing literature about the provision of indigent defense, and interviewed state JAG, JABG, and JJDP recipients, local JAG recipients, and tribes. We designed draft questionnaires in close collaboration with a GAO social science survey specialist.
We conducted pretests with five state and local JAG recipients, three JABG recipients, three JJDP recipients, two tribal JAG recipients, and two recipients of BIA Tribal Court TPA distributions to help further refine our questions, develop new questions, and clarify any ambiguous portions of the survey. We developed and administered the web-based questionnaires accessible through a secure server. We emailed each recipient a unique identification number and password, and a link to the questionnaire for their population. See table 4 for further details about the population, response rates, and generalizability of these surveys. Because our surveys of JABG and JJDP recipients included all members of those populations, our results for those programs are not subject to sampling error; given response rates of 82 and 89 percent, respectively, we consider our results generalizable to the populations of JABG and JJDP recipients. While we also included all eligible members of the target populations in our state and local JAG, tribal JAG, and Tribal Courts TPA surveys, because of their relatively low response rates and the possibility of other errors all questionnaire surveys face, our results represent only respondents participating in these surveys and should not be generalized to the populations. Specifically, certain members of these populations may have been more or less likely to respond to our survey, and this may affect our data. For instance, our data may overrepresent allocations for indigent defense because recipients that allocated funding for indigent defense may have been more likely to respond to our survey than recipients that had never done so. In addition, on the JAG survey, recipients of larger amounts of money and recipients of multiple years of funding were more likely to respond, but recipients of funding awarded solely pursuant to amounts appropriated through the American Recovery and Reinvestment Act were less likely to respond.
However, the responses provide insights into the extent to which JAG and BIA Tribal Courts TPA funding has been allocated for indigent defense. The practical difficulties of conducting any survey may introduce errors in estimates. For example, differences in how respondents interpret a particular question, differences in the sources of information available to respondents, or errors in entering data into a database or analyzing them can introduce unwanted variability into the survey results. We took steps in developing the questionnaire, collecting the data, and analyzing them to minimize these errors. In addition, as indicated above, social science survey specialists designed the questionnaire in collaboration with GAO staff that had subject matter expertise. We then conducted pretests to check that (1) the questions were clear and unambiguous, (2) terminology was used correctly, (3) the questionnaire did not place an undue burden on respondents, (4) the information could feasibly be obtained, and (5) the survey was comprehensive and unbiased. We made multiple contact attempts with nonrespondents during the survey by e-mail, and some nonrespondents were also contacted by telephone. When we analyzed the data, an independent analyst checked all computer programs. Since this was a web-based survey, respondents entered their answers directly into the electronic questionnaire, eliminating the need to key data into a database and minimizing error. We assessed the reliability of the funding allocation data respondents provided by including a series of questions pertaining to the accuracy of reported data in our survey and reviewing the data for obvious errors. Dollar amounts reported through our surveys, particularly by JAG respondents, have limitations and should be treated as estimates.
For instance, respondents may have had difficulty identifying the precise amount of funding allocated for indigent defense in the earlier years of our time frame (fiscal years 2005 through 2010), may have been unable to determine the exact amount of funding that was allocated for indigent defense if grant funds were used for multiple purposes, or may not have had sufficient information to provide total amounts allocated to indigent defense because they had not yet fully allocated their grant funds. In addition, our JAG data may overrepresent allocations to law enforcement because it was the first category listed in our survey and respondents that were unable to split their funding across purpose areas may have reported allocating all funding to law enforcement. To determine the extent to which DOJ awarded discretionary grants for indigent defense, we obtained project descriptions for all discretionary grants that could have been awarded for indigent defense in fiscal years 2005 through 2010 from DOJ. We then reviewed these descriptions to determine the recipients of the grant services. In addition to indigent defense providers, this included the following categories: civil defenders, criminal defenders, prosecutors, law enforcement providers, court offices, correctional agencies, crime victims, reentry service providers, juvenile delinquency prevention organizations, appellate defenders, drug treatment providers, community and public outreach providers, non-attorney staff, innocence projects, and universities. Many grants had multiple recipients of grant services. In these instances, we classified the grants to reflect all recipients; as a result, grants may be counted in more than one category. For each grant program we determined the number of grants used exclusively for indigent defense, the number of grants used for indigent defense with another grant recipient category, and the number and amount of grants that were not used for indigent defense.
Two analysts made these classifications in order to verify each other’s work. To determine what factors influenced public defenders’ decisions to apply for funding, we conducted a web-based survey of public defenders. To develop the survey questionnaire, we reviewed existing literature about the provision of indigent defense, and interviewed stakeholder groups knowledgeable about the provision of these services. We designed draft questionnaires in close collaboration with a GAO social science survey specialist. We conducted pretests with four public defenders to help further refine our questions, develop new questions, and clarify any ambiguous portions of the survey. We drew our survey sample from 841 public defender offices identified nationwide. To identify the population of public defender offices nationwide, we started with the Census of Public Defender Offices, which was conducted in 2007 by the Bureau of Justice Statistics (BJS). This Census collected data from all state and county funded public defender offices across the country. We further worked in partnership with the National Legal Aid and Defender Association (NLADA) to update existing contact information and identify additional offices that should be included. We drew a stratified sample of 253 of the 841 public defenders nationwide. From this population of 841 public defenders, we sampled 100 percent of: 22 state-level offices, 6 territories, 17 tribes, 52 public defender offices in major metropolitan areas, and 71 secondary offices not in major metropolitan areas. The remaining 85 public defenders were drawn within strata defined by region. The six strata are shown in table 5. We developed and administered the web-based questionnaire accessible through a secure server, and emailed unique identification numbers and passwords to the 253 public defenders beginning December 6, 2011. We sent follow-up e-mail messages beginning December 13, 2011, to those who had not yet responded. 
Then we contacted all remaining nonrespondents by telephone, starting January 5, 2012. The questionnaire was available online until February 29, 2012. We received 118 responses from the sample of 253, for an unweighted response rate of 47 percent. Because of this relatively low response rate, our results represent only respondents participating in our survey and should not be generalized to the population of public defenders; thus we report results based only on the respondents and do not present population estimates. However, the responses provide insights into the factors that influence public defenders’ decisions to apply for federal funding. We took steps similar to those in our grant recipient surveys when developing the questionnaire, collecting the data, and analyzing them to minimize errors. To determine what efforts, if any, DOJ has taken to address factors influencing recipients’ decisions to allocate funding for indigent defense, we interviewed DOJ and BIA officials responsible for each type of funding. We also reviewed DOJ’s guidance to recipients to determine the extent to which DOJ communicated that funding could be used for indigent defense programs. We compared this guidance against the grant and BIA statutes, and DOJ’s stated commitment to support indigent defense. To determine the allocation amounts and uses of formula grants and Tribal Courts TPA distributions that were allocated for indigent defense, we asked a question pertaining to funding amounts in our state and local JAG, JABG, JJDP, tribal JAG, and Tribal Courts TPA surveys described above and also performed follow-up interviews with select grant recipients to determine the purposes for which funds were used. 
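The response-rate arithmetic underlying the generalizability judgment above can be sketched as follows. The overall totals (253 sampled, 118 responding, a 47 percent unweighted rate) come from the survey described in this appendix; the per-stratum response counts are hypothetical, for illustration only.

```python
# Illustrative sketch of the unweighted response-rate check described above.
# Overall totals (253 sampled, 118 responding) are from the survey; the
# per-stratum response counts here are hypothetical.

def response_rate(responded, sampled):
    """Unweighted response rate, as a percentage."""
    return 100.0 * responded / sampled

# Strata sampled at 100 percent plus the regional draw; the second number in
# each pair (responses) is hypothetical.
strata = {
    "state-level offices": (22, 12),
    "territories": (6, 2),
    "tribes": (17, 5),
    "major metropolitan offices": (52, 30),
    "secondary offices": (71, 34),
    "regional draw": (85, 35),
}

sampled_total = sum(s for s, _ in strata.values())
responded_total = sum(r for _, r in strata.values())
overall = response_rate(responded_total, sampled_total)

# A rate this low is why the report treats results as non-generalizable and
# presents respondent-only figures rather than population estimates.
print(f"overall unweighted response rate: {overall:.0f}%")
```

The same calculation applied per stratum would show whether nonresponse was concentrated in particular office types, which is the kind of pattern the report flags for the JAG surveys.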
We contacted all 6 JABG and 7 JJDP recipients that reported allocating any fiscal year 2005 through 2010 funding for indigent defense, all 16 state and local JAG recipients that reported allocating fiscal year 2010 funding for indigent defense, and 7 recipients of BIA Tribal Courts TPA distributions that reported allocating any fiscal year 2005 through 2010 funding for indigent defense and asked them to describe how they used grants funds that were allocated for indigent defense. We conducted interviews with 5 of 6 JABG, 5 of 7 JJDP, 9 of 16 state and local JAG, and 7 of 20 recipients of BIA Tribal Courts TPA distributions that reported allocating to indigent defense. To determine the allocation amounts and uses of discretionary grants which were awarded for indigent defense, during our review of all project descriptions of DOJ discretionary grants that could have been awarded for indigent defense from fiscal years 2005 through 2010, in addition to determining the recipient of the grant services, we also determined the use for each grant. We identified the following possible uses: training, technical assistance, personnel, planning and evaluation, technology initiatives, equipment, case management, conflict counsel, outreach and public education, facilities, codes and legal rules, and representation from an outside source. For each grant, two analysts came to agreement on the categorization. As with the recipients of grant services, many grants had multiple uses. In these instances, we classified the grants to reflect all their uses; as a result, grants may be counted more than one time in our overall analysis. With this information we were able to provide the amount and use for all discretionary grants that were used all or in part for indigent defense. 
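The multiple-counting convention described above (a grant tagged with several service recipients or uses is counted once in each applicable category, so category totals can exceed the number of grants) can be sketched as follows; the grant records are hypothetical.

```python
from collections import Counter

# Sketch of the multi-label tally described above: each grant carries every
# applicable use category, and counts are accumulated per category.
# Grant records here are hypothetical.
grants = [
    {"id": "G1", "uses": {"training", "personnel"}},
    {"id": "G2", "uses": {"technical assistance"}},
    {"id": "G3", "uses": {"training", "technology initiatives", "personnel"}},
]

use_counts = Counter()
for grant in grants:
    use_counts.update(grant["uses"])  # one tick per applicable category

# Three grants, but six category ticks in total, because grants with
# multiple uses are counted in each category.
total_ticks = sum(use_counts.values())
```

Under this convention, summing the category counts overstates the number of distinct grants, which is why the report notes that "grants may be counted more than one time in our overall analysis."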
To determine the extent to which DOJ collects data on whether recipients allocate funds for indigent defense when such funding is required or highlighted as a priority, we reviewed all grant solicitations to determine whether DOJ required that funding be allocated or awarded for indigent defense or identified indigent defense as a purpose area or priority. For grants in which we found that it was, we spoke with DOJ officials about why they chose to do so. In addition, through document requests and interviews with DOJ officials, we asked the agency to provide information describing the extent to which it tracks how grantees have allocated funding, including for indigent defense, and how it does or could do so. We analyzed this information to ascertain the status of these efforts and the mechanisms available to conduct such tracking. As part of this analysis, we requested data from DOJ on its fiscal year 2011 JAG grantees because—beginning in fiscal year 2011—JAG grantees were required to select up to five project identifiers to indicate how their 2011 JAG funds would be used, and DOJ developed a project identifier for indigent defense. We compared these data with our survey results from JAG grantees to determine the extent to which grantees that indicated in our survey that they are likely to allocate funding for indigent defense also selected indigent defense as a project identifier in their fiscal year 2011 grant application, in order to assess the accuracy of the project identifier data. We compared DOJ's data collection efforts against our prior work on implementing the Government Performance and Results Act, which states that agencies should collect sufficiently complete, accurate, and consistent data to measure performance and support decision making at various organizational levels. (See also OMB, Performance Measurement Challenges and Strategies (Washington, D.C.: June 2003).)
Two analysts also independently reviewed performance measures DOJ established or is establishing for all grant programs in which indigent defense funding is required or a priority to assess whether the measures focused on the intended result of the program (were outcome-oriented). The analysts then met to discuss and resolve any differences in the results of their analysis. In addition, we spoke with DOJ officials about the feasibility of collecting performance measures for grant programs. To determine the extent to which evaluations have been conducted of indigent defense programs, and the extent to which DOJ has supported these evaluation efforts, we asked public defender offices and agencies in our survey whether an evaluation had been conducted of their office and the challenges associated with conducting such an evaluation. We reviewed the evaluations of the 9 respondents who reported they were willing to share them, but did not assess the quality of the evaluations or their results. In addition, we conducted a literature search of peer-reviewed journals using databases such as ProQuest, PolicyFile, and LexisNexis. In December 2011, we also held a listening session at a National Legal Aid and Defender Association conference where public defenders described challenges to conducting evaluations, among other topics. Finally, to identify actions DOJ has taken to evaluate indigent defense systems, we reviewed studies funded or conducted by DOJ and interviewed DOJ officials about its efforts to evaluate indigent defense systems. We conducted this performance audit from February 2011 to May 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our analysis based on our audit objectives.
We believe that the evidence obtained provides a reasonable basis for our analysis based on our audit objectives. The 17 public defender office or agency leaders who attended the listening session also discussed characteristics of model public defender programs; factors that affect the ability of public defenders to provide effective representation; and critical funding needs facing public defender programs. We observed their discussion, recorded the information shared, and reviewed the information to identify common themes. As table 6 demonstrates, grantees of the four programs that we determined require funding for indigent defense—the John R. Justice Program, Capital Case Litigation Program, Wrongful Conviction Review Program, and Juvenile Indigent Defense National Clearinghouse—have allocated or used these grants in accordance with grant requirements for indigent defense. Figures 13, 14, and 15 show the percentages of Edward Byrne Memorial Justice Assistance Grant (JAG); Juvenile Accountability Block Grant (JABG); Juvenile Justice and Delinquency Prevention Title II (JJDP); and Bureau of Indian Affairs (BIA) Tribal Courts Tribal Priority Allocation (TPA) Distribution survey respondents who reported allocating funding for indigent defense from fiscal year 2005 through 2010. As the figures demonstrate, the percentage was highest among JAG State Administering Agencies (SAA)—the state agencies that administer JAG funds—in receipt of grants awarded pursuant to amounts appropriated through the American Recovery and Reinvestment Act (ARRA). In addition, the percentage of JJDP and JABG recipients that reported allocating funding for indigent defense increased and decreased, respectively, in fiscal year 2009. Further, the percentage of BIA tribal courts survey respondents that reported allocating funding for indigent defense has increased over time. 
As figures 16, 17, and 18 illustrate, among survey respondents who reported allocating funding for indigent defense, allocations for indigent defense as a percentage of total awards reported by survey respondents were generally small, but varied slightly across time. For instance, in the JAG program, reported allocations as a percentage of total awards were highest among localities and tribes in fiscal years 2009 and 2010. In addition, reported allocations as a percentage of total awards were highest in the JABG program, but decreased over time. Further, reported allocations as a percentage of total awards among BIA Tribal Courts TPA recipients increased over time. However, these data, particularly the allocation amounts, have limitations and should be treated as estimates. For instance, respondents may have had difficulty identifying the precise amount of funding allocated for indigent defense in the earlier years of our time frame (fiscal years 2005 through 2010), may have been unable to determine the exact amount of funding that was allocated for indigent defense if grant funds were used for multiple purposes, or may not have had sufficient information to provide total amounts allocated to indigent defense because they had not yet fully allocated their grant funds. In addition, our JAG data may overrepresent allocations to law enforcement because it was the first category listed in our survey and respondents that were unable to split their funding across purpose areas may have reported allocating all funding to law enforcement. Table 7 shows, by DOJ discretionary grant, the number of grants awarded in whole or in part for indigent defense, this number as a percentage of total awards, total allocations for indigent defense, and these allocations as a percentage of total awards. These data, particularly the award amounts, have limitations.
For instance, we reviewed project descriptions that were based on grantees’ applications; however, the descriptions did not identify the amount of funding specifically planned for indigent defense. Therefore, the amounts reported represent the maximum possible awards for indigent defense. Figures 19 and 20 show the percentage of grants awarded in whole or in part for indigent defense as well as maximum possible awards for indigent defense as a percentage of total grant awards from 2005 through 2010. As figure 19 demonstrates, the percentage of grants awarded for indigent defense was highest for fiscal year 2010 TCCLA grants, and in the fiscal year 2009 Byrne Competitive grant program, when indigent defense was part of a national initiative. In addition, as figure 20 illustrates, awards to indigent defense as a percentage of total awards were highest in the Tribal Court Assistance Program, although they were decreasing over time. Figures 21 through 34 display the percentages of their total awards that Edward Byrne Memorial Justice Assistance Grant (JAG) State Administering Agencies (SAA)—state agencies that administer JAG funds—and local and tribal JAG recipients that responded to our survey reported allocating for all seven JAG purpose areas by fiscal year. The figures also include the percentage of their total awards that these respondents reported allocating for indigent defense. However, our data have limitations and should be treated as estimates. 
Specifically, respondents may have had difficulty identifying the precise amount of funding allocated for indigent defense in the earlier years of our time frame (fiscal years 2005 through 2010, including funding awarded pursuant to amounts appropriated through the American Recovery and Reinvestment Act (ARRA)), may have been unable to determine the exact amount of funding that was allocated for each purpose area if grant funds were used for multiple purposes, or may not have had sufficient information to provide total amounts allocated to each purpose area because they had not yet fully allocated their grant funds. In addition, the data may overrepresent allocations to law enforcement because it was the first category listed in our survey and respondents that were unable to split their funding across purpose areas may have reported allocating all funding to law enforcement. Finally, our data may overrepresent allocations for indigent defense because JAG recipients that allocated funding to indigent defense may have been more likely to respond to our survey than recipients that had never done so. As figures 21 through 27 illustrate, SAAs who responded to our survey reported allocating the largest proportion of funding to the law enforcement purpose area. As figures 28 through 34 demonstrate, localities and tribes that received JAG funding and responded to our survey reported allocating the largest proportion of their funding to the law enforcement purpose area. For all grant programs in which funding for indigent defense is required or prioritized—the John R.
Justice Grant Program; the Capital Case Litigation Initiative; the Wrongful Conviction Review Program; the Juvenile Indigent Defense National Clearinghouse; the Edward Byrne Memorial Justice Assistance Grant (JAG); the Juvenile Accountability Block Grant (JABG); the Byrne Competitive Grant Program; the Tribal Juvenile Accountability Discretionary Grant Program; and the Tribal Civil and Criminal Legal Assistance Grant Program (TCCLA)—the Department of Justice (DOJ) has developed or is developing indigent defense-related performance measures. Table 8 identifies these measures and whether the measures are output measures, or those that describe the level of activity that will be provided over a period of time; outcome measures, or those that describe the intended result of carrying out the program; input measures, or those that describe the resources used to produce outputs and outcomes; or process measures, or those that indicate how well a procedure, process, or operation is working. Of the 118 respondents to our survey of public defender offices or agencies, 9 provided us with, or directed us to, copies of evaluations of or reports on their office or agency. Table 9 shows select measures used in these evaluations or reports, and their findings and recommendations. Respondents to our public defender office survey reported collecting data that could be used to conduct an evaluation, but also reported challenges to collecting these data. Table 10 shows the percentage of survey respondents collecting each data element, and the associated challenges. In addition to the contact named above, Kristy N. Brown, Assistant Director; Jill Verret, Analyst-in-Charge; Heather Hampton; Christine Hanson; and Alicia Loucks made significant contributions to this report. Other key contributors were Michele Fejfar, Cynthia Grant, Thomas Lombardi, Lara Miklozek, Karen O’Connor, Carl Ramirez, Christine San, Jerome Sandau, and Janet Temko.
The Sixth Amendment to the U.S. Constitution guarantees every person accused of a crime the right to counsel. States and localities generally fund indigent defense services, and the Department of Justice (DOJ) also provides funding that can be used for these services. GAO was asked to review federal support for indigent defendants. This report addresses, for fiscal years 2005 through 2010, the (1) types of support DOJ provided for indigent defense; (2) extent to which eligible DOJ funding was allocated or awarded for indigent defense, the factors affecting these decisions, and DOJ’s actions to address them; (3) percentage of DOJ funding allocated for indigent defense and how it was used; (4) extent to which DOJ collects data on indigent defense funding; and (5) extent to which DOJ assesses the impacts of indigent defense grants, indigent defense programs have been evaluated, and DOJ has supported evaluation efforts. GAO surveyed (1) all 4,229 grant recipients about funding allocations and (2) a sample of 253 public defender offices about factors influencing their decisions to apply for funding. Though not all survey results are generalizable, they provide insights. GAO also analyzed grant-related documents and interviewed relevant officials. The Department of Justice (DOJ) administered 13 grant programs from fiscal years 2005 through 2010 that recipients could use to support indigent defense, 4 of which required recipients to use all or part of the funding for this purpose. DOJ also provides training to indigent defense providers, among other things. From fiscal years 2005 through 2010, recipients of the 4 grants that required spending for indigent defense allocated or planned to use $13.3 million out of $21.2 million in current dollars for indigent defense.
However, among the 9 grants that did not require allocations or awards for indigent defense, two-thirds or more of state, local, and tribal respondents to GAO’s surveys reported that they did not use funds for this purpose, partly due to competing priorities. DOJ has listed the grants on its website. However, no more than 54 percent of grantees or public defender offices responding to GAO’s surveys were aware that such funding could be used to support indigent defense. Taking steps to increase awareness would better position DOJ to help ensure that eligible grantees are aware that they can access federal funding to help address their needs. DOJ officials acknowledged that opportunities exist to enhance grantees’ awareness. When recipients allocated funding for indigent defense, the amount was generally small relative to the total award and most commonly used for personnel and training. For instance, among grant recipients who reported in GAO’s surveys that they had allocated funding for indigent defense, allocations as a percentage of total awards ranged from 2 percent to 14 percent. DOJ generally collects data on funding allocated for indigent defense when the grant program requires such funding or identifies it as a grant priority, but does not do so in two juvenile-focused grants. According to DOJ, it does not collect such data in these two programs because indigent defense is 1 of 17 purposes for which grant funds can be used. GAO has previously reported that agencies should collect data to support decision making, and the Attorney General has committed to focusing on indigent defense issues. Collecting data on the amount of funding from these two grants that is used to support indigent defense would position DOJ to better assess if it is meeting the Attorney General’s commitment. DOJ assesses the impact of indigent defense grant funding and has mechanisms to help indigent defense providers evaluate services.
All 9 of the DOJ grant programs that required or prioritized funding to be used for indigent defense included output measures that described the level of grant activity, such as the number of defenders hired, and 7 of the 9 included outcome measures that described the intended results of the funds, such as the percent increase in defendants served. Nine of the 118 public defender offices or agencies that responded to GAO’s survey provided GAO with a copy of an evaluation that had been conducted of their office; those that did not most frequently cited lack of personnel (28 of 62) and lack of expertise or the need for technical assistance (26 of 62) as the reasons. DOJ has mechanisms that could address these challenges. For instance, DOJ provides technical assistance through a website. GAO recommends that DOJ increase grantees’ awareness that funding can be allocated for indigent defense and collect data on such funding. DOJ concurred with the recommendations.
Ballistic missiles have different ranges—short, medium, intermediate, and intercontinental—as well as different speeds, sizes, and performance characteristics. Short-range ballistic missiles have a range of less than 621 miles; medium-range ballistic missiles have a range from 621 to 1,864 miles; intermediate-range ballistic missiles have a range from 1,864 to 3,418 miles; and intercontinental ballistic missiles have a range greater than 3,418 miles. As a result, MDA is developing a variety of systems that, when integrated, provide multiple opportunities to destroy ballistic missiles in flight for the strategic defense of the United States and regional defense of its deployed forces and allies. The BMDS includes space-based sensors; ground- and sea-based radars; ground- and sea-based interceptor missiles; and a command and control system that provides communication links to the sensors and interceptor missiles. Once a ballistic missile has been launched, these sensors and interceptors track and engage the threat missile during its flight, as shown in figure 1. When MDA was established in 2002, the Secretary of Defense granted it exceptional flexibility to set requirements and manage the acquisition of the BMDS in order to meet a presidential directive to deliver an initial defensive capability against ballistic missiles in 2004. This flexibility allows MDA to develop BMDS elements outside of DOD’s standard acquisition process until they are mature enough to be handed over to a military service for production and deployment. Because the BMDS’s entrance into DOD’s acquisition process is deferred, certain laws and policies that generally require major defense acquisition programs to take certain steps at certain phases in the DOD acquisition process do not yet apply to MDA.
For example, before a major defense acquisition program begins the product development phase, it must document key performance, cost, and schedule goals in an acquisition baseline that has been approved by a higher-level DOD official. This acquisition baseline is used to measure a program’s performance as it progresses. Specifically, as implemented by DOD, major defense acquisition programs’ baselines provide decision makers with key goals such as the program’s total cost for an increment of work, key dates associated with acquiring a capability, and the weapon’s intended performance. Additionally, once a baseline has been approved, DOD’s major defense acquisition programs are required to measure performance against their baseline and report certain changes to Congress. For instance, they are required to report certain increases in unit cost (cost divided by the quantity produced) measured from the original and the current program baseline. While this flexibility allows MDA latitude to manage the BMDS and enables it to rapidly develop and field new systems, we have previously reported that the agency has used this flexibility to employ acquisition strategies with high levels of concurrency (that is, overlapping activities such as testing and production), which increases the risk for performance shortfalls, costly retrofits, and test problems. We have also found that this flexibility has hampered oversight and accountability. According to MDA officials, MDA has taken some steps to identify and track concurrency in its programs. We have also reported that Congress has taken steps to improve the transparency and accountability of BMDS development efforts. For example, in the National Defense Authorization Act for Fiscal Year 2008, Congress required MDA to establish cost, schedule, and performance baselines for certain BMDS elements. MDA first reported baselines for several BMDS elements to Congress in its June 2010 BAR and has continued to report baselines annually.
Table 1 describes the six acquisition baselines MDA established and reports in its BAR for individual BMDS elements or major portions of such elements (GAO-12-486). Under 10 U.S.C. § 2432, procurement unit cost is defined, with respect to a major defense acquisition program, as the amount equal to (1) the total of all funds programmed to be available for obligation for procurement for the program divided by (2) the number of fully configured end items to be procured. Program acquisition unit cost is defined as the amount equal to (1) the total cost for development and procurement of, and system-specific military construction for, the acquisition program divided by (2) the number of fully configured end items to be produced for the acquisition program. Most recently, the National Defense Authorization Act for Fiscal Year 2012 amended MDA’s baseline reporting requirements. Specifically, the law currently requires MDA to report to the congressional defense committees certain changes or variances in the current baselines from the baselines presented in the prior year’s report and from when the baselines were initially established. Additionally, the act allows MDA to revise an initial baseline, which the agency refers to as a “revised initial baseline.” In 2010, MDA also established an acquisition process that continues to guide the development of the BMDS. Table 2 identifies the five life cycle phases of MDA’s acquisition process. The agency has documented the key knowledge that is needed prior to the technology development, product development, initial production, and production phases. For example, prior to entering initial production, an element must demonstrate that its design and manufacturing processes are stable, planned quantities are affordable, and developmental and operational test results show that the user’s needs will be met.
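The two statutory unit-cost measures differ only in which costs go into the numerator; a minimal sketch, using hypothetical dollar figures rather than actual MDA program data, illustrates the arithmetic:

```python
# Illustrative only: figures are hypothetical, not actual program data.

def procurement_unit_cost(procurement_funds: float, end_items: int) -> float:
    """10 U.S.C. 2432: total procurement funds divided by the number
    of fully configured end items to be procured."""
    return procurement_funds / end_items

def program_acquisition_unit_cost(development: float, procurement: float,
                                  milcon: float, end_items: int) -> float:
    """Total development, procurement, and system-specific military
    construction costs divided by fully configured end items produced."""
    return (development + procurement + milcon) / end_items

# Hypothetical program: $2.0B development, $4.5B procurement,
# $0.5B military construction, 500 interceptors.
puc = procurement_unit_cost(4.5e9, 500)
pauc = program_acquisition_unit_cost(2.0e9, 4.5e9, 0.5e9, 500)
print(f"Procurement unit cost:         ${puc:,.0f}")   # $9,000,000
print(f"Program acquisition unit cost: ${pauc:,.0f}")  # $14,000,000
```

Because program acquisition unit cost also counts sunk development and construction dollars, it is always at least as large as procurement unit cost for the same program, which is why the statute requires reporting breaches against both measures.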
In general, developmental testing is aimed at determining whether the system design will satisfy the desired capabilities, while operational testing is aimed at determining whether the system is effective, survivable, and suitable in the hands of the user under realistic conditions. Additionally, according to DOD policy, programs entering initial production or production require approval from DOD’s Under Secretary of Defense for Acquisition, Technology, and Logistics. Table 3 describes the BMDS elements and programs assessed in this report and their current MDA acquisition phase. In March 2013, in response to a growing threat from Iran and North Korea, the Secretary of Defense announced steps that affected the acquisition of the BMDS, including deploying 14 additional ground-based interceptors at Fort Greely; deploying a second AN/TPY-2 radar to Japan; and shifting the resources from development of the Aegis BMD SM-3 Block IIB interceptor, which was planned to be deployed after 2020 to defend against intercontinental ballistic missiles, to fund the additional ground-based interceptors, as well as to develop advanced technology to improve the performance of current and future versions of BMDS interceptors. In 2013, DOD canceled the Aegis BMD SM-3 Block IIB and the Precision Tracking Space System, citing concerns with the programs’ high-risk acquisition strategies and technical challenges that GAO had previously raised. We have previously reported that MDA did not consider a broad range of alternatives or fully assess program or technical risks before committing to either program. In addition, MDA altered its fiscal year 2013 acquisition plan to offset a funding reduction of $568 million (6.8 percent) in its total available budget for fiscal year 2013.
For example, in consultation with key stakeholders such as the Operational Test Agency and DOD’s Office of the Director, Operational Test and Evaluation, MDA revised its test plan by combining, delaying, and deleting tests to cut costs. MDA also delayed some development activities for various elements into fiscal year 2014 and beyond. In fiscal year 2013, MDA successfully executed several flight tests that demonstrated key BMDS capabilities and modifications made to resolve prior development issues, but continued to experience failures and delays resulting in less testing and production than planned. For the first time, in September 2013, the Aegis BMD and Terminal High Altitude Area Defense (THAAD) programs participated in an operational flight test that resulted in a near-simultaneous engagement. Additionally, the Aegis BMD program successfully conducted flight tests with the SM-3 Block IB missile. However, according to officials from DOD’s Office of the Director, Operational Test and Evaluation, Aegis BMD experienced a SM-3 Block IB missile failure that is currently being investigated and could result in a modification to a component that is common between the Aegis BMD SM-3 Block IA and IB. Lastly, GMD also successfully conducted a non-intercept test of its upgraded interceptor that is currently in development, but experienced an intercept test failure of the fielded interceptor, the cause of which is still unknown. Further details on the operational test are provided after table 4. In addition, because ongoing testing and development challenges increase the potential to affect the production of the SM-3 Block IB missile and to delay understanding of the capabilities and limitations of GMD’s fielded interceptor, we also provide additional details for these programs after the table. Table 4 presents a summary of key accomplishments and challenges for BMDS elements and programs that are in the BAR.
After more than 11 years of development of the BMDS, MDA conducted the first system-level operational missile defense flight test, called Flight Test Operational-01 (FTO-01), in September 2013. During the test, warfighters from several combatant commands employed multiple missile defense systems, including Aegis BMD and THAAD, to demonstrate the regional capabilities of U.S. missile defense. This is a significant achievement because it is the first operational test that involved multiple elements working simultaneously. To conduct this test, MDA invested in range assets and conducted other activities to ensure it could test multiple elements at once. For example, MDA conducted its first integrated system-level flight test, known as Flight Test Integrated-01, in October 2012 as a risk-reduction exercise for the operational test. During FTO-01, MDA launched two nearly simultaneous threat-representative medium-range ballistic missile targets, including its air-launched extended-medium range ballistic missile (eMRBM) target for the first time. This test was delayed for approximately one year, in part because of development problems associated with the eMRBM target. MDA also had to make some adjustments to the FTO-01 test plan because of fiscal year 2013 sequestration. Although MDA preserved its primary objective to demonstrate the interoperability of BMDS elements, it reduced the number of targets included in the test from five to two and removed participation of more mature elements such as the Patriot Advanced Capability-3. The BMDS elements successfully engaged the targets during the test, but according to independent testing officials, full system integration was not achieved. Specifically, according to DOD’s Director, Operational Test and Evaluation, the Aegis ship successfully intercepted one of the targets with a SM-3 Block IA and THAAD successfully intercepted a medium-range target for the second time.
In addition, as a planned demonstration of its layered defense, THAAD launched a second interceptor at the target intercepted by the Aegis ship as a contingency in the event the SM-3 Block IA did not achieve an intercept. However, DOD’s Director, Operational Test and Evaluation, also found that the test failed to achieve full integration between all systems due to challenges with system networks, limitations in elements’ ability to work together, and component failures. For example, the test uncovered several issues with communication networks that are needed for interoperability between all elements. Interoperability is important because it can improve missile defense effectiveness and enhance the performance of individual systems beyond what each achieves operating alone. The Aegis BMD SM-3 Block IB program largely overcame previous development challenges and successfully intercepted all targets in its last three flight tests, as shown in table 5. These tests are required for a full production decision—the last key production authorization by the Under Secretary of Defense, Acquisition, Technology, and Logistics that would allow MDA to produce the remaining 415 interceptors. However, a missile failure of the second interceptor launched during the September 2013 test could increase production risk if design changes are needed. As we found in April 2013, the SM-3 Block IB production line has been repeatedly disrupted since 2011 due to flight test anomalies caused by malfunctions in two separate sections of the third-stage rocket motor, and development challenges with the throttleable divert and attitude control system—components that maneuver the interceptor in its later stages of flight. These challenges delayed the SM-3 Block IB full production authorization by more than two years, to fiscal year 2015. After largely resolving these previous challenges, the program received permission in fiscal year 2013 to procure 33 additional initial production missiles.
Although MDA initially planned to award a contract for 29 SM-3 Block IB missiles in fiscal year 2013, it bought four additional missiles in August 2013 to recover an earlier reduction, which had been made to provide funds to resolve technical and production issues. Based on the successful intercepts in the last three flight tests, the program also received permission to buy 52 more interceptors in fiscal year 2014. Despite the three successful intercepts, the effect of the missile failure in September 2013 on the upcoming full production decision remains unclear. Before the program enters into full production, MDA’s acquisition management instruction requires it to demonstrate to the Under Secretary of Defense, Acquisition, Technology, and Logistics that there are no significant risks to production and that the planned production quantities are affordable and fully funded. The permission to enter full production is also based on independent assessments of the weapon’s effectiveness and suitability by DOD’s Director, Operational Test and Evaluation, and the Navy’s Commander, Operational Test and Evaluation Force. Although the failure investigation is ongoing, preliminary results indicate that the failure occurred in the third-stage rocket motor, a component common to the SM-3 Block IA, which is nearing the end of its production. Different issues with that same component have contributed to previous SM-3 Block IB schedule delays and production disruptions. While the precise cause of the September 2013 failure is under review, MDA documentation indicates that it could potentially result in design changes to the third-stage rocket motor and changes to manufacturing processes. Additionally, retrofits may be required for SM-3 Block IB and SM-3 Block IA interceptors that were already produced.
If design changes are necessary, program documentation indicates that they will not be flight tested until the fourth quarter of fiscal year 2015, just prior to the planned deployment of the SM-3 Block IB to support the regional defense of Europe and 6 months after its planned full production decision. Consequently, until the program thoroughly understands the extent of needed modifications, if any, and their effects on performance as demonstrated through testing, its production strategy is at risk of cost growth and schedule delays. MDA has experienced these consequences in other elements when it pursued design changes concurrently with production. Although the GMD program made progress in resolving a prior CE-II intercept failure, test failures and development challenges continue to disrupt the program and increase the cost to demonstrate the new CE-II. The GMD program first attempted to demonstrate the CE-II interceptor in January 2010 but subsequently experienced a number of setbacks in both the CE-II and the fielded CE-I, as seen in table 6 below. Developing a mitigation to the FTG-06a failure has proven more difficult than initially expected. The program initially planned to conduct FTG-06b in the third quarter of fiscal year 2012, but the test has since been delayed to at least the third quarter of fiscal year 2014 because of challenges resolving test failures. For example, while initial results from CTV-01 indicated the redesigned guidance system component could be used to resolve the problem that caused the FTG-06a failure, subsequent ground testing revealed that only one-third of those produced could be used in future interceptor production or flight tests because the component’s performance was uncertain. The program mitigated the issue by implementing software and hardware modifications and delivered the redesigned component for kill vehicle integration in October 2013.
However, according to MDA, the program experienced further delays in the FTG-06b test while it implemented changes based on assessments from the ongoing FTG-07 failure review. Consequently, confirmation that the CE-II design works as intended has been delayed by nearly seven years, and costs have increased by over $1 billion because of the CE-II development challenges and test failures. In July 2013, MDA conducted the FTG-07 developmental test to understand the performance of the fielded CE-I against a longer range target in more challenging conditions and assess the performance of upgrades. This interceptor was fielded before completing developmental testing, leading MDA to undertake retrofit efforts and upgrades to fix issues identified during testing. According to acquisition best practices, developmental testing should be complete before beginning production and fielding in order to, among other reasons, avoid the need for retrofits and upgrades to fix issues discovered during testing. The test failed, delaying understanding of the capabilities and limitations of upgrades to the fielded CE-I. Shortly after the test failure, the Director, MDA, stated that a failure review was initiated not only to identify the root cause of the failure, but also to provide a comprehensive review of potential CE-I failures and identify any correlations with the CE-II. Since then, according to program officials, MDA has identified a kill vehicle component common to both interceptors that could be associated with the FTG-07 failure. However, it remains unclear what, if any, design changes, retrofits, or other corrective actions to the CE-I or CE-II are necessary since the failure review is not complete. According to MDA officials, they have not determined whether they will re-conduct the flight test.
If the CE-I is not flight tested again, the warfighter will not have a full understanding of the capabilities and limitations of the upgrades to the CE-I interceptor, the original purpose of the FTG-07 test. Overall, GMD’s ongoing testing issues in conjunction with concurrent acquisition practices have caused—and will likely continue to cause—major disruptions to the program. We previously found that, in 2004, MDA committed to a highly concurrent development, production, and fielding strategy for the CE-II interceptor and began delivering interceptors in 2008. Because MDA moved forward with CE-I and CE-II interceptor production before completing its flight testing program, test failures have exacerbated disruptions to the program. For example, because the program has delivered approximately three-fourths of the interceptors for fielding, the program faces difficult and costly decisions on how it will implement corrections from prior test failures. Also, the program has had to add tests that were previously not planned and delay tests that are necessary to understand the system’s capabilities and limitations. As a result of these development challenges, the GMD program will likely continue to experience delays, disruptions, and cost growth. MDA has taken some steps to improve the clarity of its resource and schedule baselines, but issues with the content and presentation of these baselines continue to limit the usefulness of the information available to decision makers for oversight of BMDS development efforts. Since 2011, we have found deficiencies in the quality of the cost estimates that underpin MDA’s resource baselines and reported on the efforts MDA has undertaken to improve those estimates. In 2013, we found the agency had made little progress addressing the underlying quality issues we had raised with those cost estimates.
As a result, this is the fourth year we have found that the cost estimates that support MDA’s resource baselines are not sufficiently reliable to support oversight. However, according to MDA officials, the agency is taking steps to improve the quality of its cost estimates to support the resource baselines it plans to report in its 2014 BAR. Assessing MDA’s progress in achieving its schedule goals is also difficult because MDA’s schedule baselines are not presented in a way that allows decision makers to understand or easily monitor progress. Until MDA improves the quality and comprehensiveness of its cost estimates and the clarity of its schedule information, its baselines may not be useful for decision makers. In its 2013 BAR, MDA continued to make useful changes to its reported resource and schedule baselines. We found in March 2011 that MDA’s schedule and resource baselines had several shortcomings that limited their usefulness for oversight, such as not explaining variances or significant changes in the baselines. Additionally, we found in April 2013 that, in its 2012 BAR, MDA only reported annual progress by comparing its current estimates for unit cost and scheduled activities against the prior year’s estimate and adjusted the content of the baselines from year to year in such a way that they were no longer comparable. As a result, MDA’s baselines were not useful for tracking longer-term progress or for holding the agency accountable.
MDA took some action to improve the completeness and clarity of the BAR baselines by: identifying the date of the initial baseline and, if applicable, the date when the initial baseline was most recently revised for each element or major portion of an element reported in the BAR; explaining most of the significant cost and schedule changes from the current baseline estimates against both the estimates reported in the prior year’s BAR and the latest initial baseline; and making the baselines easier to read by removing cluttered formatting such as strikethroughs and highlights that made some of the events listed in past BARs unreadable. In March 2011, we assessed MDA’s life-cycle cost estimates using the GAO Cost Estimating and Assessment Guide, which is based on best practices in cost estimating and identifies key criteria for establishing high-quality cost estimates (GAO-11-372). Our review found that the estimates we assessed were not comprehensive, lacked documentation, were not completely accurate, or were not sufficiently credible. We recommended that MDA (1) take steps to ensure its cost estimates are high-quality, reliable cost estimates that are documented to facilitate external review and (2) obtain independent cost estimates for each baseline. MDA had not completed these steps for the resource baselines reported in the 2013 BAR. We have found that completing these steps could further improve the quality of MDA’s cost estimates. In April 2013, we recommended that the Secretary of Defense direct the MDA Director to include in its resource baseline cost estimates all life cycle costs, specifically the operations and support costs, from the military services in order to provide decision makers with the full costs of ballistic missile defense systems (GAO-13-432). Additionally, MDA has made little progress improving the comprehensiveness of the cost estimates that support its resource baselines.
Similar to past years, the cost estimates reported in the 2013 BAR also do not include the operation and support costs funded by the individual military services, which we concluded in April 2013 may result in significantly understated life cycle costs for some BMDS elements. In response to our April 2013 recommendation, DOD agreed that decision makers should have insight into the full life cycle costs of DOD programs, but the department stated that the BAR should only include content for which MDA is responsible. Because MDA already reports the estimated acquisition costs and some of the operation and support costs for the acquisitions in the annual BAR, we concluded that the annual document is the most appropriate way to report the full costs to Congress. Additionally, we concluded that good budgeting requires that the full costs of a project be considered when making decisions to provide resources and, therefore, both DOD and Congress would benefit from a comprehensive understanding of the full costs of MDA’s acquisition programs. Until MDA’s resource baselines are based on reliable information and are comprehensive, they will not be useful for decision makers to understand progress or make well-informed investment decisions. In the National Defense Authorization Act for Fiscal Year 2014, Congress took steps to address concerns over MDA’s cost estimates by requiring MDA to report to the congressional defense committees on its efforts to improve the quality of the cost estimates included in its acquisition baselines. For example, the act requires MDA to report on a description of and schedule for planned actions to improve its cost estimates, as well as an assessment of how the planned improvements align with GAO’s cost estimating best practices. We are also required to provide our views on the content of MDA’s report.
Additionally, the act requires that the life cycle cost estimate included in the agency’s acquisition baselines include a description of the operations and support functions and costs for which the military services are responsible, in addition to the costs borne by MDA. MDA’s schedule baselines are presented in a way that makes it difficult for decision makers to understand a program’s planned activities and therefore hold programs accountable for their performance. According to GAO’s Schedule Assessment Guide, a reliable program schedule includes all activities required to complete a project, but the schedule should not be so detailed that it interferes with its use. Presenting decision makers with a high-level summary of the schedule is a best practice because schedules that include too many milestones or have too much detail make it difficult to manage progress. Additionally, the activities included on the schedule should have descriptive names that clearly communicate the work required. MDA’s 2013 BAR schedule baselines include numerous events but provide very little information about them, making it difficult to understand what the events are and why they are important. For example, the milestones identifying significant increases in performance for C2BMC Spiral 8.2 are numbered with no description of the capabilities they represent. Additionally, several of the events reported for Aegis modernized weapon system software are titled with abbreviations that are not explained in the BAR. In addition, MDA does not present any comparisons of event dates with previously reported dates. In contrast, DOD’s major defense acquisition programs report a comparison of current schedule estimates against their original and current schedule goals.
According to GAO’s Schedule Assessment Guide, comparing the current schedule to the baseline schedule to track deviations from the plan provides decision makers valuable insight into program risk and can help identify where corrective action may be needed. While removing the formatting that identified changes in prior BARs made the schedule baselines easier to read, doing so removed the ability for decision makers to see if the planned dates for events had changed. As a result, decision makers must consult past versions of the BAR to identify any changes in the planned schedule for a specific event, which can be difficult or impossible in some cases. For example, we found in April 2013 that we were unable to compare the current estimated dates for the activities presented in the Aegis Ashore schedule baseline to the dates baselined in the 2010 BAR because activities were split into multiple events, renamed, eliminated, or moved to several other Aegis BMD schedule baselines. During this audit, we raised this issue with MDA and, according to agency officials, MDA is open to considering alternative formats for presenting the schedule baseline in future versions of the BAR. Until MDA improves the content of its schedule baselines, decision makers will not be able to assess how a program is performing over time. During fiscal year 2013, MDA was able to make some significant acquisition progress, including the first operational system-level flight test involving multiple BMDS elements, but it continued to experience difficulties achieving its goals for testing. This has resulted in delaying progress on individual elements, delaying understanding of the overall performance of the BMDS, and fielding assets before all testing is complete.
The most significant acquisition effects have been experienced on the Aegis BMD SM-3 Block IB and GMD programs, where testing and development challenges have led to failure investigations and increased the risk of continued cost growth and schedule delays. Both programs conducted flight tests and made progress in resolving design flaws in 2013, but still have further development, testing, and production issues to address. For Aegis BMD SM-3 Block IB, the failure of its interceptor in a September 2013 flight test means that a key component may need to be redesigned and the change confirmed to work in additional flight testing. For GMD, the failure of the deployed CE-I interceptor in a July 2013 flight test compounds its challenge because the program did not gain the expected understanding of the effectiveness of software upgrades planned for the operational fleet and now must determine the cause of the failure. As a result, for both programs, to the extent that software or hardware changes are necessary to resolve the cause of these failures, new flight tests will likely be needed to both demonstrate the effectiveness of any resolutions and, for GMD, understand the performance of the software upgrades that were the original purpose of the test. Additionally, for over a decade, we have reported that MDA provides Congress with only limited insight into the acquisition progress for individual programs. While MDA has taken steps to improve the clarity of the baselines it reports to Congress, the agency’s cost and schedule reporting still lacks the quality, completeness, and clarity necessary to track actual cost or schedule growth over time. Specifically, the agency has not addressed all of the critical gaps in the quality of its underlying cost estimates used to develop its resource baselines that we have identified over the years.
Until corrective actions are implemented and substantial improvements are made to MDA’s cost estimates, its reported resource baselines will not be useful for decision makers to hold MDA accountable for its performance or make informed decisions on how best to allocate limited resources. Congress recently amended the requirements for the cost estimates MDA must report in its baselines, which may enhance the transparency into MDA’s cost estimating processes. As a result, we do not make any new recommendations regarding cost at this time. However, additional actions can be taken in the schedule baselines to improve the ability of decision makers to understand what program events are most critical and identify whether the dates for those critical events have changed. Until improvements are made, the schedule baselines will not be a useful tool for providing oversight of the BMDS. We recommend that the Secretary of Defense take the following three actions to strengthen MDA’s acquisitions and help support oversight. 1. To the extent that MDA determines hardware or software modifications are required to address the September 2013 Aegis BMD SM-3 Block IB failure, we recommend that the Secretary of Defense direct: a) the Director of MDA to verify the changes work as intended through subsequent flight testing, and b) the Under Secretary of Defense, Acquisitions, Technology, and Logistics to delay the decision to approve the program’s full production until such testing demonstrates that the redesigned missile is effective and suitable. 2.
To demonstrate the CE-I’s effectiveness against a longer range target in more challenging conditions and to confirm that the design changes implemented to improve performance, as well as any changes needed to resolve the July 2013 CE-I flight test failure, work as intended, we recommend that the Secretary of Defense direct MDA’s Director to conduct a flight test of the CE-I interceptor once the cause of the failure has been determined and any mitigations have been developed. 3. To improve the content of the schedule baselines it reports to Congress for monitoring program performance, we recommend that the Secretary of Defense direct MDA’s Director to take the following actions as MDA implements other improvements required by the Congress: a) Focus the information included in the schedule baselines to highlight critical events. b) For each event included in the schedule baseline, provide a description of the event explaining what it entails and why it is important. c) Present the schedule baseline in a format that allows decision makers to identify any changes made from the current estimated date to the date reported in not only the prior year’s BAR but also to the date established in the initial baseline. DOD provided written comments on a draft of this report. These comments are reprinted in appendix II. DOD also provided technical comments, which were incorporated as appropriate. DOD partially concurred with our first recommendation, non-concurred with our second recommendation, and concurred with our third recommendation. The department partially concurred with our first recommendation to flight test any modifications that may be required to the Aegis BMD SM-3 Block IB as a result of the September 2013 failure, before the Under Secretary of Defense, Acquisitions, Technology, and Logistics approves full production.
In its comments, DOD acknowledged that if modifications are required they will be tested, but added that the type of testing—flight or ground testing—will depend on the magnitude of such modifications. The department also believes that the component currently tied to the failure has a successful testing history and thus expects to meet the reliability requirement needed for the full production decision in fiscal year 2015. However, there have now been three flight test anomalies associated with this component over the last three years. According to Aegis BMD officials, they are considering design changes for this component. Since the fiscal year 2015 full production decision is the commitment by the Under Secretary of Defense, Acquisitions, Technology, and Logistics to produce several hundred missiles, this decision should be supported by an assessment of the final product under operational mission conditions to ensure that it is effective and suitable. As such, we maintain our recommendation that before the program is approved for full production, flight testing should demonstrate that any modifications work as intended. DOD did not concur with our second recommendation to complete the original purpose of the July 2013 CE-I flight test once the cause of that failure has been determined and any mitigations have been developed. In its response, DOD stated that the decision to flight test a CE-I interceptor will be made by the Director, MDA, based on the judgment of stakeholders from the Office of the Secretary of Defense and combatant commands on the need to perform a test. The DOD response focused almost exclusively on the steps it is taking to identify the cause of the July 2013 failure and mitigate it and did not address the main part of our recommendation—determining the effectiveness of the CE-I under more challenging conditions and confirming that design changes previously made improve performance. These were the objectives of FTG-07.
In our view, resolving these performance questions remains important. Since the FTG-07 failure review is still ongoing, we cannot assess whether DOD should conduct a CE-I test for the sole purpose of demonstrating corrective actions, to the extent any are needed, to address the cause of the failure. While we acknowledge that DOD must balance several competing GMD priorities, including which flight tests to conduct, and conducting another CE-I flight test may not be feasible in the immediate future, we also maintain that demonstrating CE-I intercept capabilities should continue to be a priority for DOD since the CE-I interceptor constitutes a multi-billion dollar investment by DOD and serves as the primary defense of the United States homeland against enemy ballistic missile attacks. In addition to responding to our recommendations, the department’s letter raised additional concerns about our draft report. First, DOD disagreed with our statement that because the BMDS entrance into DOD’s acquisition process is deferred, it is exempt from certain acquisition laws and policies that generally provide oversight of major defense acquisition programs. DOD stated that MDA is not exempt from acquisition-related laws because, while it is not captured by several statutes, Congress has provided legislation specific to MDA to ensure oversight and accountability. We clarified the language in our report to remove the term exempt. However, because of the acquisition flexibility it has been granted, MDA is not yet required to apply certain laws and policies to the BMDS. While this flexibility allows MDA latitude to manage the BMDS and to rapidly develop and field new systems, we have found that it has also hampered oversight and accountability.
Our report recognizes the actions Congress has taken to improve the transparency and accountability of the BMDS development efforts through legislation specific to MDA, particularly to require MDA to report baselines to Congress. However, there are a number of requirements that are triggered by phases of the DOD acquisition process that are important to sound acquisition management. For example, we have previously found that MDA is not yet required to conduct an analysis of alternatives to compare potential solutions and determine the most cost effective weapon system to acquire nor is MDA yet required to obtain an independent cost estimate prior to beginning product development. Second, DOD stated that it disagreed with our assessment that MDA’s cost estimates are not sufficiently reliable to support oversight, suggesting that the report be revised to include more of MDA’s efforts to improve the quality of its cost estimates. Since 2011, we have found that there are issues with the cost estimates and baseline reporting, including incomplete cost estimates due to the exclusion of military service operation and support costs as well as instability in the content of the baselines, which makes assessing progress difficult or impossible. While the draft report was being reviewed by DOD, we met with MDA officials who discussed more of their efforts to improve their cost estimates; however, we were not provided sufficient information to change our determination. We did clarify the report to better reflect the efforts they have undertaken. DOD stated in its response that MDA has included previously unreported costs in the baselines it provides to Congress, which we have previously found is an improvement to the amount of information reported to Congress, but which does not demonstrate that the quality of the cost estimates themselves have improved. 
DOD stated in its response that it has provided joint operation and support costs documentation for two programs reported in the 2013 BAR. However, we did not assess the joint operation and support costs because they were not included in the 2013 resource baselines reported to Congress. DOD also stated in its response that it received an assessment from us on cost estimate documentation for a third program. We did not perform a formal assessment of the third program because it was cancelled and not included in the 2013 BAR. In order to assist MDA in improving its cost estimates, we did informally assess the third program’s cost estimate, but reached no conclusion as to its quality. However, we noted several issues in that informal review. For example, we concluded that because MDA did not provide a cost model to support the estimate we were unable to check the cost estimate for accuracy. Finally, DOD stated in its response that MDA has published and implemented a cost estimating handbook. We have previously found that fully implementing that handbook could improve the quality of MDA’s cost estimates. However, during this review, we specifically asked to review the cost estimate documentation supporting MDA’s fiscal year 2013 BAR baselines in order to assess its progress in implementing that handbook. An MDA senior cost official told us that the agency was working to fill in documentation gaps on existing cost estimates and that the estimates were not ready for us to review. In the course of our work, we concluded and informed MDA that the cost estimating process defined in that handbook has not been applied to any systems that are currently baselined or part of the BMDS. Until MDA is able to provide us with documentation that supports the actual baselines reported to Congress so we can independently assess the quality of the cost estimates, we have no basis to change our assessment. 
Third, DOD disputed that MDA has not obtained independent cost estimates from DOD’s Office of the Director of Cost Assessment and Program Evaluation for any of the elements GAO reviewed since 2010. In response, we clarified the language in the report so that it specifically refers to the lack of independent cost estimates completed for any of the resource baselines reported to Congress in the 2013 BAR. We also clarified in the report that DOD’s Office of the Director of Cost Assessment and Program Evaluation has assessed other BMDS costs. DOD was unable to provide us with documentation of independent cost estimates completed for MDA’s BAR baselines, therefore, we have no basis to change our determination. Lastly, DOD identified 35 “technical and factual errors” in its technical comments. However, upon review we found that 29 were not technical or factual errors, but rather different conclusions, errors in DOD’s comments, or required additional substantiation that was not provided. We determined that 6 were actual technical or factual errors and therefore made the appropriate changes in those circumstances. We are sending copies of this report to the Secretary of Defense and to the Director, MDA. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. To assess the Missile Defense Agency’s (MDA) progress and any challenges associated with developing, testing, and producing the ballistic missile defense system (BMDS) during fiscal year 2013, we examined the acquisition accomplishments of several missile defense elements and MDA’s targets program. 
Specifically, we reviewed the Aegis Ballistic Missile Defense (Aegis BMD) with Standard Missile-3 (SM-3) Block IB; Aegis Ashore; Aegis Modernized Weapon System Software; Army/Navy Transportable Radar Surveillance and Control Model 2 (AN/TPY-2); Command, Control, Battle Management, and Communications (C2BMC); Ground-based Midcourse Defense (GMD) System; Targets and Countermeasures; and Terminal High Altitude Area Defense (THAAD) elements because, as reported in the 2013 BMDS Accountability Report (BAR), these elements or programs have entered MDA’s product development, initial production, or production acquisition phase, but are not yet mature enough to be transferred to a military service and enter the formal DOD acquisition cycle for full-rate production and deployment. We reviewed key management documents for fiscal year 2013, including Program and Baseline Execution Reviews, which detailed program accomplishments and areas of concern, and interviewed program element officials. We also examined MDA’s master test plan and flight test reports, and discussed the element- and BMDS-level test programs and test results with the BMDS Operational Test Agency and the Department of Defense’s (DOD) Office of the Director, Operational Test and Evaluation, and Office of Developmental Test and Evaluation. In addition, we met with officials from MDA’s functional directorates, including the Engineering Directorate to discuss the agency’s process for delivering and integrating BMDS capabilities, as well as the Directorates for Acquisition and Operations to discuss significant internal and external events and decisions that occurred in fiscal year 2013, such as sequestration, that affected the agency’s overall acquisition of the BMDS. To assess the progress made as well as any remaining challenges MDA faces in reporting resource and schedule baselines that support oversight, we examined MDA’s reported baselines in the 2010, 2011, 2012, and 2013 BARs.
To be consistent with last year, we focused our assessment on the resource and schedule baselines as they continue to be the only reported baselines that have measurable goals, such as cost estimates and dates of program events, and separately explain when current estimates have deviated to a certain extent from the baselines set in prior BARs. We also examined the National Defense Authorization Act for Fiscal Year 2012, which required MDA to establish and maintain baselines for program elements or major portions of such program elements and outlined the information to be included in MDA’s baselines, as well as interviewed officials within MDA’s general counsel’s office. We also interviewed officials in MDA’s Acquisitions Directorate about how the agency establishes and manages its acquisition baselines and met with MDA officials in the Operations Directorate to discuss their progress in adopting best practices in cost estimating based on our Cost Guide. We also reviewed findings and recommendations from several of our past reports to see if MDA had made progress in improving the completeness, clarity, and stability of its reported resource and schedule baselines. In addition, we examined DOD acquisition policy such as the Interim DOD Instruction 5000.02 issued in November 2013 and the Defense Acquisition University’s Defense Acquisition Guidebook to discern how other major defense acquisition programs are required to report baselines and measure program progress. We also reviewed GAO’s cost and schedule guides, which outline best practices for establishing and managing program cost and schedule estimates. To gauge the extent to which MDA reported changes or variances in the current baselines from the baselines presented in the prior year’s BAR and from when the baselines were initially established, we compared the 2013 BAR resource and schedule baselines for each BMDS element in our review to the baselines presented in the 2012 and 2010 BARs. 
In order to compare unit costs calculated in different years, there were instances where it was necessary to convert prior cost estimates to match the base year of the estimates presented in the 2013 BAR. We performed these conversions using indexes published by the Office of the Secretary of Defense (Comptroller) in the National Defense Budget Estimates, commonly referred to as the “Green Book.” The National Defense Authorization Act for Fiscal Year 2013 directed GAO to provide separate assessments on several other missile defense related issues. Specifically, GAO was required to provide briefings on our views and to submit reports as soon as practicable to the congressional defense committees on our assessments of DOD reports on (1) a comprehensive evaluation of alternatives for the Precision Tracking Space System and its conformance with GAO best practices for analyses of alternatives; (2) the Ground-based Midcourse Defense system’s test plan; (3) the status and progress of regional missile defense programs, including the adequacy of MDA’s existing and planned efforts to deploy a U.S. missile defense in Europe; and (4) the status of efforts to improve the homeland defense capability of the United States. Because this additional mandated work covers the details of many BMDS elements, we do not include appendixes on each of the individual elements as we have done in prior reports under this mandate. Pub. L. No. 112-239, § 224 (e), § 231 (e), § 229 (c), and § 228 (c). For our assessments of DOD’s reports, see GAO, Missile Defense: Precision Tracking Space System Evaluation of Alternatives, GAO-13-747R (Washington, D.C.: July 25, 2013) and Regional Missile Defense: DOD’s Report Provided Limited Information; Assessment of Acquisition Risks is Optimistic, GAO-14-248R (Washington, D.C.: Mar. 14, 2014). At the time of publication of this report, our work on DOD’s reports on the Ground-based Midcourse Defense system’s test plan and homeland defense is ongoing.
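The base-year conversion described in the methodology above amounts to rebasing a cost figure with deflator indexes. The sketch below is only an illustration of that arithmetic; the index values and the function name are hypothetical, not actual Green Book figures.

```python
# Minimal sketch of rebasing a cost estimate with deflator indexes,
# as is done when comparing unit costs stated in different base years.
# NOTE: the index values below are hypothetical, not actual OSD
# "Green Book" figures.

def convert_base_year(cost, from_year, to_year, indexes):
    """Rebase a cost: divide out the old base-year index, apply the new one."""
    return cost * indexes[to_year] / indexes[from_year]

# Hypothetical deflator indexes, normalized so that base year 2013 = 1.000
indexes = {2010: 0.945, 2012: 0.982, 2013: 1.000}

# A $500 million estimate stated in base-year-2010 dollars,
# restated in base-year-2013 dollars:
rebased = convert_base_year(500.0, 2010, 2013, indexes)
print(round(rebased, 1))  # prints 529.1
```

Once every estimate is expressed in the same base year, year-over-year baseline comparisons like those in the BARs become apples-to-apples.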
We also interviewed officials from the Aegis BMD SM-3 Block IB, Aegis Ashore, and Aegis Modernized Weapon System Software program offices. In Huntsville, we interviewed program officials for BMDS Sensors, C2BMC, GMD, and THAAD, as well as officials in MDA’s Acquisition and Cost Directorates. We also visited several contractor facilities that were working on programs covered in our review. These facilities were located in Huntsville and Courtland, Alabama, as well as Tucson and Chandler, Arizona. In Huntsville, we discussed the manufacturing of the Aegis BMD SM-3 Block IB interceptor with Raytheon officials and met with GMD’s prime contractor, Boeing, to discuss progress in resolving development challenges and their plans to deliver additional interceptors. In Courtland, we met with officials from Lockheed Martin to discuss the production of the extended medium-range ballistic missile target, which was used in Flight Test Operational-01 on September 10, 2013. In Tucson and Chandler, Arizona, we met with GMD’s subcontractors Raytheon and Orbital to discuss their progress in resolving development challenges with the interceptor, flight testing, and future development efforts. We also interviewed officials from various testing agencies located in Arlington, Virginia, and Huntsville, Alabama. In Arlington, we met with officials from DOD’s Director, Operational Test and Evaluation, as well as DOD’s Director of Developmental Test and Evaluation, to discuss MDA’s test plans and results from recent tests. Lastly, in Huntsville, we spoke with officials from the BMDS Operational Test Agency to discuss MDA’s performance assessment. We conducted this performance audit from April 2013 to April 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives.
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, David Best, Assistant Director; Aryn Ehlow; Meredith Allen Kimmett; Wiktor Niewiadomski; Kenneth E. Patton; John H. Pendleton; Karen Richey; Steven Stern; Roxanna Sun; Robert Swierczek; Brian Tittle; Hai V. Tran; and Alyssa Weir made key contributions to this report.
|
Since 2002, MDA has spent approximately $98 billion and has requested $38 billion more through fiscal year 2018 to develop, test, and field a system to defend against enemy ballistic missiles. The BMDS is composed of a command and control system, sensors that identify incoming threats, and intercepting missiles. GAO is mandated by law to assess the extent to which MDA has achieved its acquisition goals and objectives, as reported to Congress through its acquisition baselines, and to report on other issues as appropriate. This report examines the agency's progress and any challenges in fiscal year 2013 associated with (1) developing, flight testing, and producing individual systems, which MDA refers to as BMDS elements; and (2) reporting resource and schedule baselines that support oversight. To support this effort, GAO examined MDA's acquisition and test reports, analyzed two of MDA's acquisition baselines—resource and schedule—to discern progress, and interviewed a wide range of DOD and contractor officials. In fiscal year 2013, the Missile Defense Agency (MDA) made mixed progress in achieving its acquisition goals to develop, test, and produce elements of the Ballistic Missile Defense System (BMDS). For the first time, MDA conducted an operational flight test that involved warfighters from several combatant commands using multiple BMDS elements simultaneously. The agency also successfully conducted several developmental flight tests that demonstrated key capabilities and modifications made to resolve prior production issues. However, the Aegis BMD and Ground-based Midcourse Defense (GMD) programs continued to experience testing and development challenges. Aegis BMD—while the program successfully conducted three intercept flight tests with the Standard Missile (SM)-3 Block IB missile in support of a full production decision planned for fiscal year 2015, a missile failed during one of these tests.
Although the cause of failure is not known, the program plans to move forward with missile production in 2014. The program is also determining whether a key component that is common with the already fielded SM-3 Block IA missile will need to be redesigned. GMD—although the program successfully conducted a non-intercept flight test of its upgraded interceptor, the program is nearing a seven-year delay in completing its first successful intercept. Until this upgraded interceptor is demonstrated in an intercept test, expected to be conducted in the third quarter of fiscal year 2014, manufacturing and deliveries remain on hold. In July 2013, the GMD program also failed a flight test of its fielded interceptor. This flight test was designed to assess the fielded interceptor under more challenging conditions and to confirm design changes to resolve prior issues. MDA has not yet made a decision on how to proceed since the cause of failure has not been determined. MDA has improved the clarity of its resource and schedule baselines since it first submitted them to Congress in 2010. However, issues with the content and presentation of these baselines continue to limit the usefulness of the information available to decision makers for oversight. First, as the agency is still in the process of improving the quality and comprehensiveness of the cost estimates that support its resource baselines, for the fourth year, GAO has found that MDA's cost estimates are unreliable. For example, MDA's 2013 cost estimates still do not include operations and support costs for military services, which may significantly understate total costs. Congress has recently required MDA to include these costs in future acquisition baselines, which may improve transparency. Second, MDA's schedule baselines are presented in a way that makes it difficult to assess progress.
Specifically, MDA's 2013 schedule baselines include numerous events but provide very little information about them, making it difficult to understand what the events are and why they are important. Additionally, the 2013 schedule baselines do not compare the current event dates with previously reported dates, so decision makers cannot easily assess how the program is performing over time. Until MDA improves the quality and comprehensiveness of its cost estimates and the content of its schedule information, its baselines will not be useful for decision makers to gauge progress. GAO recommends that (1) any changes to the SM-3 Block IB be flight tested before DOD approves full production; (2) the fielded GMD interceptor be retested to demonstrate its performance; and (3) the content of MDA's schedule baselines be improved. DOD partially concurred with the first recommendation, non-concurred with the second, and concurred with the third, stating that production and testing decisions will be made using the proper DOD processes. GAO believes the first two recommendations are valid, as discussed in this report.
|
OPM’s mission is to ensure that the federal government has an effective civilian workforce. In this regard, one of the agency’s major human resources tasks is to manage and administer the retirement program for federal employees. According to the agency, the program serves federal employees by providing (1) retirement compensation and (2) tools and options for retirement planning. OPM’s Center for Retirement and Insurance Services administers the two defined benefit retirement plans that provide retirement, disability, and survivor benefits to federal employees. The first plan, the Civil Service Retirement System (CSRS), provides retirement benefits for most federal employees hired before 1984. The second plan, the Federal Employees Retirement System (FERS), covers most employees hired in or after 1984 and provides benefits that include Social Security and a defined contribution system. According to OPM, there are approximately 2.9 million active federal employees and nearly 2.5 million retired federal employees. The agency’s March 2008 analysis of federal employment retirement data estimates that nearly 1 million active federal employees will be eligible to retire and almost 600,000 will most likely retire by 2016. Figure 1 summarizes the estimated number of employees eligible and likely to retire. OPM and employing agencies’ human resources and payroll offices are responsible for processing federal employees’ retirement applications. The process begins when an employee submits a paper retirement application to his or her employer’s human resources office and is completed when the individual begins receiving regular monthly benefit payments (as illustrated in fig. 2). Once an employee submits an application, the employing agency’s human resources office provides retirement counseling services to the employee and augments the retirement application with additional paperwork, such as a separation form that finalizes the date the employee will retire. 
Then the agency provides the retirement package to the employee’s payroll office. After the employee separates for retirement, the payroll office is responsible for reviewing the documents for correct signatures and information, making sure that all required forms have been submitted, and adding any additional paperwork that will be necessary for processing the retirement package. Once the payroll office has finalized the paperwork, the retirement package is mailed to OPM to continue the retirement process. Payroll offices are expected to submit the package to OPM within 30 days of the retiree’s separation date.

Upon receipt of the retirement package, OPM calculates an interim payment based on information provided by the employing agency. The interim payments are partial payments that typically provide retirees with 80 percent of the total monthly benefit they will eventually receive. OPM then starts the process of analyzing the retirement application and associated paperwork to determine the total monthly benefit amount to which the retiree is entitled. This process includes collecting additional information from the employing agency’s human resources and payroll offices or from the retiree to ensure that all necessary data are available before calculating benefits. After OPM completes its review and authorizes payment, the retiree begins receiving 100 percent of the monthly retirement benefit payments. OPM then stores the paper retirement folder at the Retirement Operations Center in Boyers, Pennsylvania. According to the agency’s 2008 performance report, the average processing time from the date OPM receives the initial application to the time the retiree receives a full payment is 42 days. According to the Deputy Associate Director for the Center for Retirement and Insurance Services, about 200 employees are directly involved in processing the approximately 100,000 retirement applications OPM receives annually.
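The interim-payment arithmetic described above can be sketched as follows. This is an illustrative sketch, not OPM's actual computation; the report states only that interim payments typically equal 80 percent of the eventual full monthly benefit, and the dollar figure below is hypothetical.

```python
# Illustrative only: the report notes interim payments typically provide
# 80 percent of the total monthly benefit the retiree will eventually receive.
INTERIM_RATE = 0.80  # typical share of the full benefit paid while OPM adjudicates

def interim_payment(full_monthly_benefit: float) -> float:
    """Estimate the interim monthly payment a retiree receives
    before OPM authorizes the full benefit amount."""
    return round(full_monthly_benefit * INTERIM_RATE, 2)

# Hypothetical example: a retiree entitled to $2,500/month would receive
# about $2,000/month until OPM completes its review.
print(interim_payment(2500.00))  # 2000.0
```

Once OPM authorizes payment, the retiree's monthly amount steps up from this interim figure to the full 100 percent benefit.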
This processing includes functions such as determining retirement eligibility, inputting data into benefit calculators, and providing customer service. The agency uses over 500 different procedures, laws, and regulations, which are documented on the agency’s internal Web site, to process retirement applications. For example, the site contains memorandums that outline new procedures for handling special retirement applications, such as those for disability or court orders. Further, OPM’s retirement processing involves the use of over 80 information systems that have approximately 400 interfaces with other internal and external systems. For instance, 26 internal systems interface with the Department of the Treasury to provide, among other things, information regarding the total amount of benefit payments to which an employee is entitled. OPM has stated that the federal employee retirement process currently does not provide prompt and complete benefit payments upon retirement, and that customer service expectations for more timely payments are increasing. The agency also reports that a greater workload is expected due to an anticipated increase in the number of retirement applications over the next decade, yet current retirement processing operations are at full capacity. Further, the agency has identified several factors that limit its ability to process retirement benefits in an efficient and timely manner. 
Specifically, it noted that

- current processes are paper-based and manually intensive, resulting in a higher number of errors and delays in providing benefit payments;
- the high costs, limited capabilities, and other problems with the existing information systems and processes pose increasing risks to the accuracy of benefit payments;
- current manual capabilities restrict customer service;
- federal employees have limited access to their retirement records, making planning for retirement difficult; and
- attracting qualified personnel to operate and maintain the antiquated retirement systems, which have about 3 million lines of custom programming, is challenging.

In the late 1980s, OPM recognized the need to automate and modernize its retirement processing and began retirement modernization initiatives that have continuously called for automating its antiquated paper-based processes. The agency’s previously established program management plans included the objectives of having timely and accurate retirement benefit payments and more efficient and flexible processes. For example, the agency’s plans called for processing retirement applications and providing retirees 100 percent of their monthly benefit payments the day they are due, rather than providing interim monthly payments. Its initial modernization vision called for providing prompt and complete benefit payments by developing an integrated system and automated processes. However, the agency has faced significant and long-standing challenges in doing so.

In early 1987, OPM began a program called the FERS Automated Processing System (FAPS). However, after 8 years of planning, the agency decided it needed to reevaluate the program, and the Office of Management and Budget (OMB) requested that an independent board conduct a review to identify critical issues impeding progress and recommend ways to address the issues.
The review identified various management weaknesses, including the lack of an established strategic plan, cost estimation methodologies, and baseline; improperly defined and ineffectively managed requirements; and no clear accountability for decision making and oversight. Accordingly, the board suggested areas for improvement and recommended terminating the program if immediate action was not taken. In mid-1996, OPM terminated the program.

In 1997, OPM began planning a second modernization initiative, called the Retirement Systems Modernization (RSM) program. The agency originally intended to structure the program as an acquisition of commercially available hardware and software that would be modified in-house to meet its needs. From 1997 to 2001, OPM developed plans and analyses and began developing business and security requirements for the program. However, in June 2001, it decided to change the direction of the retirement modernization initiative.

In late 2001, retaining the name RSM, the agency embarked upon its third initiative to modernize the retirement process and examined the possibility of privately sourced technologies and tools. To this end, OPM issued a request for information to obtain private sourcing options and determined that contracting was a viable alternative that would be cost efficient, less risky, and more likely to be completed on time and on budget. In 2006, the agency awarded three contracts: (1) a commercially available, defined benefits technology solution (DBTS) to automate retirement processing; (2) services to convert paper records to electronic files; and (3) consulting services to support the redesign of its retirement operations. The contract for DBTS was awarded to Hewitt Associates, and the additional contracts to support the technology were awarded to Accenture Ltd. and Northrop Grumman Corporation, as reflected in table 1.
OPM produced a December 2007 program management plan that, among other things, described capabilities the agency expected to implement as outcomes of retirement modernization. Among these capabilities, the agency expected to implement retirement benefit modeling and planning tools for active federal employees, a standardized retirement benefit calculation system, and a consolidated system to support all aspects of retirement processing. In February 2008, OPM renamed the program RetireEZ and deployed a limited initial version of DBTS. As the foundation of the modernization initiative, DBTS was to be a comprehensive technology solution that would provide capabilities to substantially automate retirement processing. This technology was to be provided by the contractor for a period of 10 years and was intended to provide, among other things, an integrated database with calculation functionality for retirement processing. In addition to calculating retirement benefit amounts, DBTS was intended to provide active and retired federal employees with self-service, Internet-based tools for accessing accounts, updating retirement records, submitting transactions, monitoring the status of claims, and forecasting retirement income. The technology was also expected to enhance customer service by providing OPM and agency personnel with the capability to access retirement information online. Further, the technology was expected to be integrated with OPM and federal agency electronic retirement records and processes. When fully implemented, the modernized program was expected to serve OPM retirement processing personnel, federal agency human resources and payroll offices, active federal employees, retirees, and the beneficiaries of retirees. According to the agency, in late February 2008, the DBTS was deployed with limited functionality to 26,000 federal employees serviced by the General Services Administration’s (GSA) payroll offices.
In April 2008, OPM reported that 13 of the 37 retirement applications received from GSA’s payroll office had been processed through DBTS with manual intervention and provided the retirees 100 percent of their monthly benefits within 30 days from their retirement date. However, a month later, the agency determined that DBTS had not worked as expected and suspended system operation. In October 2008, after 5 months of attempting to address system quality issues, the agency terminated the contract. In November 2008, OPM began restructuring the program and reported that its efforts to modernize retirement processing would continue. Figure 3 illustrates the timeline of retirement modernization initiatives from 1987 to the present. Various entities within OPM are responsible for managing RetireEZ. Specifically, the management is composed of committees, a program office, and operational support, as reflected in table 2. Since 2005, we have conducted several studies of OPM’s retirement modernization noting weaknesses in its management of the initiative. In February of that year, we reported that the agency lacked processes for retirement modernization acquisition activities, such as determining requirements, developing acquisition strategies, and implementing a risk program. Further, the agency had not established effective security management, change management, and program executive oversight. We recommended that the Director of OPM ensure that the retirement modernization program office expeditiously establish processes for effective oversight of the retirement modernization in the areas of system acquisition management, information security, organizational change management, and information technology (IT) investment management. In response, between 2005 and 2007, the agency initiated steps toward establishing management processes for retirement modernization and demonstrated the completion of activities with respect to each of our nine recommendations. 
However, in January 2008, we reported that the agency still needed to improve its management of the program to ensure a successful outcome for its modernization efforts. Specifically, we reported that initial test results had not provided assurance that DBTS would perform as intended, the testing schedule increased the risk that the agency would not have sufficient resources or time to ensure that all system components were tested before deployment, and trends in identifying and resolving system defects had indicated a growing backlog of problems to be resolved prior to deployment. Further, we reported that although the agency had established a risk management process, it had not reliably estimated the program costs, and its progress reporting was questionable because it did not reflect the actual state of the program. We recommended that the Director of OPM address these deficiencies by conducting effective system tests and resolving urgent and high priority system defects prior to system deployment, in addition to improving program cost estimation and progress reporting. In response to our report, OPM stated that it concurred with our recommendations and was taking steps to address them. However, in March 2008, we determined that the agency was moving forward with system deployment and had not yet implemented its planned actions. OPM subsequently affirmed its agreement with our recommendations in April 2008 and reported that it had implemented or was in the process of implementing each recommendation. As of March 2009, however, these recommendations still had not been fully addressed. OPM remains far from fully implementing the retirement modernization capabilities described when it documented its plans for RetireEZ in 2007. The agency only partially implemented two of eight capabilities that it identified to modernize retirement processing. 
The remaining six capabilities, which were to be delivered through the DBTS contract, have not been implemented, and OPM’s plans to continue implementing them are uncertain. While the agency has taken steps to restructure the RetireEZ program without the DBTS contract, it has not developed a plan to guide its future modernization efforts. OPM’s retirement modernization plans from 2007 described eight capabilities that were to be implemented to achieve modernized processes and systems. As of late March 2009, the agency had partially implemented two of these capabilities while the remaining six had not been implemented (see table 3). Specifically, it had achieved partial implementation of an integrated database of retirement information that was intended to be accessible to OPM and agency retirement processing personnel. In this regard, the agency implemented a new database, populated with images of retirement information, which is accessible to OPM retirement processing personnel online. This database contains over 8 million files, which, according to agency officials, represent approximately 80 to 90 percent of the available retirement information for all active federal employees. However, the capability for the information in the database to be integrated with OPM’s legacy retirement processing systems and to be accessible to other agency retirement processing personnel has not yet been implemented. OPM has also partially implemented enhanced customer service capabilities. Specifically, the agency acquired a new telephone infrastructure (i.e., additional lines) and hired additional customer service representatives to reduce wait times and abandonment rates. However, the agency has not yet developed the capabilities for OPM retirement processing personnel to provide enhanced customer support to active and retired federal employees through online account access and management.
Moreover, six other capabilities have not been implemented—and plans to implement them are uncertain—because they were to be delivered through the now-terminated DBTS contract, which had been expected to provide a single system that would automate the processing of retirement applications, calculations, and benefit payments. Among the capabilities not implemented was one for other agencies’ automated submissions of retirement information to OPM that could be used to process retirement applications. While OPM began developing this capability by establishing interfaces with other agencies as part of its effort to implement DBTS, it discontinued the use of the interfaces for processing retirement applications when the DBTS contract was terminated. Thus, federal agencies that submit retirement information to OPM continue to provide paper packages and information when employees are ready to retire. Further, OPM has not implemented a planned capability for active and retired federal employees to access online retirement information through self-service tools. While the agency provided demonstrations of DBTS in April 2008 that showed the ability for employees to access information online, including applying for retirement and modeling future retirement benefits, this capability was to be provided by DBTS, and thus, no longer exists. The contractor had also been expected to deliver a consolidated system to support all aspects of retirement processing and an electronic case management system to support retirement processing. In the absence of these capabilities, the agency continues to manage cases through paper tracking and stand-alone systems. Additionally, OPM and federal agencies continue to rely on nonstandardized systems to determine and calculate retirement benefits, and federal retirees currently have only limited online, self-service tools. 
Program management principles and best practices emphasize the importance of using a program management plan that, among other things, establishes a complete description that ties together all program activities. An effective plan includes a description of the program’s scope, implementation strategy, lines of responsibility and authority, management processes, and a schedule. Such a plan incorporates all the critical areas of system development and is to be used as a means of determining what needs to be done, by whom, and when. Furthermore, establishing results-oriented (i.e., objective, quantifiable, and measurable) goals and measures that can be included in a plan provides stakeholders with the information they need to effectively oversee and manage programs.

A plan for the future of the RetireEZ program has not been completed. In November 2008, OPM began restructuring the program and reported it was continuing toward retirement modernization without the DBTS contract. The restructuring efforts have resulted in a wide variety of documentation, including multiple descriptions of the program in formal agency reports, budget documentation, agency briefing slides, and related documents. For example, OPM’s November Fiscal Year 2008 Agency Financial Report described what the RetireEZ program is expected to achieve (e.g., provide retirement modeling tools for federal employees) once implemented. The agency’s Annual Performance Report, dated January 2009, outlined that the new vision for the restructured program is “to support benefit planning and management throughout a participant’s lifecycle through an enhanced federal retirement program.” The agency also presented information to OMB that identified eight fiscal year 2009 program initiatives, as listed in table 4. The agency has developed a variety of informal program documents and briefing slides that describe retirement modernization activities.
For instance, one document prepared by the program office describes a five-phased approach that is intended to replace its previous DBTS-reliant strategy. The approach includes the following activities: (1) collecting electronic retirement information, (2) automating the retirement application process, (3) integrating retirement information, (4) developing retirement calculation technologies and tools, and (5) improving post-retirement processes through a technology solution. In addition, briefing slides also prepared by the program office outline a schedule for efforts to identify new technologies to support retirement modernization by drafting a request for information, which OPM expects to issue in late April 2009. Regardless, OPM’s various reports and documents describing its planned retirement modernization activities do not provide a complete plan for its restructured program. Specifically, although agency documents describe program implementation activities, they do not include a definition of the program, its scope, lines of responsibility and authority, management processes, and schedule. Also, the modernization program documentation does not describe results-oriented (i.e., objective, quantifiable, and measurable) performance goals and measures. According to the RetireEZ program manager, the agency is developing plans, but they will not be ready for release until the new OPM director has approved them, which is expected to occur in April 2009. Until the agency completes and uses a plan that includes all of the above elements to guide its efforts, it will not be properly positioned to obtain agreement with relevant stakeholders (e.g., Congress, OMB, federal agencies, and OPM senior executives) for its restructured retirement modernization initiative. Further, the agency will also not have a key mechanism that it needs to help ensure successful implementation of future modernization efforts.
OPM has significant management weaknesses in five areas that are important to the success of its retirement modernization program: cost estimating, EVM, requirements management, testing, and program oversight. For example, the agency has not performed key steps, including the development of a cost estimating plan or completion of a work breakdown structure, both of which are necessary to develop a reliable program cost estimate. Also, OPM has not established and validated a performance measurement baseline, which is essential for reliable EVM. Further, although OPM is revising its previously developed system requirements, it has not established processes and plans to guide this work. Nor has the agency addressed test activities, even though developing processes and planning test activities early in the life cycle are recognized best practices for effective testing. Furthermore, although OPM’s Executive Steering Committee and Investment Review Board have recently become more active regarding RetireEZ, these bodies did not exercise effective oversight in the past, which has allowed the aforementioned management weaknesses to persist. Notably, OPM has not established guidance regarding how these entities are to engage with the program when corrective actions are needed. Until OPM addresses these weaknesses, many of which we and others have made recommendations to correct, the agency’s retirement modernization initiative remains at risk of failure.

The establishment of a reliable cost estimate is a necessary element for informed investment decision making, realistic budget formulation, and meaningful progress measurement. A cost estimate is the summation of individual program cost elements that have been developed by using established methods and validated data to estimate future costs. According to federal policy, programs must maintain current and well-documented estimates of program costs, and these estimates must span the full expected life of the program.
Our Cost Estimating and Assessment Guide includes best practices that agencies can use for developing and managing program cost estimates that are comprehensive, well-documented, accurate, and credible, and provide management with a sound basis for establishing a baseline to measure program performance and formulate budgets. This guide identifies a cost estimating process that includes initial steps such as defining the estimate’s purpose (i.e., its intended use, scope, and level of detail); developing the estimating plan (i.e., the estimating approach, team, and timeline); defining the program (e.g., technical baseline description); and determining the estimating structure (e.g., work breakdown structure). According to best practices, these initial steps in the cost estimating process are of the utmost importance, and should be fully completed in order for the estimate to be considered valid and reliable. OPM officials stated that they intend to complete a modernization program cost estimate by July 2009. However, the agency has not yet fully completed initial steps for developing the new estimate. Specifically, the agency has not yet fully defined the estimate’s purpose, developed the estimating plan, defined program characteristics in a technical baseline description, or determined the estimating structure. With respect to the estimate’s purpose, agency officials stated that the estimate will inform the budget justification of RetireEZ for fiscal year 2011 and beyond. However, the agency has not clearly defined the scope or level of detail of the estimate. Regarding the estimating plan, agency officials stated that they have created a timeline to complete the estimate by July 2009. However, the agency has not documented an estimating plan that includes the approach and resources required to complete the estimate in the time period identified. 
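The estimating-structure step described above, a work breakdown structure paired with a dictionary defining each element, can be illustrated with a small sketch. The element names, definitions, and dollar figures below are entirely hypothetical and are not drawn from OPM's actual program; the sketch only shows how a WBS and its dictionary fit together.

```python
# Hypothetical work breakdown structure: the program decomposed into
# costed elements (figures in $M, invented for illustration).
wbs = {
    "1.0 Retirement Modernization": {
        "1.1 Requirements development": 2.0,
        "1.2 System acquisition": 15.0,
        "1.3 Records conversion": 5.0,
        "1.4 Testing and deployment": 3.0,
    }
}

# The WBS dictionary clearly defines what each element covers -- the
# companion artifact a cost estimate needs alongside the structure itself.
wbs_dictionary = {
    "1.1 Requirements development": "Elicit, document, and baseline system requirements.",
    "1.2 System acquisition": "Acquire and configure the benefits technology solution.",
    "1.3 Records conversion": "Convert paper retirement records to electronic files.",
    "1.4 Testing and deployment": "Plan and execute system tests; deploy to production.",
}

# Rolling up the elements yields the program-level estimate.
total = sum(wbs["1.0 Retirement Modernization"].values())
print(total)  # 25.0
```

A dictionary entry for every costed element is what makes the estimate auditable: each number can be traced to a defined scope of work.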
With respect to the technical baseline description, agency officials stated that they are in the advanced stages of developing a request for information and a concept of operations that will serve as the basis for a technical baseline description. These documents are expected to be reviewed for approval in April 2009. Regarding the estimating structure, the agency has developed a work breakdown structure that identifies elements of the program to be estimated. However, the agency has not yet developed a work breakdown structure dictionary that clearly defines each element. Weaknesses in the reliability of OPM’s retirement modernization cost estimate have been long-standing. We first reported on the agency’s lack of a reliable cost estimate in January 2008 when we noted that critical activities, including documentation of a technical baseline description, had not been performed, and we recommended that the agency revise the estimate. Although OPM agreed to produce a reliable program cost estimate, the agency has not yet done so. Until OPM fully completes each of the steps, the agency increases the risk that it will produce an unreliable estimate and will not have a sound basis for measuring program performance and formulating retirement modernization program budgets. OMB and OPM policies require major IT programs to use EVM to measure and report program progress. EVM is a tool for measuring program progress by comparing the value of work accomplished with the amount of work expected to be accomplished. Such a comparison permits actual performance to be evaluated, based on variances from the planned cost and schedule, and future performance to be forecasted. Identification of significant variances and analysis of their causes helps program managers determine the need for corrective actions. Before EVM analysis can be reliably performed, developing a credible cost estimate is necessary. 
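The comparison EVM performs can be sketched with the standard variance and index formulas: earned value (EV, the budgeted value of work actually accomplished) measured against planned value (PV) and actual cost (AC). The dollar figures below are hypothetical, chosen only to show how variances signal cost and schedule slippage.

```python
def evm_variances(pv: float, ev: float, ac: float) -> dict:
    """Standard earned value measures: negative variances indicate
    a program that is over cost or behind schedule."""
    return {
        "cost_variance": ev - ac,              # CV = EV - AC
        "schedule_variance": ev - pv,          # SV = EV - PV
        "cost_performance_index": ev / ac,     # CPI < 1 => over cost
        "schedule_performance_index": ev / pv, # SPI < 1 => behind schedule
    }

# Hypothetical status: $10M of work planned to date, $8M of work actually
# accomplished, at an actual cost of $9M.
status = evm_variances(pv=10.0, ev=8.0, ac=9.0)
print(status["cost_variance"])      # -1.0 (over cost)
print(status["schedule_variance"])  # -2.0 (behind schedule)
```

Significant negative variances like these are what trigger the corrective-action analysis the report describes; the calculations are only as reliable as the baseline (cost estimate, WBS, and schedule) they are measured against.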
In addition to developing a cost estimate, an integrated baseline review must be conducted to validate a performance measurement baseline and attain agreement of program stakeholders (e.g., agency and contractor officials) before reliable EVM reporting can begin. The establishment of a baseline depends on the completion of a work breakdown structure, an integrated master schedule, and budgets for planned work. Although the agency plans to begin reporting on the restructured program’s progress using EVM in April 2009, the agency is not yet prepared to do so because initial steps have not been completed and are dependent on decisions about the program that have not been made. Specifically:

- the agency has not yet developed a reliable cost estimate for the program; such an estimate, which is critical for establishing reliable EVM, is not expected to be complete until July 2009;
- the agency does not plan to conduct an integrated baseline review to establish a reliable performance measurement baseline before beginning EVM reporting; and
- the work breakdown structure and integrated master schedule that agency officials report they have developed may not accurately reflect the full scope and schedule because key program documentation, such as the concept of operations, has not been completed.

This situation resembles the state of affairs that existed in January 2008, when we reported that OPM’s EVM was unreliable because an integrated baseline review had not been conducted to validate the program baseline. At that time we recommended, among other things, that the agency establish a basis for effective use of EVM by validating a program performance measurement baseline through a program-level integrated baseline review. Although the agency stated that it agreed, it did not address this recommendation.
Until the agency has developed a reliable cost estimate, performed an integrated baseline review, and validated a performance measurement baseline that reflect its program restructuring, the agency is not prepared to perform reliable EVM. Engaging in EVM reporting without first performing these fundamental steps could again render the agency’s assessment unreliable.

Well-defined and managed requirements are a cornerstone of effective system development and acquisition. According to recognized guidance, disciplined processes for developing and managing requirements can help reduce the risks of developing a system that does not meet user and operational needs. Such processes include (1) developing detailed requirements that have been derived from the organization’s concept of operations and are complete and sufficiently detailed to guide system development and (2) establishing policies and plans, including defining roles and responsibilities, for managing changes to requirements and maintaining bidirectional requirements traceability. OPM’s retirement modernization requirements processes include some, but not all, of the elements needed to effectively develop and manage requirements. The agency began an effort to better develop its retirement modernization requirements in November 2008. This effort was in response to the agency’s recognition that its over 1,400 requirements lacked sufficient detail, were incomplete, and required further development. The agency intends to complete this requirements development effort in April 2009. However, the requirements will not be derived from OPM’s concept of operations because the agency is revising the concept of operations, expected to be completed by April 2009, to reflect the program restructuring. Further, OPM documentation indicates that the agency has not yet determined the level of detail to which requirements should be developed.
Additionally, agency officials stated that OPM is developing a requirements development process for retirement modernization. With respect to requirements management, OPM developed an organizational charter that outlined roles and responsibilities for supporting efforts to manage requirements. However, the agency does not yet have a requirements management plan. OPM’s prior experience with DBTS illustrates the importance of effective requirements development and management. According to RetireEZ program officials, insufficiently detailed requirements, poorly controlled requirements changes, and inadequate requirements traceability were factors that contributed to DBTS not performing as expected. Moreover, these requirements development and management weaknesses were identified, and recommendations for improvement were made by OPM’s independent verification and validation contractor before DBTS deployment. However, the agency has not yet corrected these weaknesses. Until OPM fully establishes requirements development and management processes, the agency increases the risk that it will (1) identify requirements that are neither complete nor sufficiently detailed and (2) not effectively manage requirements changes or maintain bidirectional traceability, thus further increasing agency risk that it will produce a system that does not meet user and operational needs. Effective testing is an essential component of any program that includes developing systems. Generally, the purpose of testing is to identify defects or problems in meeting defined system requirements and satisfying user needs. To be effectively managed, testing should be planned and conducted in a structured and disciplined fashion that adheres to recognized guidance and is coordinated with the requirements development process. Beginning the test planning process in the early stages of a program life cycle can reduce rework later in the program. 
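The bidirectional requirements traceability discussed above can be sketched as a pair of mappings kept in sync: each requirement traces forward to the artifacts (design elements, test cases) that implement it, and each artifact traces back to its source requirements. The requirement and test identifiers below are hypothetical, invented only to illustrate the mechanism.

```python
from collections import defaultdict

class TraceabilityMatrix:
    """Minimal sketch of bidirectional requirements traceability."""

    def __init__(self):
        self.forward = defaultdict(set)   # requirement -> implementing artifacts
        self.backward = defaultdict(set)  # artifact -> source requirements

    def link(self, requirement: str, artifact: str) -> None:
        # Recording both directions at once keeps the two views consistent.
        self.forward[requirement].add(artifact)
        self.backward[artifact].add(requirement)

    def untraced_requirements(self, requirements) -> set:
        """Requirements with no implementing artifact -- a coverage gap."""
        return {r for r in requirements if not self.forward[r]}

# Hypothetical identifiers, for illustration only.
matrix = TraceabilityMatrix()
matrix.link("REQ-001: calculate CSRS annuity", "TEST-CSRS-01")
matrix.link("REQ-002: calculate FERS annuity", "TEST-FERS-01")
print(matrix.untraced_requirements(
    {"REQ-001: calculate CSRS annuity", "REQ-003: model survivor benefits"}))
```

The backward mapping is what lets a change to one artifact be assessed for impact on every requirement it touches, which is the control the report notes was inadequate for DBTS.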
Early test planning in coordination with requirements development can provide major benefits. For example, planning for test activities during the development of requirements may reduce the number of defects identified later and the costs related to requirements rework or change requests. Further, planning test activities early in a program's life cycle can inform requests for proposals and help communicate testing expectations to potential vendors. OPM has not begun to plan test activities in coordination with developing its requirements for the RetireEZ program. According to OPM officials, the agency intends to begin its test planning by revising the previously developed DBTS test plans after requirements have been developed. However, the agency has not yet added test planning to its project schedule. Early test planning is especially important to avoid repeating the agency's experience during DBTS testing, when it identified more defects than it could resolve before system deployment. In January 2008, we reported that an unexpectedly high number of defects had been identified during testing, yet the deployment schedule had not been adjusted, increasing the risk that defects needing correction would not be resolved before DBTS was deployed. According to RetireEZ program officials, the failure to fully address these defects contributed to the limited number of federal employees who were successfully processed by the system when it was deployed in February 2008. If it does not plan test activities early in the life cycle of RetireEZ, OPM increases the risk that it will again deploy a system that does not satisfy user expectations and meet requirements (i.e., accurately calculate retirement benefits) because it may again face more defects than it can resolve.
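The risk described above, discovering defects faster than they can be resolved against a fixed deployment date, can be made visible with a simple backlog check against an agreed exit criterion. This is an illustrative sketch with invented weekly counts, not DBTS's actual test data:

```python
# Illustrative sketch only: the weekly defect counts below are invented and
# are not DBTS's actual test data.

def open_defect_backlog(discovered, resolved):
    """Cumulative defects discovered minus cumulative defects resolved."""
    return sum(discovered) - sum(resolved)

def ready_to_deploy(discovered, resolved, max_open_allowed=0):
    """A minimal test-exit criterion: deploy only when the open-defect
    backlog is within an agreed threshold."""
    return open_defect_backlog(discovered, resolved) <= max_open_allowed

# Weekly counts for a hypothetical test cycle in which discovery keeps
# climbing while resolution capacity stays roughly flat.
discovered = [40, 55, 70, 65]
resolved = [25, 30, 35, 40]

print(open_defect_backlog(discovered, resolved))   # 100
print(ready_to_deploy(discovered, resolved, 10))   # False
```

Tracking the two curves from the start of testing, rather than reacting to user complaints after deployment, is what makes it possible to slip a schedule or add resolution capacity before the backlog becomes unmanageable.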
Moreover, early test planning could better inform the criteria used to develop requests for proposals and communicate testing expectations to potential vendors.

GAO and OMB guidance calls for agencies to ensure effective oversight of IT projects throughout all life-cycle phases. Critical to effective oversight are investment management boards made up of key executives who regularly track the progress of IT projects such as system acquisitions or modernizations. These boards should maintain adequate oversight and track project performance and progress toward predefined cost and schedule goals, as well as monitor project benefits and exposure to risk. Another element of effective IT oversight is employing early warning systems that enable management boards to take corrective actions at the first sign of cost, schedule, and performance slippages. OPM's Investment Review Board was established to ensure that major investments are on track by reviewing their progress and determining appropriate actions when investments encounter challenges. Despite meeting regularly and being provided with information that indicated problems with the retirement modernization, the board did not ensure that the investment was on track, nor did it determine appropriate actions for course correction when needed. For example, from January 2007 to August 2008, the board met and was presented with reports that described problems the retirement modernization program was facing, such as the lack of an integrated master schedule and earned value data that did not reflect the “reality or current status” of the program. However, meeting minutes indicate that no discussion or action was taken to address these problems. According to a member of the board, OPM has not established guidance regarding how the board is to communicate recommendations and corrective actions, when needed, for the investments it is responsible for overseeing.
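For context, the earned value data presented to oversight boards such as OPM's rests on a few standard formulas relating planned value, earned value, and actual cost. A minimal illustration with hypothetical dollar figures:

```python
# Illustrative sketch only: all dollar figures are hypothetical.

def evm_metrics(pv, ev, ac):
    """Standard earned value formulas.
    pv: planned value (budgeted cost of work scheduled)
    ev: earned value  (budgeted cost of work performed)
    ac: actual cost   (actual cost of work performed)"""
    return {
        "cost_variance": ev - ac,      # negative => spending more than earned
        "schedule_variance": ev - pv,  # negative => behind the planned schedule
        "cpi": ev / ac,                # cost performance index; < 1.0 is over cost
        "spi": ev / pv,                # schedule performance index; < 1.0 is behind
    }

m = evm_metrics(pv=10_000_000, ev=8_000_000, ac=12_000_000)
print(m["cost_variance"])   # -4000000
print(round(m["cpi"], 2))   # 0.67
print(round(m["spi"], 2))   # 0.8
```

These indices are only as meaningful as the performance measurement baseline they are computed against, which is why a validated baseline and an integrated baseline review are treated as prerequisites for reliable EVM reporting.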
In addition, OPM established an Executive Steering Committee to oversee retirement modernization. According to its charter, the committee is to provide strategic direction, oversight, and issue resolution to ensure that the program maintains alignment with the mission, goals, and objectives of the agency and is supported with required resources and expertise. However, the committee was inactive for most of 2008 and, consequently, did not exercise oversight of the program during a crucial period in its development. For example, from January 2008 until October 2008, the committee discontinued its formal meetings, and as a result, it was not involved in key program decisions, including the deployment of DBTS. Further, a member of the committee noted that OPM guidance for making recommendations and taking corrective actions also has not been provided. The ineffectiveness of the board and the inactivity of the committee allowed program management weaknesses in the areas of cost estimation, EVM, requirements management, and testing to persist, and they raise concerns about OPM's ability to provide meaningful oversight as the agency proceeds with its retirement modernization. Without fully functioning oversight bodies, OPM cannot monitor modernization activities and make the course corrections that effective boards and committees are intended to provide.

OPM's retirement modernization initiative is in transition from a program that was highly dependent on the success of a major contract that no longer exists to a restructured program that has yet to be fully defined. Although the agency has been able to partially implement a database of retirement information and improvements to customer service, it remains far from implementing six other key capabilities.
Recognizing that much work remains, OPM has undertaken steps to restructure the retirement modernization program, but it has not yet produced a complete description of its planned program, including fundamental information about the program's scope, implementation strategy, lines of responsibility and authority, management processes, and schedule. Further, OPM's retirement modernization program restructuring does not yet include definitions of results-oriented goals and measures against which program performance can be objectively and quantitatively assessed. In addition, OPM has not overcome managerial shortcomings in key areas of program management, including areas on which we have previously reported. Specifically, the agency is not yet positioned to develop a reliable program cost estimate or perform reliable EVM, both of which are critical to effective program planning and oversight. Nor has OPM overcome weaknesses in its management of system testing and defects, two activities that proved problematic as the agency was preparing to deploy the RetireEZ system that subsequently was terminated. Adding to these long-standing concerns are weaknesses in OPM's process to effectively develop and manage requirements for whatever system or service it intends to acquire or develop. Finally, these weaknesses have been allowed to persist by entities within the agency that were ineffective in overseeing the retirement modernization program. As a consequence, the agency is faced with significant challenges on two fronts: defining and transitioning to its restructured program, and addressing new and previously identified managerial weaknesses. Until OPM addresses these weaknesses, many of which were previously identified by GAO and others, the agency's retirement modernization initiative remains at risk of failure.
Institutionalizing effective planning and management is critical not only for the success of this initiative, but also for that of other modernization efforts within the agency. To improve OPM's effort toward planning and implementing its retirement modernization program by addressing management weaknesses, we recommend that the Director of the Office of Personnel Management provide immediate attention to ensure the following six actions are taken:

1. Develop a complete plan for the restructured program that defines the scope, implementation strategy, lines of responsibility and authority, management processes, and schedule. Further, the plan should establish results-oriented (i.e., objective, quantifiable, and measurable) goals and associated performance measures for the program.

2. Develop a reliable cost estimate by following the best practice steps outlined in our Cost Estimating and Assessment Guide, including definition of the estimate's purpose, development of an estimating plan, definition of the program's characteristics, and determination of the estimating structure.

3. Establish a basis for reliable EVM, when appropriate, by developing a reliable program cost estimate, performing an integrated baseline review, and validating a performance measurement baseline that reflects the program restructuring.

4. Develop a requirements management plan and execute processes described in the plan to develop retirement modernization requirements in accordance with recognized guidance.

5. Begin RetireEZ test planning activities early in the life cycle.

6. Develop policies and procedures that would establish meaningful program oversight and require appropriate action to address management deficiencies.

The Director of the Office of Personnel Management provided written comments on a draft of this report. (The comments are reproduced in app. II.) In the comments, OPM agreed with our recommendations and stated that it had begun to address them.
To this end, the Director stated that the agency had, among other actions, begun revising its retirement modernization plans, developing a new program cost estimate, planning for accurate EVM reporting, incorporating recognized guidance in requirements management planning, and planning test activities during requirements development. If the recommendations are properly implemented, they should better position OPM to effectively manage its retirement modernization initiative. The agency also provided comments on the draft report regarding our description of the federal retirement application process, as well as our characterizations of OPM’s EVM and requirements management capabilities vis-à-vis the retirement modernization program. In each of these instances, we made revisions as appropriate. We are sending copies of this report to the Director of the Office of Personnel Management, appropriate congressional committees, and other interested parties. In addition, the report is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have questions about this report, please contact me at (202) 512-6304 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III. As requested, the objectives of our study were to (1) assess the status of the Office of Personnel Management’s (OPM) efforts toward planning and implementing the RetireEZ program and (2) evaluate the effectiveness of the agency’s management of the modernization initiative. 
To assess the status of OPM's efforts toward planning and implementing the RetireEZ program, we

- reviewed and analyzed program documentation, including program management plans, briefing slides, and project status documentation, to identify planned retirement modernization capabilities and determine to what extent these capabilities have been implemented;
- evaluated the agency's documentation about restructuring the program and analyzed the extent to which the documentation describes current and planned RetireEZ program activities;
- identified and evaluated the agency's program goals and measures and compared them to relevant guidance to determine the extent to which the goals and measures are described in results-oriented terms;
- supplemented agency program documentation and our analyses by interviewing agency and contractor officials, including the OPM Director, Chief Information Officer, Chief Financial Officer, Director of Modernization, Associate Director for Human Resources Products and Services Division, and executives from Hewitt Associates and Northrop Grumman Corporation; and
- observed retirement operations and ongoing modernization activities at OPM and contractor facilities in Washington, D.C.; Boyers, Pennsylvania; and Herndon, Virginia.

To determine the effectiveness of OPM's management of the retirement modernization initiative, we evaluated the agency's management of program cost estimating, earned value management (EVM), requirements, test planning, and oversight and compared the agency's work in each area with recognized best practices and guidance.
Specifically:

- To evaluate whether OPM effectively developed a reliable program cost estimate, we analyzed the agency's program documentation and determined to what extent the agency had completed key activities described in our Cost Estimating and Assessment Guide.
- To assess OPM's implementation of EVM, we reviewed program progress reporting documentation and compared the agency's plans for restarting its EVM-based progress reporting against relevant guidance, including our Cost Estimating and Assessment Guide.
- Regarding requirements management, we evaluated OPM's processes for developing and managing retirement systems modernization requirements and compared the effectiveness of those processes against recognized guidance.
- To determine the effectiveness of the agency's test planning for the retirement modernization, we reviewed program activities and test plans against best practices and evaluated the extent to which the agency has begun planning for these activities.
- We reviewed and analyzed documentation from program oversight entities and evaluated the extent to which these entities took actions toward ensuring the RetireEZ program was being effectively overseen.

We also evaluated OPM's progress toward implementing our open recommendations and interviewed OPM and contractor officials as noted. We conducted this performance audit at OPM headquarters in Washington, D.C., the Retirement Operations Center for OPM in Boyers, Pennsylvania, and contractor facilities in Herndon, Virginia, from May 2008 through April 2009, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
In addition to the contact named above, key contributions to this report were made by Mark T. Bird, Assistant Director; Barbara S. Collier; Neil J. Doherty; David A. Hong; Thomas J. Johnson; Rebecca E. LaPaze; Lee A. McCracken; Teresa M. Neven; Melissa K. Schermerhorn; Donald A. Sebers; and John P. Smith.
|
For the past two decades, the Office of Personnel Management (OPM) has been working to modernize the paper-intensive processes and antiquated systems used to support the retirement of federal employees. By moving to an automated system, OPM intends to improve the program's efficiency and effectiveness. In January 2008, GAO recommended that the agency address risks to successful system deployment. Nevertheless, OPM deployed a limited initial version of the modernized system in February 2008. After unsuccessful efforts to address system quality issues, OPM suspended system operation, terminated a major contract, and began restructuring the modernization effort, also referred to as RetireEZ. For this study, GAO was asked to (1) assess the status of OPM's efforts to plan and implement the RetireEZ program and (2) evaluate the effectiveness of the agency's management of the modernization initiative. To do this, GAO reviewed OPM program documentation and interviewed agency and contractor officials. OPM remains far from achieving the modernized capabilities it had planned. Specifically, the agency has partially implemented two of eight planned capabilities: (1) an integrated database of retirement information accessible to OPM and agency retirement processing personnel and (2) enhanced customer service capabilities that support customer needs and provide self-service tools. However, the remaining six capabilities have yet to be implemented because they depended on deliverables that were to be provided by a contract that is now terminated. Examples of these missing capabilities include: (1) automated submission of retirement information through interfaces with federal agencies and (2) Web-accessible self-service retirement information for active and retired federal employees. Further, OPM has not yet developed a complete plan that describes how the program is to proceed without the system that was to be provided under the terminated contract. 
Although agency documents describe program implementation activities, they do not include a definition of the program, its scope, lines of responsibility and authority, management processes, and a schedule. Also, modernization program documentation does not describe results-oriented performance goals and measures. Until the agency completes and uses a plan that includes all of the above elements to guide its efforts, it will not be properly positioned to move forward with its restructured retirement modernization initiative. Further, OPM has significant weaknesses in five key management areas that are vital for effective development and implementation of its modernization program: cost estimating, earned value management (a recognized means for measuring program progress), requirements management, testing, and oversight. For example, the agency has not developed a cost estimating plan or established a performance measurement baseline--prerequisites for effective cost estimating and earned value management. Further, although OPM is revising its previously developed system requirements, it has not established processes and plans to guide this work or addressed test activities even though developing processes and plans, as well as planning test activities early in the life cycle, are recognized best practices for effective requirements development and testing. Finally, although OPM's Executive Steering Committee and Investment Review Board have recently become more active regarding RetireEZ, these bodies did not exercise effective oversight in the past, which allowed the aforementioned management weaknesses to persist; moreover, OPM has not established guidance regarding how these entities are to intervene when corrective actions are needed. Until OPM addresses these weaknesses, many of which GAO and others made recommendations to correct, the agency's retirement modernization initiative remains at risk of failure.
Institutionalizing effective management is critical not only for the success of this initiative, but also for that of other modernization efforts within the agency.
|
CHCS is a comprehensive medical information system that Defense has developed to provide automated support to its military medical treatment facilities. As shown in figure 1, the system is multi-faceted and complex, composed of nine integrated modules and shared capabilities, such as order-entry, results retrieval, and electronic mail. The modules are used to create and update the integrated patient database, which can be accessed by all authorized users. We describe the CHCS shared capabilities and modules in more detail in appendix II. CHCS supports high-volume workloads generated by numerous physicians and other health care professionals using the system simultaneously and enhances communications within and among medical treatment facilities. In acquiring CHCS, Defense awarded a contract to Science Applications International Corporation (SAIC), in March 1988, to design, develop, deploy, and maintain CHCS. This contract recently completed its eighth and last year and ended on February 29, 1996. CHCS has become an important part of Defense’s inpatient and outpatient medical operations. From the time a patient is admitted into a medical facility to the time of discharge, CHCS records information on the patient’s condition and treatment and makes it available to physicians, nurses, and technicians. For example, CHCS establishes a medical record as a new patient registers at the facility. As the results of tests that physicians order (as well as other patient information) are entered into CHCS, they become immediately available for medical care decisions. Further, if medication is prescribed, CHCS, in processing the prescription, checks it against the patient’s medical record for potentially dangerous medical interactions. CHCS is also integral to Defense’s implementation of Tricare, its nationwide managed health care program. Defense’s goals for the Tricare program are to improve access to high-quality care while containing the growth of health care costs. 
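The prescription check described above, which screens a new order against the patient's recorded allergies and current medications, can be sketched in miniature. The drug names, interaction table, and record layout here are invented for illustration and do not represent CHCS's actual design:

```python
# Toy illustration only: drug names, the interaction table, and the record
# layout are invented for this sketch and do not represent CHCS's design.

KNOWN_INTERACTIONS = {frozenset({"warfarin", "aspirin"})}  # hypothetical table

def check_prescription(patient_record, new_drug):
    """Screen a new order against recorded allergies and current medications."""
    warnings = []
    if new_drug in patient_record["allergies"]:
        warnings.append(f"allergy alert: patient is allergic to {new_drug}")
    for current in patient_record["medications"]:
        if frozenset({current, new_drug}) in KNOWN_INTERACTIONS:
            warnings.append(f"interaction alert: {new_drug} with {current}")
    return warnings

record = {"allergies": {"penicillin"}, "medications": {"warfarin"}}
print(check_prescription(record, "aspirin"))
# ['interaction alert: aspirin with warfarin']
```

The point of automating this step at order entry is that the check runs against the complete, current record every time, rather than depending on whoever happens to have the paper chart in hand.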
Tricare, which is being implemented over a 3-year period, calls for coordinating and managing care on a regional basis using all available military hospitals and clinics supplemented by contracted civilian services. The Managed Care Program submodule of CHCS is the application through which active duty members and beneficiaries choosing the health maintenance organization option will be enrolled in Tricare. Tricare managers will use CHCS to assign enrolled beneficiaries to primary care providers from either the military medical treatment facility or the civilian provider network. CHCS will also assist Tricare managers in maintaining the provider network and scheduling appointments with military and/or civilian network primary care providers and specialists. Finally, CHCS is critical to measuring Tricare’s success because it enables managers to track enrollment and disenrollment in Tricare. To assess Defense’s actions relating to CHCS deployment and operations, we met with program officials at the Office of the Assistant Secretary of Defense for Health Affairs and CHCS program officials at Defense, as well as contractor officials at the following eight medical treatment facilities: Walter Reed Army Medical Center, Washington, D.C.; National Naval Medical Center Bethesda, Maryland; 89th Medical Group, Andrews Air Force Base (AFB), Maryland; 20th Medical Group, Shaw AFB, South Carolina; Moncrief Army Community Hospital, Ft. Jackson, South Carolina; Naval Medical Center Portsmouth, Virginia; 1st Medical Group, Langley AFB, Virginia; and McDonald Army Community Hospital, Ft. Eustis, Virginia. We also contacted CHCS program officials by telephone and mail at the following nine CHCS medical treatment facilities: Naval Hospital Great Lakes, Illinois; Darnall Army Community Hospital, Ft. Hood, Texas; Tripler Army Medical Center, Honolulu, Hawaii; Eisenhower Army Medical Center, Ft. Gordon, Georgia; Blanchfield Army Community Hospital, Ft. 
Campbell, Kentucky; 59th Medical Wing, Lackland AFB, Texas; 96th Medical Group, Eglin AFB, Florida; 81st Medical Group, Keesler AFB, Mississippi; and 82nd Medical Group, Sheppard AFB, Texas. To assess Defense’s continuing efforts to address past problems, we examined (1) Defense’s August 1994 Performance Management Plan Version 5.0, (2) Defense’s August 1993 deployment plan, Implementation and Use of the CHCS, (3) Defense deployment schedules through October 4, 1995, (4) monthly progress reports provided to Defense by the CHCS contractor through December 1995, (5) Defense’s May 1995 report on VAX/PC system sizing algorithms, (6) Defense’s June 1995 report on high-end system sizing algorithms, and (7) Defense’s July 1995 report on the high-end computing platform for CHCS. We met with CHCS users to ascertain their use of and satisfaction with CHCS, and to observe CHCS in operation. In addition, we reviewed Defense documentation relating to the results of CHCS operational tests. We also received formal briefings from Defense on projects and programs related to CHCS, such as Defense’s Clinical Integrated Workstation project, Defense’s managed health care program, CHCS’ Benefits Realization Improvement Program, and Defense’s Pacific Medical Network project. We worked closely with and briefed senior CHCS program officials at Defense to discuss our concerns as they arose and to confirm our understanding of potential problems and their implications for the achievement of CHCS objectives. We requested written comments from the Secretary of Defense. They were provided by the Assistant Secretary of Defense for Health Affairs and are incorporated as appendix I. At the end of 1995, Defense completed deployment of CHCS to 526 of its 815 medical treatment facilities worldwide. CHCS deployment involved the installation of computer equipment and software to carry out CHCS outpatient and inpatient functions. 
Given the complexity of the design and development of CHCS and the number of facilities involved, this was not an easy task. Key to the successful development and deployment of CHCS has been the leadership provided by the Deputy Assistant Secretary of Defense for Health Services Operations and Readiness and the CHCS program manager and their application of a set of fundamental information management practices that we refer to as best practices. With worldwide deployment, Defense can realize the full benefits of CHCS, such as the time savings associated with physicians having immediate and facility-wide access to patient information. Instrumental to the successful development and deployment of CHCS worldwide has been Defense’s application of some of the best practices of leading private and public organizations for strategic information management. For example, it has been shown that the involvement and commitment of line management are crucial to making information management decisions and implementing projects. Over the past 5 years, the Deputy Assistant Secretary of Defense for Health Services Operations and Readiness, as the chief executive for the CHCS project, obtained such line management involvement and commitment by (1) promoting tri-service (Army, Navy, and Air Force) representation within Defense’s CHCS Program Office, and (2) engaging the support of the military department surgeon general organizations, which oversee Defense’s medical treatment facilities. The Deputy Assistant Secretary also appointed an experienced and knowledgeable CHCS program manager, who was instrumental in (1) sustaining program momentum, (2) ensuring that CHCS was developed and tested in increments, thereby mitigating the impact of large-scale software development problems, and (3) instituting a set of performance measures relating to hospital operations and medical outcomes to help guide overall program direction. 
Successful organizations also manage information systems as investments rather than expenses. Two key attributes are: (1) linking information system decisions tightly to program budget decisions and focusing them on mission improvement, and (2) using a disciplined process of postimplementation reviews—based on explicit decision criteria and quantifiable measures assessing mission benefits, risk, and cost—to select, control, and evaluate information systems projects. Defense has issued policies implementing the above two attributes. Also, the CHCS program has consistently followed these policies, which require the continuous involvement of senior Defense program, financial, and information resources management officials. For example, in order to proceed into the various system development phases (analysis, design, programming, testing, validation, and implementation), the CHCS program manager had to submit justification to and obtain approval from Defense’s Major Automated Information Systems Review Council. This justification, which included documentation, such as a functional economic analysis, served as (1) a record of system approval by senior Defense officials and (2) input to Defense’s planning, programming, and budgeting process. Finally, successful organizations have competent line and information management professionals, and ensure that their skills and knowledge are kept current. For example, both the CHCS program and deputy program managers were required to complete the comprehensive, advanced program management training offered by the Defense Systems Management College (DSMC). They also must remain current in their clinical areas by satisfying necessary continuing professional education requirements. Defense currently projects total benefits of $4.1 billion to be derived from using CHCS. This amount exceeds Defense’s $2.8 billion estimated system life-cycle cost by $1.3 billion.
Of the total benefits amount, 83 percent represents savings attributed to increased productivity and direct cost offsets. Productivity increases would come from improved scheduling and improved access to patient information. For example, under Defense’s prior paper-based systems, physicians would order tests on paper and the results would be maintained in a patient’s paper medical file. Physicians and other health-care providers would then have to search for either the medical file or some item that was expected to be in the file. With CHCS, this information is now entered directly into the computer and is available to every authorized system user. Health-care providers can review the test results as soon as they are entered into the computer, without having to search through paper documents, thus saving staff time. Similarly, the patient saves time, as fewer visits are unproductive due to missing information. Direct offsets include dollar savings derived from not operating the paper-based systems used prior to CHCS and from expected decreases in malpractice claims. For example, CHCS users and officials told us that because the automated CHCS records contain complete information on the patient’s allergies and medications, fewer incidents of adverse patient reactions to drugs are expected. In the past 4 years, we have issued several reports identifying problems associated with CHCS design and implementation, such as Defense’s lack of an acceptable method for physicians to enter inpatient orders into CHCS and weaknesses or deficiencies in Defense’s tools and methodology for managing CHCS performance. Defense is addressing these concerns. Defense originally envisioned that under the CHCS inpatient order-entry process, physicians would directly key in instructions to nurses and technicians for the treatment of hospitalized patients.
Defense’s intent was to eliminate the (1) costs associated with other staff entering physicians’ orders into CHCS and (2) errors in the data other staff entered because of misinterpretations of physicians’ handwriting. In September 1991, we reported that the inpatient order-entry capability in CHCS was not considered user-friendly by many physicians because entering conditional and complex orders into CHCS took much more time than writing out the orders by hand. As a result, many physicians resisted using the inpatient order-entry features of CHCS, electing to write out their orders by hand and to have other staff enter them into the system. Further, Defense deactivated the inpatient order-entry capability at all but two of its medical treatment facilities pending further development and testing. Defense has performed extensive analysis in the past 4 years to address the inpatient order-entry problem. It issued a request for proposals to solicit commercial inpatient order-entry-system solutions in February 1992. By mid-1992, it had developed basic requirements for an inpatient order-entry capability. Defense’s analysis of those requirements led it to conclude that in order to provide physicians with this capability, it needed to develop a clinically-oriented graphical user interface (GUI). Defense is currently building a prototype GUI. This prototype, once successfully completed, should enable physicians to access computer screens or windows containing icons that represent activities such as ordering or modifying patients’ prescriptions, and ordering inpatient laboratory tests. It is intended that physicians will be able to look up inpatient data, review inpatient laboratory test results, and perform many other tasks by clicking on a few icons and selecting items from a few menus. 
The GUI is being developed to enable physicians to use CHCS more efficiently, thereby reducing the possibility of errors in the system due to data-entry mistakes and reducing costs associated with having other staff enter physicians’ orders. Defense expects to complete an operational version of this GUI during 1996, as part of the Clinical Integrated Workstation project. In July 1994 we reported that the tools Defense was using at its CHCS sites to measure performance did not collect all the data it needed to detect response-time problems, diagnose their causes, and determine their significance. Defense also lacked modern performance analysis tools that would help it determine the causes of response-time problems and project the impact on response time of changes in workload and/or system configuration. In addition, we reported that Defense’s methodology for managing CHCS performance was weak. The methodology did not require routine analysis and elimination of extremely long response times that occur sporadically, but relied instead on user complaints to initiate review and resolution of such problems. At that time, we also found that Defense’s method of determining reserve CHCS capacity was unreliable and might have resulted in either excessive capacity, thereby incurring unnecessary cost, or insufficient capacity, thereby leading to unsatisfactory system performance. Since our July 1994 report, Defense has modified several existing CHCS performance measurement and analysis tools and has purchased additional ones. These tools enable Defense to measure system response times and determine which CHCS system resources (for example, memory and disk drives) are causing the response-time problems. Appendix III describes in more detail Defense’s on-going efforts to address deficiencies in its performance management tools. In addition, Defense has taken steps to strengthen its methodology for managing CHCS performance. 
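The weakness described above, a methodology that relied on user complaints to surface the sporadic, extremely long response times rather than routinely analyzing them, can be illustrated with a minimal sketch. The sample data, threshold factor, and absolute floor below are illustrative assumptions, not CHCS's actual monitoring parameters.

```python
# Sketch: routinely flag sporadically long response times from a
# transaction log instead of waiting for user complaints.
# The threshold factor and absolute floor are illustrative assumptions.
from statistics import median

def flag_outliers(response_times_ms, factor=10, floor_ms=5000):
    """Return samples that are both far above the typical (median)
    response time and above an absolute floor, for routine review."""
    typical = median(response_times_ms)
    return [t for t in response_times_ms
            if t > factor * typical and t > floor_ms]

# Mostly sub-second responses with two sporadic multi-second delays.
samples = [180, 220, 150, 30000, 210, 190, 175, 45000, 160]
print(flag_outliers(samples))  # prints: [30000, 45000]
```

Using the median as the baseline keeps the check robust: the rare extreme values being hunted for do not distort the estimate of "typical" performance the way a mean would.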
Specifically, Defense has (1) updated its performance management plan to include procedures for investigating and correcting extremely long response times and (2) improved its measures of system reserve capacity by developing performance simulation models for each CHCS computer platform that forecast computer resource capacity requirements. Defense’s current backup and recovery plan at CHCS facilities contains provisions for (1) backup copies of CHCS software and databases to be stored in other buildings, (2) critical CHCS functions to be performed manually in emergency situations, and (3) access to emergency backup generators and related equipment if power is lost. However, the plan lacks policies and procedures for the rapid repair or replacement of CHCS equipment damaged in a disaster, such as an earthquake, fire, accident, or sabotage. If the computer room housing a hospital’s CHCS hardware were heavily damaged by a disaster, users would likely suffer serious, potentially prolonged disruptions in computer service. Sound information system controls require agencies to ensure that they are adequately prepared to cope with disaster. A current, tested, and reliable backup and recovery plan is essential to ensuring that Defense can restore CHCS operations and data should disaster strike. According to Defense officials, their initial strategy with respect to recovery of CHCS equipment was reactive: to wait until a disaster struck before determining how best to repair or replace damaged equipment. They cited as justification for this stance: (1) the low probability of a serious disaster affecting CHCS that would not also affect the host hospital’s entire operations, (2) the costs associated with adopting a more proactive method, and (3) the sufficiency of reverting to manual methods during periods of CHCS downtime. We disagree with this justification. 
Regarding Defense’s first point, CHCS now operates, for the most part, in a regional environment, where a single CHCS host facility supports one or more geographically remote satellite CHCS facilities. In this regional configuration, each host maintains an automated central patient record that is accessed by satellite facilities on demand. A disruption in CHCS operations at a host facility due to a fire, for instance, which destroys the computer room (whether or not it also destroys the rest of the hospital) will disrupt operations in every satellite facility connected to that host. Concerning Defense’s second point, CHCS program office officials have recently stated that improvements in technology—better, faster, and cheaper computer equipment—may now make it possible for them to adopt a more active plan for repairing or replacing damaged CHCS hardware at a reasonable cost. Finally, with respect to Defense’s third point, health-care providers at CHCS facilities told us that they have become so dependent on the patient information in CHCS that they would experience great difficulty reverting to manual methods during an extended CHCS downtime. For example, CHCS currently provides medical treatment facilities with the capability to perform drug interaction screening, which cannot be done as effectively by a human relying on memory or reviewing paper documents as it can by the computer. We discussed our concerns with CHCS program office officials on several occasions. In recent meetings, they said they are reviewing Defense’s CHCS backup and recovery plan to address rapid repair or replacement of damaged CHCS equipment. As the backbone of Defense’s medical operations, CHCS will provide personnel with almost instant access to patient information, from medical history to current treatment and vital statistics. 
With CHCS, Defense can make significant improvements in the way its medical treatment facilities operate: It can lower the cost and improve the quality of its health care delivery, and better address the needs of its patients, physicians, nurses, and other system users. Patients’ access to health care has increased with better appointment availability through improved scheduling. Physicians and nurses have experienced time savings in the delivery of medical care with improved access to patient information. If Defense is to realize all of CHCS’ potential, however, it is critical that CHCS be available to physicians and other health care providers when needed. While Defense’s backup and recovery plan provides for recovery from disruptions in computer service due to power outages, the plan does not effectively address recovery from major disruptions requiring the repair or replacement of CHCS equipment damaged as a result of disaster. Health care providers have become dependent on the patient information in the system, so any major disruption in the availability of that information could result in injury or even loss of life. This risk would be greatly minimized if Defense had a more effective backup and recovery plan for CHCS equipment. We recommend that the Secretary of Defense direct the Assistant Secretary of Defense for Health Affairs to develop, test, and implement Defense-wide policies and procedures for the rapid repair or replacement of CHCS equipment damaged in disasters. In commenting on a draft of this report, the Department of Defense stated that it fully agreed with the report. Defense concurred with our recommendation to implement policies and procedures for the rapid repair or replacement of CHCS equipment damaged in disasters. 
Specifically, the CHCS Program Office, in coordination with the Office of the Assistant Secretary of Defense for Health Affairs, tasked a commercial vendor in January 1996 to prepare a requirements analysis and recommendations. These would enable Defense to implement policies and procedures for continuity of operations and recovery from disasters for the Military Health Services System-wide infrastructure. We are sending copies of this report to the Chairmen and Ranking Minority Members of the House and Senate Committees on Appropriations, the Secretary of Defense, and the Director of the Office of Management and Budget. Copies will also be made available to other interested parties upon request. Please contact me at (202) 512-6252 or William Franklin, Director, at (202) 512-6234 if you have any questions concerning this report. GAO has been monitoring and reporting on CHCS since August 1985. We conducted this latest evaluation from June through December 1995, in accordance with generally accepted government auditing standards. Major contributors to this report are listed in appendix IV. CHCS is composed of several shared capabilities—such as order-entry, results retrieval, and electronic mail—and nine modules. The modules provide access to an integrated electronic patient database, which facilitates collection and input of data at the point of care. This supports integration of the patient care process and immediate availability of patient information to any authorized system user. The following sections describe the CHCS shared capabilities and each CHCS module. Capabilities shared by most CHCS modules include order-entry, which allows the entry of patient orders by health-care providers and ancillary support personnel; results retrieval, which allows direct access to test results performed under any module; and electronic mail, which allows users to communicate with each other. 
The Dietetics module manages the order and delivery of patient dietary instructions. The Clinical module manages orders for patient care and the retrieval of test results. It contains checks against the patient’s medical record for risks and contraindications, and issues a warning if necessary. The Laboratory module manages data associated with clinical and anatomical pathology, and blood/chemical tests. This includes ordering tests, processing specimens, documenting test results, and supporting quality controls. The Patient Administration module manages the registration of patients and their medical records. The Patient Appointment and Scheduling module manages appointment schedules for clinics and health care providers. Its Managed Care Program submodule supports enrollment, provider network management, and health care finder activities. The Pharmacy module manages the ordering and filling of prescriptions. It checks for drug interactions and allergies, while providing an automated inventory control capability. The Radiology module manages the ordering and scheduling of diagnostic, radiologic, nuclear medicine, and radiation therapy testing as well as the reporting of test results. The Medical Records and Image Files Tracking module manages and tracks patient medical records and images. The Quality Assurance module supports the identification and documentation of recurring problems related to patient care, and tracks their solutions and resolutions. It also provides management of provider case lists and training to support the credentialing process. In our previously cited July 1994 report, we identified deficiencies in Defense’s CHCS performance management tools. These deficiencies are summarized below, along with Defense’s ongoing efforts to resolve them. First, Defense’s Performance Monitoring Tool did not use a representative sampling of CHCS functions in measuring system response time experienced by system users. 
Defense now recognizes that additional user functions need to be included in its sampling. It is currently conducting engineering analyses to determine how many additional user functions should be measured. Second, Defense’s Option Audit tool only measured system component use by option (i.e., a menu item that a user selects, such as “Enter/Maintain Lab Orders” or “Lab Order Entry/Login”), rather than at the user-function level. Defense is now modifying this tool to enable it to measure system component use at the CHCS user-function level, collect data on the frequency with which system users employ various CHCS functions, and measure system-component use for CHCS interfaces. Defense expects these modifications to be completed during 1997. Third, Defense did not have adequate tools for the PC-CHCS UNIX platform. It has since modified the Performance Monitoring Tool and Option Audit to support performance monitoring and analysis of PC-CHCS systems. In addition, CHCS performance engineering staff evaluated five commercial-off-the-shelf UNIX performance measurement tools, and recommended obtaining two of them: Olympus TuneUp for site-level performance monitoring and analysis and Stallion Technology Monitor for evaluation and analysis of the performance impact of changes to CHCS software. Last, we reported that Defense did not have adequate modeling tools for its CHCS systems. It has since acquired the SES Workbench simulation modeling tool, and developed performance simulation models for all CHCS configurations, including the VAX, Alpha, and PC systems. These simulation models allow Defense to project the impact of workload growth and system configuration changes on response times. Defense recently used one of the models to project the impact of the CHCS software version 4.4 upgrade on system response time at CHCS facilities. According to Defense, the changes to response time predicted by the model were close to the actual changes resulting from the upgrade. 
Defense Health Care: Issues and Challenges Confronting Military Medicine (GAO/HEHS-95-104, March 22, 1995). Defense’s Composite Health Care System: Background Briefing for the Staff of the Senate Committee on Appropriations, Subcommittee on Defense (March 17, 1995). Defense’s Composite Health Care System: Background Briefing for the Staff of the Senate Committee on Armed Services, Subcommittee on Force Requirements and Personnel (February 14, 1995). Defense’s Composite Health Care System: Background Briefing for the Staff of the House Committee on National Security, Subcommittee on Military Personnel (February 14, 1995). Medical ADP Systems: Defense’s Tools and Methodology for Managing CHCS Performance Need Strengthening (GAO/AIMD-94-61, July 15, 1994). Composite Health Care System: Outpatient Capability Is Nearly Ready for Worldwide Deployment (GAO/IMTEC-93-11, December 15, 1992). Medical ADP Systems: Composite Health Care System Is Not Ready To Be Deployed (GAO/IMTEC-92-54, May 20, 1992). Medical ADP Systems: Changes in Composite Health Care System’s Deployment Strategy Are Unwise (GAO/IMTEC-91-47, September 30, 1991). Medical ADP Systems: Composite Health Care System: Defense Faces a Difficult Task (GAO/IMTEC-90-42, March 15, 1990). Defense’s Acquisition of the Composite Health Care System (GAO/T-IMTEC-90-04, March 15, 1990). Medical ADP Systems: Composite Health Care System Operational Tests Extended (GAO/IMTEC-89-30, April 10, 1989). Medical ADP Systems: Analysis of Technical Aspects of DOD’s Composite Health Care System (GAO/IMTEC-88-27, July 11, 1988). Medical ADP Systems: Composite Health Care System Acquisition—Fair, Reasonable, and Supported (GAO/IMTEC-88-26, March 4, 1988). Medical ADP Systems: Composite Health Care System Operational Test and Evaluation Costs (GAO/IMTEC-88-18BR, January 28, 1988). ADP Systems: Concerns About DOD’s Composite Health Care System Development Contracts (GAO/IMTEC-87-25, June 8, 1987). 
ADP Systems: Concerns About the Acquisition Plan for DOD’s Composite Health Care System (GAO/IMTEC-86-12, March 31, 1986).
Pursuant to a legislative requirement, GAO reviewed the Department of Defense's (DOD) Composite Health Care System (CHCS), focusing on: (1) DOD efforts to complete deployment of CHCS to military medical treatment facilities worldwide; (2) DOD efforts to address previously identified problems; and (3) a new CHCS operational issue. GAO found that: (1) DOD completed deployment of CHCS to 526 medical treatment facilities worldwide, which was difficult because of the system's complexity and the number of sites involved; (2) two DOD officials ensured the deployment's success by providing leadership and using fundamental information management practices; (3) DOD expects CHCS benefits to exceed its costs by $1.3 billion over the system's expected life; (4) CHCS should improve scheduling, give greater and quicker access to patient information, and increase the timeliness of medical care; (5) DOD has made progress in addressing its two previously identified problems by developing a prototype clinically oriented graphical user interface to make patient order-entry less cumbersome and strengthening the tools and methodology needed to manage CHCS performance; (6) DOD has updated its CHCS performance management plan and developed performance simulation models for each CHCS computer platform; (7) the lack of an effective plan for rapidly repairing or replacing CHCS equipment damaged by disaster remains a problem; and (8) DOD did not address this problem because of cost concerns and a lack of accurate information, but it is reconsidering its options for providing adequate equipment backup.
With over 235,000 employees, including physicians, nurses, counselors, statisticians, computer specialists, architects, and attorneys, VA is the second largest federal department. It carries out its mission through three agency organizations—Veterans Health Administration (VHA), Veterans Benefits Administration (VBA), and National Cemetery Administration—and field facilities throughout the United States. The department provides services and benefits through a nationwide network of 156 hospitals, 877 outpatient clinics, 136 nursing homes, 43 residential rehabilitation treatment programs, 207 readjustment counseling centers, 57 veterans’ benefits regional offices, and 122 national cemeteries. In carrying out its mission, the department depends on IT and telecommunications systems, which process and store sensitive information, including personal information on veterans. Information security is a critical consideration for any organization that depends on information systems and networks to carry out its mission or business. It is especially important for government agencies, where maintaining the public’s trust is essential. The dramatic expansion in computer interconnectivity and the expanding use of mobile devices and storage media are changing the way our government, the nation, and much of the world share information and conduct business. Without proper safeguards, enormous risk exists that systems, mobile devices, and information are exposed to potential data tampering, disruptions in critical operations, fraud, and the inappropriate disclosure of sensitive information. Recognizing the importance of securing federal systems and data, Congress passed the Federal Information Security Management Act (FISMA) in December 2002, which permanently authorized and strengthened the information security program, evaluation, and reporting requirements established by earlier legislation (commonly known as GISRA, the Government Information Security Reform Act). 
FISMA sets forth a comprehensive framework for ensuring the effectiveness of information security controls over information resources that support federal operations and assets. The act requires each agency to develop, document, and implement an agencywide information security program for the data and systems that support the operations and assets of the agency, using a risk-based approach to information security management. According to FISMA, the head of each agency has responsibility for delegating to the agency chief information officer (CIO) the authority to ensure compliance with the security requirements in the act. To carry out the CIO’s responsibilities in the area, a senior agency official is to be designated chief information security officer (CISO). In June 2002, we reported that VA had not completed actions to strengthen its security management program, ensure compliance with security policies and procedures, and ensure accountability for information security throughout the department. We made four recommendations to VA: (1) complete a comprehensive security management program that included actions related to central security management functions, risk assessments, security policies and procedures, security awareness, and monitoring and evaluating computer controls; (2) develop a process for managing the department’s updated security plan to remediate identified weaknesses; (3) regularly report to the Secretary, or his designee, on progress in implementing VA’s security plan; and (4) ensure consistent use of information security performance standards when appraising the department’s senior executives. Since our report in 2002, VA’s IG has made additional recommendations addressing serious weaknesses within the department’s information security controls. 
In March 2005, the VA IG reported that the department had not appropriately restricted access to data, ensured that only authorized changes were made to computer programs, ensured that backup and recovery plans were adequate to ensure the continuity of essential operations, and moved the VA Central Office data center to a more appropriate location. The IG made a number of recommendations to the department to secure patient information and data over VA networks, improve application and operating system change controls, test continuity of operations plans at national data centers, and complete the move of the VA Central Office data center. In its annual FISMA report for fiscal year 2005, issued in September 2006, the IG carried forward all the recommendations from its prior years’ FISMA audits. It made recommendations in 17 areas to address all FISMA related findings for the fiscal year. On May 3, 2006, the home of a VA employee was burglarized, resulting in the theft of a personally owned laptop computer and external hard drive that contained personal information on approximately 26.5 million veterans and U.S. military personnel. The external hard drive was not encrypted or password protected. The Secretary of VA was notified of the theft on May 16, 2006, and Congress and veterans were notified on May 22, 2006. Notification letters were sent to all veterans, and VA announced that free credit monitoring services would be offered. A number of congressional hearings were held and bills introduced related to the protection of veterans’ privacy and identity. During this time period, many veteran service organizations expressed concerns to Congress as to whether VA was capable of safeguarding the personal information of veterans. These organizations also expressed doubt over whether the department’s attempts to correct the weaknesses would be effective. 
The stolen computer equipment was recovered on June 28, 2006, and forensic testing by the Federal Bureau of Investigation determined that the sensitive data files had not been accessed or compromised. After the equipment was recovered, the Office of Management and Budget (OMB) withdrew its request to Congress for funding for the free credit monitoring services because it had concluded that credit monitoring services were no longer necessary due to the results of the FBI’s analysis. Veterans’ organizations indicated that the department should continue to offer credit monitoring services in order to allay veterans’ worries regarding the potential of identity theft. As a result of the theft, the VA IG issued a report in July 2006 on the investigation of the incident and made five recommendations to improve VA’s policies and procedures for securing sensitive information and conducting security awareness training. Recognizing the concerns of veterans, in December 2006, Congress passed the Veterans Benefits, Health Care, and Information Technology Act of 2006. Under the act, the VA’s CIO is responsible for establishing, maintaining, and monitoring departmentwide information security policies, procedures, control techniques, training, and inspection requirements as elements of the departmental information security program. The act also includes provisions to further protect veterans and service members from the misuse of their sensitive personal information. In the event of a security incident involving personal information, VA is required to conduct a risk analysis, and on the basis of the potential for compromise of personal information, the department may provide security incident notifications, fraud alerts, credit monitoring services, and identity theft insurance. Congress is to be informed regarding security incidents involving the loss of personal information. 
On January 22, 2007, a security incident at a research facility in Birmingham, Alabama, highlighted other potential risks associated with the loss of information. The incident involved the loss of information on 1.3 million medical providers from the Centers for Medicare & Medicaid Services of the Department of Health and Human Services, as well as information on 535,000 individuals. In its report on the Birmingham incident, the VA IG noted that the information compromised in the incident could potentially be used to compromise the identity of physicians and other health care providers and commit Medicare billing fraud. VA took action to respond to the loss of provider information by requesting the Department of Health and Human Services to conduct an independent risk analysis on the provider data loss. The risk analysis concluded that there was a high risk that the loss of personal information could result in harm to the individuals concerned, and the Centers for Medicare & Medicaid Services sent a letter to VA on March 28, 2007, requesting that credit monitoring services be offered to providers. The department mailed notification letters to providers starting on April 17, 2007, and offered credit monitoring services. In addition, the Centers for Medicare & Medicaid Services indicated that VA might need to take additional measures to mitigate any risk of further harm, but it did not specify what such action might be or specifically mention Medicare fraud. Although VA has made progress, it has not yet fully or effectively implemented two of four GAO recommendations and has not fully implemented 20 of 22 IG recommendations to strengthen its information security practices. Because these recommendations have not yet been implemented, unnecessary risk exists that personal information of veterans and others would be exposed to data tampering, fraud, and inappropriate disclosure. VA has implemented two of our recommendations. 
However, it has not fully implemented two other GAO recommendations. In response to our recommendation that it regularly report on progress in updating its security plan to the Secretary, the department CIO took immediate steps in 2002 to begin briefing the Secretary and Deputy Secretary on a regular basis. Regarding our recommendation that it develop a process for managing its remedial action plan, VA issued, in May 2006, its IT Directive 06-1, which established the Data Security-Assessment and Strengthening of Controls Program to remedy weaknesses in managing its action plan. It also hired a contractor to develop Web-based tools to assist department officials in managing and updating the plan on a biweekly basis. However, it has not fully implemented our remaining two recommendations. First, although it has taken action, VA has not yet fully implemented our recommendation to complete a comprehensive security management program, including actions related to central management functions, security policies and procedures, risk assessments, security awareness, and monitoring and evaluating computer controls. In August 2006, VA issued Directive 6500, which documented a framework for the department’s security management program and set forth roles and responsibilities for the Secretary, CIO, and CISO to ensure compliance with FISMA requirements. VA also developed, documented, and implemented security policies and procedures for certain central management functions and security awareness training. In addition, it implemented a process for tracking the status of security weaknesses and analyzing the results of computer security reviews using software tools the department had developed. As part of implementing the department’s security directive (Directive 6500), VA planned to issue Handbook 6500 to provide guidance for developing, documenting, and implementing the elements of the information security program. 
However, it has not finalized and approved this handbook, which has been in draft form since March 2005. The handbook contains the VA National Rules of Behavior, as well as key guidance for minimum mandatory security controls, performing risk assessments, updating security plans, and planning for continuity of operations. This guidance is to be used as VA undertakes these activities as part of its preparation for completing the recertification and reaccreditation of its systems by August 2008 and to comply with provisions of the Veterans Benefits, Health Care, and Information Technology Act of 2006. VA officials indicated the handbook was close to completion, but they did not provide an estimated time frame for completion. Until the handbook is finalized and approved, VA cannot be assured that department staff are consistently coordinating security functions that are critical to safeguarding its assets and sensitive information against potential data tampering, disruptions in critical operations, fraud, and the inappropriate disclosure of sensitive information. Second, VA has not fully implemented our recommendation to ensure consistent use of information security performance standards in appraising the department’s senior executives. In September 2006, VA issued a memorandum that required all senior executive performance plans, which include performance elements and expectations, to include information security as an evaluation element by November 30, 2006. According to VA, senior executive performance plans were reviewed by human resource officials, and the plans complied with the memorandum. However, VA was unable to provide documentation on the performance plan reviews or a documented process for regular review of the plans. As a result, it is unknown whether the department can appropriately hold management accountable for information security. 
Until VA develops, documents, and implements a process for reviewing the senior executive performance plans on a regular basis to ensure that information security is included as an evaluation element, it may not have the appropriate management accountability for information security. Although VA has implemented 2 recommendations made by the IG, it has not yet fully implemented 20 other IG recommendations. For example, in response to the IG’s recommendation that the department complete actions to relocate and consolidate the Central Office’s data center, it moved servers and network hardware to other VA locations. Regarding the recommendation to research the benefits and costs of deploying intrusion prevention systems at all sites, the department began installing intrusion prevention systems at all sites. However, the department has not completed critical management activities to implement 15 of the 17 recommendations made by the IG in September 2006, which were carried forward from its March 2005 report, to appropriately restrict access to data, networks, and VA facilities; ensure that only authorized changes and updates to computer programs are made; strengthen critical infrastructure planning to ensure information security requirements are addressed; and ensure that background investigations are conducted on all applicable employees and contractors. To begin addressing these recommendations, VA has drafted policies and procedures, implemented certain technical solutions, and relocated data center servers to new locations at VA facilities. However, according to the department’s action plan to remediate weaknesses, all actions to resolve IG recommendations will not be completed until 2009. A detailed description of the actions VA has taken or plans to take to address the IG’s 17 recommendations can be found in appendix II. 
VA has also made some progress in addressing the five recommendations from the IG’s July 2006 report on the investigation of the May laptop theft incident. However, it has not fully implemented corrective actions. To begin addressing these recommendations, VA has drafted policies and procedures and updated its Cyber Security Awareness training course. However, VA is still in the process of finalizing standard contracting language to ensure that contractor personnel are held to the same standards as department personnel; it is also still standardizing all IT position descriptions and ensuring that they are evaluated, have proper sensitivity level descriptions, and are consistent throughout the department. Until these actions are complete, VA has limited assurance that it has the proper safeguards in place to adequately protect its sensitive information from inadvertent or deliberate misuse, loss, or improper disclosure. The need to fully implement GAO and IG recommendations to strengthen information security practices is underscored by the prevalence of security incidents involving the unauthorized disclosure, misuse, or loss of personal information of veterans and other individuals, such as medical providers. Between December 2003 and April 2006, VA had at least 700 reported security incidents involving the loss of personal information. For example, one incident in 2003 involved the theft of a laptop containing personal information on 100 veterans from the home of a VA employee. In 2004, personal computers that contained data on 2,000 patients were stolen from a locked office in a research facility. In 2005, information on 897 providers was inappropriately disclosed over VA’s e-mail system. In addition, in 2006, employee medical records were inappropriately accessed by a VA staff member, and a hacker compromised a computer system at a medical center supporting 79,000 veterans. All these incidents were partially attributable to weaknesses in internal controls. 
More recently, additional incidents have occurred that, like the earlier incidents, were partially due to weaknesses in the department's security controls. In these incidents, which include the May 2006 theft of computer equipment from an employee's home (discussed earlier) and the theft of equipment from department facilities, millions of people had their personal information compromised. Appendix III provides details on a selection of incidents that occurred between December 2003 and January 2007. Although VA has made some progress in implementing GAO and IG recommendations to resolve these weaknesses in security controls, all actions to resolve these recommendations are not planned to be implemented until 2009. As a result, VA will be at increased risk that systems, mobile devices, and information may be exposed to potential data tampering, disruptions in critical operations, fraud, and the inappropriate disclosure of sensitive information. VA has begun or continued several major initiatives since the May 2006 security incident to strengthen information security practices and secure personal information within the department, but more remains to be done. Since October 2005, VA has been reorganizing its management structure to provide better oversight and fiscal discipline over its IT systems, and it has undertaken a series of new initiatives. However, shortcomings with the implementation of these initiatives limit their effectiveness. For example, although VA has developed a remedial action plan that includes tasks to develop, document, revise, or update a policy or program, 87 percent of these tasks do not have an established time frame for implementation across the department. Unless such shortcomings are addressed, these initiatives may not effectively strengthen information security practices at the department.
An effective IT management structure is the starting point for coordinating and communicating the continuous cycle of information security activities necessary to address current risks on an ongoing basis while providing guidance and oversight for the security of the entity as a whole. Under FISMA and the Veterans Benefits, Health Care, and Information Technology Act of 2006, the CIO ensures compliance with requirements of these laws and designates a senior agency information security officer or CISO to assist in carrying out his responsibilities. One mechanism organizations can adopt to achieve effective coordination and communication is to establish a central security management office or group to coordinate departmentwide security-related activities. To ensure that information security activities are effective across an organization, an IT management structure should also include clearly defined roles and responsibilities for all security staff and coordination of responsibilities among individual staff. The department officially began its effort to provide the CIO with greater authority over IT in October 2005 by realigning its management organization to a centralized management structure. By July 2006, a department contractor began work to assist with the realignment effort. According to VA, its goals in moving to a centralized management structure were to provide the department better oversight over the standardization, compatibility, and interoperability of IT systems, as well as better overall fiscal discipline. The Secretary approved the department’s new IT organization structure in February 2007. The new structure includes an Assistant Secretary for Information and Technology (who serves as VA’s CIO), the CIO’s Principal Deputy Assistant Secretary, and five Deputy Assistant Secretaries. 
Five new senior leadership positions within the Office of Information and Technology were created to assist the CIO in overseeing five core IT process areas: cyber security, portfolio management, resource management, systems development, and operations. Completion of the realignment is scheduled for July 2008. Under the new IT management structure, responsibility for information security functions within the department is divided between two core process areas: First, the Director of the Cyber Security Office (part of the Information Protection and Risk Management process area) has responsibility for developing and maintaining a departmentwide security program; overseeing and coordinating security efforts across the organization; and managing the development and implementation of department security policy, standards, guidelines, and procedures to ensure ongoing maintenance of security. The Director of Cyber Security is also the designated CISO for the department. Second, the Director of the Field Operations and Security Office (part of the Enterprise Operations and Infrastructure process area) is responsible for implementing security and privacy policies, validating compliance with certification and accreditation requirements, and managing facility information security officers. In brief, the CISO/Director of Cyber Security is responsible for managing the departmentwide security program, while the Director of Field Operations and Security is responsible for implementing it. Figure 1 shows these two offices within the new management structure. Although VA has made significant progress in the realignment of its IT management structure, no documented process yet exists for the two responsible offices to coordinate with each other in managing and implementing a departmentwide security program.
VA officials indicated that the Director of Cyber Security and the Director of Field Operations and Security are communicating about the implementation of security policies and procedures within the department. However, this communication is not defined as a role or responsibility for either position in the new management organization book, nor is there a documented process in place to coordinate the management and implementation of the security program, both of which are key security management practices. As a result, policies or procedures could be inconsistently implemented throughout the department. Without a consistently implemented departmentwide security program, the CISO cannot effectively ensure departmentwide compliance with FISMA. Until the process and responsibilities for coordinating the management and implementation of IT security policies and procedures throughout the department are clearly documented, VA will have limited assurance that the management and implementation of security policies and procedures are effectively coordinated and communicated. In addition, the CISO position is currently unfilled, hindering VA's ability to strengthen information security practices and coordinate security-related activities within the department. The CISO position has been vacant since June 2006, and currently, the CIO is the acting CISO of the department. The department has been attempting to fill the position of the CISO since October 2006. In addition, the department began trying to hire staff for other senior positions in March 2007. VA officials have indicated that the process and procedures they are required to undertake to hire staff for these positions are quite extensive and take time to complete. Nevertheless, until the position of the CISO is filled, the department's ability to strengthen information security will continue to be hindered.
Furthermore, the department’s directive on its information security program has not been updated to reflect the new IT realignment structure for the position of the CISO. Under Directive 6500, the Associate Deputy Assistant Secretary for Cyber and Information Security is the senior information security officer or CISO. However, under the new realignment structure, there is no Associate Deputy Assistant Secretary for Cyber and Information Security, and instead the Director of Cyber Security is the CISO. VA officials have said that they intend to revise the directive to reflect the new management structure, but they did not provide an estimated time frame for completion. If roles and responsibilities are not updated or consistent in VA’s policies and directives, then communication and coordination of responsibilities among the department’s security staff may not be sufficient. Action plans to remediate identified weaknesses help departments to identify, assess, prioritize, and monitor progress in correcting security weaknesses that are found in information systems. According to OMB’s revised Circular A-123, Management’s Responsibility for Internal Control, departments should take timely and effective action to correct deficiencies that they have identified through a variety of information sources. To accomplish this, remedial action plans should be developed for each deficiency, and progress should be tracked for each. Following the May 2006 security incident, VA officials began working on an action plan to strengthen information security controls at the department. Referred to as the Data Security-Assessment and Strengthening of Controls Program, the plan was developed over a period of several months, and work has been completed on some tasks. By the end of January 2007, 20 percent of the items in the action plan had been completed, and task owners had been assigned for all items in the plan. 
As of June 1, 2007, the plan had at least 400 items to improve security and address weaknesses that the IG has identified at the department. On a biweekly basis, the action plan is updated with status information provided by the task owners (including the percentage of work completed to resolve the item), and a new version of the plan is created. The CIO receives a briefing on each new version of the action plan. Once the new version is approved by the CIO, the plan is made available to task owners and other officials at the department. The CIO has also briefed other senior department officials on the plan and action items. Although VA's action plan has task owners assigned and is updated biweekly, department officials have not ensured that adequate progress has been made to resolve items in the plan. First, in more than a third of cases, VA has not completed action items by their expected completion date. Specifically, VA has extended the completion date at least once for 38 percent of the plan items, and it has extended the completion date multiple times for 6 percent of the items in the plan. The average extension was about 5 months. In addition, 28 percent of action items that remained open as of June 1, 2007, had already exceeded the scheduled completion date, and over half of the work remained to be completed for a majority of those items. These extensions and missed deadlines can be attributed in part to VA's failure to develop, document, and implement procedures to ensure that action items were addressed in an effective and timely manner. If weaknesses are not successfully corrected in a timely manner, VA will continue to lack effective security controls to safeguard its assets and sensitive information.
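The deadline monitoring described above (flagging open items past their scheduled completion date and tracking how many items have had extensions) is straightforward to automate. The sketch below is illustrative only; the item names, field names, and dates are hypothetical, not VA's actual plan schema.

```python
from datetime import date

# Hypothetical action-plan records; field names and items are illustrative,
# not drawn from VA's actual Data Security-Assessment and Strengthening
# of Controls Program.
plan = [
    {"item": "Draft media-sanitization policy", "due": date(2007, 3, 1),
     "extensions": 2, "percent_complete": 40},
    {"item": "Deploy thumb-drive encryption", "due": date(2007, 9, 1),
     "extensions": 0, "percent_complete": 75},
    {"item": "Update background-check procedures", "due": date(2007, 5, 15),
     "extensions": 1, "percent_complete": 20},
]

def plan_status(plan, today):
    """Flag open items past their scheduled date and summarize extensions."""
    overdue = [p for p in plan
               if p["percent_complete"] < 100 and p["due"] < today]
    extended = [p for p in plan if p["extensions"] > 0]
    return {
        "overdue": [p["item"] for p in overdue],
        "pct_extended": round(100 * len(extended) / len(plan)),
    }

status = plan_status(plan, today=date(2007, 6, 1))
```

Run against a real plan at each biweekly update, a report like this would give the CIO the same overdue and extension percentages the audit had to compute by hand.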
Second, a large portion of VA’s approach to correcting identified weaknesses has been focused on establishing policies and procedures: 39 percent of the items in the action plan are to develop and document or revise and update a policy, a program, or criteria. However, VA has not established action items for implementing these new or changed policies and procedures across the department. For 87 percent of action items related to policies and procedures, the action plan included no corresponding task with an established time frame for departmentwide implementation. Developing and documenting policies and procedures are just the first two steps in remediating identified weaknesses. If there are no implementation tasks with time frames, VA cannot monitor and ensure successful implementation. Until VA establishes tasks with time frames to implement policies and procedures in the plan, it will not be able to successfully manage its planned actions to correct identified weaknesses. Third, VA does not have a process in place to validate the closure of action plan items, that is, to ensure both that task owners have completed the activities required to sufficiently address action items and also that there is adequate documentation of these activities. During our review, we noted the closure of approximately 80 action items that included activities such as developing a policy or procedure, creating a schedule, deploying security tools, or updating software. However, according to the department official responsible for managing the plan, upon review of these completed items, VA found a number of them lacked support for closing the item (such as documentation). This official indicated that VA was developing a process to provide validation of closed action plan items, but no supporting documentation on the development of this validation process had been provided. 
Until VA develops, documents, and implements a process to validate the closure of action plan items, it will not be assured that closed action items have been sufficiently addressed. Fourth, VA’s action plan does not identify the activities it is taking to address our recommendations. In November 2006, the VA official in charge of managing the plan indicated that although the department had not previously identified activities being taken to address our recommendations, it would begin to do so. However, as of June 2007, these activities had not been identified and tracked in the action plan. As a result, VA may not be able to adequately monitor its progress in implementing our recommendations to resolve identified weaknesses. Until VA identifies the activities it is taking in its action plan to address our recommendations, it will have limited assurance that progress in implementing those activities is being adequately monitored. VA has developed its Information Protection Program, which is a phased approach to ensuring that the department has the appropriate software tools to assist in ensuring the confidentiality, availability, and integrity of information. During the first phase, VA installed encryption software on laptops across the department, a task completed in September 2006. In the second phase, the department is undertaking several other information protection initiatives, including improving the security of network transmissions and the protection of removable storage devices, such as the encryption of thumb drives. These initiatives are all currently being developed and documented. One mechanism to enforce the confidentiality and integrity of critical and sensitive information is the use of encryption. Encryption transforms plain text into cipher text using a special value known as a key and a mathematical process known as an algorithm. 
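The key-and-algorithm relationship described above can be illustrated with a toy symmetric cipher. The repeating-key XOR below is for illustration only; it is not secure and is in no way comparable to the FIPS-validated products VA deployed.

```python
# Toy symmetric cipher for illustration only -- NOT secure, and not
# comparable to FIPS 140-validated encryption software.
def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Apply a repeating-key XOR; the same call both encrypts and decrypts."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

plaintext = b"sensitive record"
key = b"secret-key"                       # the special value (key)
ciphertext = xor_cipher(plaintext, key)   # plain text -> cipher text
recovered = xor_cipher(ciphertext, key)   # the same key reverses the process
```

The point of the sketch is the dependency structure: without the key, the ciphertext is not directly readable, which is why encrypting stored data on laptops limits the damage when the hardware itself is stolen.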
According to VA Directive 6504, issued in June 2006, approved encryption software must be installed if an employee uses VA government-furnished equipment or other non-VA equipment in a mobile environment, such as a laptop or PDA carried out of a department office or a personal computer in an alternative worksite, and the equipment stores personal information. The encryption software used must meet Federal Information Processing Standard 140. According to department officials, by September 2006, the department had successfully encrypted over 18,000 laptops. The laptops were encrypted through a combination of two software encryption products, both of which have been certified as complying with the provisions of Federal Information Processing Standard 140. Simultaneously, VA developed and implemented routine laptop "health checks." These checks ensure that all laptops have applied updated security policies, such as antivirus software, and will also remove any sensitive information that is not authorized to be stored on the laptop. Based on the results of our testing, VA consistently implemented encryption software at eight VA facilities, with minor exceptions. At six of the eight facilities, all laptops were encrypted in accordance with the directive. At the other two facilities, both medical centers, the directive was not implemented in a small number of cases. At one medical center, 3 of the 58 laptops tested should have been encrypted according to VA's policy but were not. At another medical center, of the 41 laptops tested, 1 that should have been encrypted was not. In some of these cases, VHA medical center officials noted that the reference in the directive to operation in a mobile environment led to ambiguity about which laptops were required to be encrypted.
Although our testing showed general consistency in this encryption effort, this and another source of ambiguity in the directive could affect the department's success in implementing other planned encryption initiatives. Specifically, Directive 6504 did not provide explicit guidance on whether to encrypt laptops that were categorized as medical devices, which make up a significant portion of the population of laptops at VHA facilities. At facilities for patient care, laptops could be categorized both as equipment that operated in a mobile environment (and thus subject to VA's encryption directive) and as medical devices (and thus subject to compliance with other federal guidance that may interfere with following the encryption directive). At the two medical centers we visited, which each have over 300 laptops, most laptops were considered medical devices. When VHA officials contacted the help desk for the encryption initiative, they were told that these laptops did not need encryption software installed. However, Directive 6504 had not made this clear, increasing the challenge to VHA facilities in implementing the encryption initiative. Without guidance that takes into consideration the environment in which laptops are used in different VA facilities and that clearly identifies devices that require encryption functionality, VA may not have assurance that all facilities in the department will be able to consistently implement encryption initiatives for all appropriate devices. Finally, the department did not maintain an accurate inventory of all laptops that had been encrypted, nor did it have an inventory of all laptops within the department. Each VA facility was responsible for maintaining an inventory of laptops, including which laptops had been encrypted, but the laptop inventories at four of the eight facilities we visited were inaccurate. For example, eight devices listed in the inventories as laptops were actually scanners, personal computers, or other devices.
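A reconciliation of the kind our testing performed, comparing each facility's inventory entries against the encryption status actually observed on the machines, can be sketched as follows. The asset identifiers, record fields, and statuses are hypothetical.

```python
# Hypothetical inventory records vs. on-site test results.
inventory = {
    "LT-001": {"type": "laptop", "listed_encrypted": True},
    "LT-002": {"type": "laptop", "listed_encrypted": True},
    "SC-101": {"type": "scanner", "listed_encrypted": False},  # not a laptop
}
# Encryption status observed during testing, keyed by asset tag.
test_results = {"LT-001": True, "LT-002": False}

def reconcile(inventory, test_results):
    """Return laptop entries whose listed status disagrees with testing."""
    mismatches = []
    for asset, record in inventory.items():
        if record["type"] != "laptop":
            continue  # scanners, PCs, and other devices are out of scope
        observed = test_results.get(asset)
        if observed is not None and observed != record["listed_encrypted"]:
            mismatches.append(asset)
    return mismatches
```

A check like this only works when the inventory itself is accurate and complete, which is the underlying weakness the report identifies: a device misrecorded as a laptop, or a laptop missing from the inventory entirely, never reaches the comparison.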
In some cases, the inventory listed a laptop as encrypted, but testing revealed that the machine was not encrypted. (The weaknesses identified with the inventories of laptops are similar to weaknesses identified in a report we recently issued, which noted significant IT inventory control weaknesses at VA.) Because it did not maintain an accurate inventory of all equipment that has encryption installed, VA may not have adequate assurance that all equipment required to be encrypted has been. As part of its phased approach to acquiring appropriate software tools, the department is undertaking several information protection initiatives. For instance, the department is working to secure network transmissions to prevent user identification, passwords, and data from being transmitted in clear text. To provide port security and device control, VA is establishing access permission lists, audit and reporting capabilities, and lists of approved devices. For the protection of removable storage media, VA developed and documented Directive 6601, which provides guidance for use of removable devices, and it is in the process of acquiring encryption software for thumb drives, external hard drives, and CD-ROM and DVD drives. VA is also acquiring encryption for mobile devices such as Blackberries. In addition, the department is establishing a public key infrastructure and Internet gateway for secure e-mail transmission and document exchange. These initiatives are in varying stages of development and have not yet been implemented. Even strong controls may not block all intrusions and misuse, but organizations can reduce the risks associated with such events if they take prompt steps to detect and respond to them before significant damage can be done.
In addition, analyses of security incidents can pinpoint vulnerabilities that need to be eliminated, provide valuable input for risk assessments, help in prioritizing security improvement efforts, and be used to illustrate risks and related trends for senior management. FISMA requires that agencies develop procedures for detecting, reporting, and responding to security incidents. In addition, OMB Memo M-06-19 requires agencies to report all incidents involving personal identifiable information to the U.S. Computer Emergency Readiness Team (US-CERT) within 1 hour of discovering the incident. VA has improved its incident management capability since May 2006 by realigning and consolidating two centers with responsibilities for incident management, as well as developing and documenting key policies and procedures. Following the May 2006 security incident, VA hired a contractor to assist its Network Operations Center and Security Operations Center in developing plans for improved coordination between the two centers and for using a risk management approach to managing incidents. As part of its findings, the contractor recommended that the two centers be integrated at the regional and enterprise level. In February 2007, VA realigned and consolidated the two centers into the Network and Security Operations Center (NSOC), which is responsible for incident detection or identification, response, and reporting within the department. NSOC has also developed and documented a concept of operations for incident management and call center procedures, and it has developed a new incident report template to assist VA personnel in reporting incidents to the center within 1 hour of discovering the incident. Senior management officials also receive regular reports on security incidents within the department. In addition, VA has improved the reporting of incidents involving the loss of personal information within the department since the May 2006 incident. 
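The 1-hour US-CERT reporting window in OMB M-06-19 amounts to a simple elapsed-time check, which an incident-tracking tool such as the one NSOC uses might implement along these lines. The timestamps below are hypothetical.

```python
from datetime import datetime, timedelta

# OMB M-06-19: incidents involving personal information must be reported
# to US-CERT within 1 hour of discovery.
REPORTING_DEADLINE = timedelta(hours=1)

def met_deadline(discovered: datetime, reported: datetime) -> bool:
    """True if the incident was reported within 1 hour of discovery."""
    return timedelta(0) <= (reported - discovered) <= REPORTING_DEADLINE

# Hypothetical timestamps for illustration.
discovered = datetime(2007, 1, 22, 9, 15)
on_time = met_deadline(discovered, datetime(2007, 1, 22, 9, 50))  # 35 min
late = met_deadline(discovered, datetime(2007, 1, 22, 11, 0))     # 1 h 45 min
```

Recording both timestamps for every incident also yields the kind of notification-delay data presented in table 2, making trends in reporting timeliness easy to track.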
Following the incident, the Secretary issued a memorandum requiring all employees to take security and privacy training by June 30, 2006, as well as sign a statement of commitment and understanding regarding the handling of personal information of veterans. An analysis of reported incidents from 2003 to 2006 showed a significant increase in the reporting of incidents involving the loss of personal information to NSOC in 2006, as detailed in table 1. Of the incidents reported in 2006, 77 percent were reported after May. While the increase in reported incidents shows that the memorandum and updated security and privacy training are heightening VA employees' awareness of their responsibility to report incidents involving loss of personal information, it also indicates that vulnerabilities remain in security controls designed to adequately safeguard information. To assist the department in improving its analysis of security incident data, NSOC merged three incident databases into one to streamline the collection of incident data gathered within the department. VA also developed a software tool with a Web-based interface (the Formal Event Review and Evaluation Tool) to analyze reported incidents and observe trends, and began using the tool in April 2007. The department has made a notable improvement in its notification of major security incidents to US-CERT, the Secretary, and Congress since the incidents in May 2006. However, for some incidents, notification letters to affected individuals were delayed because VA did not have adequate procedures for incident response and notification. Table 2 presents major security incidents occurring since May 2006, along with the times taken to make various notifications. As the table shows, delays in reporting incidents have generally decreased since May 2006. Coordination with other agencies.
In the incident in Birmingham in January 2007, medical provider and physician information from the Centers for Medicare & Medicaid Services of the Department of Health and Human Services was lost, requiring VA to coordinate with that department to respond to the incident. At the time of the incident, VA had drafted interim procedures for incident response, including notifying individuals affected by security incidents. These draft procedures described steps to be taken to respond to incidents involving the loss of information on veterans. However, they did not include processes for coordinating incident response and mitigation activities with other agencies. This omission lengthened the time needed to determine the risks to medical providers, who were not notified until 85 days after the incident. To address the coordination issue, VA revised its interim procedures to indicate that incident response teams will work with other federal agencies and teams as needed to contract for independent analyses of the risk associated with compromise of the particular data involved. In March 2007, VA approved these revised interim procedures. However, the approved procedures are limited to contracting for risk analyses and do not incorporate processes for coordinating with other federal agencies on other appropriate mitigation activities. For example, although the procedures allow for the offer of credit monitoring to affected individuals, they do not address mitigating other types of risks, such as potential fraudulent claims for payment under Medicare, which were a potential risk for the Birmingham incident. Credit monitoring would not address this risk. Other coordination and mitigation activities, such as alerting the Centers for Medicare & Medicaid Services to the possibility of fraudulent claims involving specific providers, may be needed to adequately address this and other risks different from those experienced to date.
Obtaining up-to-date contact information. VA’s procedures for incident response and notification do not include mechanisms for obtaining contact information on individuals (when necessary), which can also cause delays in sending out notification letters to individuals. A VA official noted that notification letters to individuals could be delayed, depending on whether the department could locate complete address information for the affected individuals and on the number of letters that must be sent. Such delays occurred in the case of the missing backup tape in May 2006 (when 159 days passed before notification letters were sent). The data and number of records that were on the backup tape were not immediately known, and the address information of veterans whose data were compromised in the incident had to be researched. Our recent report noted that agencies faced challenges in identifying address information for individuals affected by security incidents and that mechanisms should be in place to obtain contact information on individuals. However, VA’s draft and approved interim procedures do not include a mechanism for obtaining such contact information. As a result, the department’s response to incidents could be delayed when the compromised data do not include complete and accurate contact information (or there is uncertainty about the data). Risk analysis. As mentioned earlier, VA asked the Department of Health and Human Services to conduct an independent risk analysis on the provider data loss in the January 2007 incident in Birmingham; this analysis showed that there was a high risk that the loss of personal information could result in harm to the individuals concerned. Conducting such risk analyses after incidents is a recommended procedure, since appropriate incident response and notification depend on determining the level of risk associated with the particular information that is compromised. 
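Determining the level of risk associated with compromised information, as described above, is at bottom a categorization decision driven by the sensitivity of the data and the scale of the exposure. A tool of the kind VA uses might implement it along these lines; the thresholds and labels below are assumptions for illustration, not the actual logic of any VA tool.

```python
# Illustrative severity-to-risk-category mapping. The thresholds and labels
# are assumptions, not VA's actual incident-categorization logic.
def risk_category(records_exposed: int, data_sensitivity: str) -> str:
    """Assign an incident a risk category from exposure size and sensitivity."""
    if data_sensitivity == "high" or records_exposed >= 10_000:
        return "high"
    if data_sensitivity == "moderate" or records_exposed >= 100:
        return "moderate"
    return "low"
```

Codifying the categorization ahead of time is what makes a rapid response possible: once an incident's category is known, the corresponding notification and mitigation steps can be triggered without a fresh analysis from scratch.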
In addition, conducting periodic risk assessments before an incident occurs facilitates a rapid response, by enabling the development of mitigation activities and appropriate coordination for potential data losses. Assessments of both systems and the information they contain are important, particularly information with a high potential risk for inappropriate use or fraud. However, VA is still in the process of finalizing and approving its guidance for completing risk assessments on VA’s systems. As a result, the department does not have a current assessment of risk for the information located at its facilities and in its information systems, which could affect the coordination and mitigation activities that are developed by the department to respond to potential data losses. Until VA assesses the risk for information located at its facilities and in its information systems and uses this assessment to develop and document mitigation activities and appropriate coordination for potential data losses (particularly high-risk losses), it may not be able to adequately address potential risks associated with loss of sensitive information at its facilities and on its systems. Additional VA actions. VA has taken additional actions to improve incident response and notification. In February 2007, VA chartered the Incident Resolution Team Structure, a group of officials from organizations within the department who are responsible for responding to incidents and handling notification requirements at the national, regional, and local levels. This action was in response to an OMB memorandum issued in September 2006, which recommended that all departments and agencies develop a core management group responsible for incident response to losses of personal information, as well as a response plan for notifying individuals affected by security incidents. 
Roles and responsibilities within the Incident Resolution Team Structure are organized according to the level of activity, the nature of the incident, and how the incident is categorized based on risk levels. VA also uses the Formal Event Review and Evaluation Tool to determine the risk category of a security incident, based on the severity of the incident. VA has also recently developed, with contractor assistance, interim regulations for security incident notification, data mining, fraud alerts, data breach analysis (that is, risk analysis of security incidents), credit monitoring, identity theft insurance, and credit protection services, as required under the Veterans Benefits, Health Care, and Information Technology Act of 2006. These interim regulations were approved by OMB and became effective on June 22, 2007.

According to Standards for Internal Control in the Federal Government, internal controls at agencies should generally be designed to ensure that ongoing monitoring occurs in the course of normal operations. The methodology for evaluating an agency’s internal controls should be logical and appropriate and may include assessments using checklists or other tools, as well as a review of the control design and direct testing of the internal control. The evaluation team should develop a plan for the evaluation process to ensure a coordinated effort, analyze the results of the evaluation against established criteria, and ensure that the process is properly documented. The agency should also ensure that corrective action is taken within established time frames and is followed up on to verify implementation.

In an effort to promote internal controls within VA’s computer environment, VA has consolidated a number of IT compliance programs under one organization, the Office of IT Oversight and Compliance (ITOC). This office was established in January 2007.
Previously, the Review and Inspection Division was responsible for conducting facility assessments and validating information entered into a database in response to VA’s annual FISMA self-assessment survey. The division was incorporated into the ITOC, which is now responsible for providing independent, objective, and quality oversight and compliance services in the areas of cyber security, records management, and privacy. It is also responsible for conducting assessments of VA’s facilities that (1) determine the adequacy of internal controls; (2) investigate compliance with laws, policies, and directives from VA and external organizations; and (3) ensure that proper safeguards are maintained. The results of these assessments are reported directly to the CIO and responsible supervisors at the facilities. The ITOC recommends corrective actions to remediate identified issues where necessary and also makes available a remediation team to assist the facility in addressing any recommendations. In January 2007, the ITOC began conducting assessments at facilities and by June 2007 had conducted 34 assessments. According to the Director of the ITOC, it recently became fully staffed with 127 personnel and will begin to conduct 12 to 18 assessments per month. VA facilities will be assessed every 3 years. Although the ITOC was formed to identify security weaknesses and ensure compliance with federal law and department policy, its approach to conducting assessments does not include basic elements necessary for evaluating and monitoring controls. For example, although the ITOC developed a checklist to conduct facility assessments, it did not develop a standard methodology for analysts to use when evaluating internal controls against the checklist, or specific criteria for each checklist item. As a result, the office lacks a process to ensure that its examination of internal controls is consistent across VA facilities. 
In addition, although the Director of the ITOC indicated that the assessment team recommendations to facilities are tracked in a database, no supporting documentation was provided. Further, according to the standards for internal control, organizations should follow up to ensure that corrective action is taken. However, the ITOC follows up to see if recommendations have been implemented only when a site is re-inspected. As a result, the office has no timely mechanism in place to ensure that its recommendations have been addressed. Until there are a standard methodology and established criteria for evaluating internal controls at facilities, as well as a mechanism in place to track recommendations and conduct regular follow-up on their status, VA will have limited assurance that its process for assessing its statutory and regulatory compliance and the effectiveness of its internal controls is adequate and consistent across its facilities.

Effective information security controls are critical to securing the information systems and information on which VA depends to carry out its mission. GAO and IG recommendations to address long-standing weaknesses within the department have not yet been fully implemented, nor is the implementation of the IG recommendations expected to be completed in the near future. Consequently, there is an increased risk that personal information of veterans and other individuals, such as medical providers, will be exposed to potential data tampering, disruptions in critical operations, fraud, and the inappropriate disclosure of sensitive information. Until VA addresses recommendations to resolve identified weaknesses, it will have limited assurance that it can adequately protect its systems and information. Although VA has begun or continued several initiatives to strengthen information security practices within the department, shortcomings in the implementation of these initiatives could limit their effectiveness.
If the department develops and documents processes, policies, and procedures; fills a key position; and completes the implementation of major initiatives, these actions will help ensure that the initiatives strengthen information security practices within the department. Sustained management commitment and oversight are vital to ensure the effective development, implementation, and monitoring of the initiatives that are being undertaken. Such involvement and oversight are critical to providing VA with a solid foundation for resolving long-standing information security weaknesses and continuously managing information security risks.

To assist the department in improving its ability to protect its information and systems, we are recommending that the Secretary of Veterans Affairs take the following 17 actions:

Finalize and approve Handbook 6500 to provide guidance for developing, documenting, and implementing the elements of the information security program.

Develop, document, and implement a process for reviewing on a regular basis the performance plans of senior executives to ensure that information security is included as an evaluation element.

Develop, document, and implement a process for the Director of Field Operations and Security and the Director of Cyber Security to coordinate with each other on the implementation of IT security policies and procedures throughout the department.

Document clearly defined responsibilities in the organization book for the Director of Field Operations and Security and the Director of Cyber Security for coordinating the implementation of IT security policies and procedures within the department.

Act expeditiously to fill the position of the Chief Information Security Officer.

Revise Directive 6500 to reflect the new IT management structure and to ensure that roles and responsibilities are consistent in all VA IT directives.

Develop, document, and implement procedures for the action plan to ensure that action items are addressed in an effective and timely manner.

Establish tasks with time frames for implementation of policies and procedures in the action plan.

Develop, document, and implement a process to validate the closure of action plan items.

Include in the action plan the activities taken to address GAO recommendations.

Develop, document, and implement clear guidance for identifying devices that require encryption functionality.

Maintain an accurate inventory of all IT equipment that has encryption installed.

Develop and document procedures that include a mechanism for obtaining contact information on individuals whose information is compromised in security incidents.

Conduct an assessment of what constitutes high-risk data for the information located at VA facilities and in information systems.

Develop and document a process for appropriate coordination and mitigation activities based on the assessment above.

Develop, document, and implement a standard methodology and established criteria for evaluating the internal controls at facilities.

Establish a mechanism to track ITOC recommendations made to facilities and conduct regular follow-up on the status of the recommendations.

We received written comments on a draft of this report from the Deputy Secretary of Veterans Affairs (these are reprinted in appendix IV). The Deputy Secretary generally agreed with our findings and recommendations and stated that VA has already implemented or is working to implement all 17 recommendations. Additionally, the Deputy Secretary stated that the consolidation of all IT operations and maintenance under VA’s Chief Information Officer will enhance the department’s information security program, as well as correct long-standing deficiencies.
In his comments, the Deputy Secretary also noted that the recommendation related to information security as an evaluation element in senior executive performance plans has already been implemented and that the recruitment announcement to fill the position of Chief Information Security Officer closed on July 27, 2007. He further stated that VA’s Directive 6500, issued in August 2006, remains valid. However, as mentioned in our report, Directive 6500 was not updated to reflect the new IT realignment structure that was approved by the Secretary in February 2007, and roles and responsibilities should be consistent in all department policies and directives. The Deputy Secretary also discussed some of the activities that were underway to implement our recommendations. In the draft report that was provided for comment, we indicated that VA had not implemented any of the IG’s 22 recommendations to improve information security. We have since received new information and have updated the report to reflect that VA has now implemented 2 of the 22 IG recommendations.

As agreed, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to interested congressional committees; the Secretary of Veterans Affairs; and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at www.gao.gov. If you have any questions regarding this report, please contact me at (202) 512-6244 or by e-mail at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix V.
Our objectives were to evaluate (1) whether the Department of Veterans Affairs (VA) has effectively addressed GAO and VA Office of Inspector General (IG) recommendations to strengthen its information security practices and (2) actions VA has taken since the May 2006 security incident to strengthen its information security practices and secure personal information. In doing this work, we analyzed relevant documentation, including policies, procedures, and plans, and interviewed key department officials in Washington, D.C., to identify and assess VA’s progress in implementing recommendations and federal legislation to strengthen its information security practices. We also drew on previous GAO reports and testimonies, as well as on expert opinion provided in congressional testimony and other sources. We used certain applicable federal laws, other requirements, and guidelines, including Office of Management and Budget (OMB) memorandums, in assessing whether the department’s actions and initiatives can help ensure departmental compliance.

For the first objective, we evaluated VA’s actions to address GAO and VA IG recommendations, made in our 2002 report and in the IG’s July 2006 and September 2006 reports, respectively. To review VA’s history of implementation efforts, we examined GAO reports, testimony given by GAO and IG staff at recent congressional hearings, and reports by the VA IG. To determine the implementation status of open GAO recommendations, we analyzed pertinent security policies, procedures, and plans and met with officials from VA to gather information on the department’s actions to address the recommendations. To determine the implementation status of open IG recommendations, we met with officials from the VA IG Office of Audit to discuss the status of these recommendations and met with VA officials to learn what actions had been taken or were planned to fully address the recommendations. The VA IG concurred with the status information provided.
For the second objective, we evaluated VA’s actions to strengthen its information security practices to comply with federal guidance, including recent OMB memorandums. We met with department officials to gather information on what initiatives VA had undertaken or planned to undertake to improve its information security practices. For each initiative, we obtained and analyzed supporting documentation and met with department officials responsible for the implementation of the initiatives to assess the extent to which the department had complied with federal requirements and other guidelines. In addition, we performed audit procedures to determine the extent to which VA has installed encryption functionality on its laptop computers. Our detailed scope and methodology for the laptop encryption testing are below.

Laptop Encryption Testing

We examined 248 laptops at eight locations to determine whether encryption software had been installed on a selection of laptops as indicated by VA. We selected the locations to be visited based on (1) the type of facility and (2) the number of facilities available to be tested in a geographic area. We identified different facility types in proximity to each other and to GAO offices. Clinics and cemeteries were excluded from the selection because the number of laptops at these locations would be quite small. We also selected a Research Enhancement Award Program location based on an incident in January 2007 involving this type of location. On the basis of the criteria listed above, we selected the following eight facilities: Baltimore Regional Office, Chicago Regional Office, Denver Health Administration Center, Denver Regional Office, Denver Research Enhancement Award Program, Hines Data Center, Hines Medical Center, and the Washington, D.C., Medical Center. At each location, we obtained an inventory or population of “in use” laptops.
We examined every laptop in the population that was available for review at the Baltimore Regional Office, Chicago Regional Office, Denver Research Enhancement Award Program, and the Hines Data Center because of the relatively small number of laptops in the population. We selected random samples of laptops, with the intent of projecting the results to each population, at the Denver Health Administration Center, Denver Regional Office, Hines Medical Center, and Washington, D.C., Medical Center. We conducted testing of encryption implementation on laptops at select VA facilities to determine whether the department’s laptops were in compliance with VA Directive 6504, which stated that if a laptop was used in a mobile environment and contained sensitive information, it must be encrypted using approved software that is validated against National Institute of Standards and Technology standards. We also tested laptops at the two medical facilities to see whether the laptops should be encrypted according to the facility inventory, because multiple inventories were received from these locations. In addition, we tested the laptops at the two medical facilities to see whether each laptop was considered a medical device, based on the definition of medical devices provided to us by VA. At each location there were a small number of laptops that were unavailable to us to be tested. Department officials cited several reasons for this, including that the laptop had been turned in to be disposed of or discarded according to VA policy, had a hard drive failure, or could not be brought in to the site for testing. In table 3, the “laptops tested” column represents the number of laptops the team was able to test. For all four locations where every laptop in the population was tested, we used the results of our test to determine whether the directive had been consistently implemented.
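Where a random sample is drawn with the intent of projecting results to the population, a one-sided lower confidence bound on the compliance rate can be computed from the sample outcome. The sketch below illustrates one standard way of doing this, the exact (Clopper-Pearson) binomial bound; the report does not state VA's sample sizes or GAO's exact method, so the sample counts here are illustrative assumptions only.

```python
from math import comb

def binom_tail(n, x, p):
    """P(X >= x) for X ~ Binomial(n, p)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(x, n + 1))

def lower_conf_bound(n, compliant, alpha=0.05):
    """One-sided Clopper-Pearson lower bound on the compliance rate:
    the largest rate p at which observing `compliant` or more compliant
    laptops out of n sampled is still no more likely than alpha.
    Found by bisection, since binom_tail is increasing in p."""
    lo, hi = 0.0, 1.0
    for _ in range(60):
        mid = (lo + hi) / 2
        if binom_tail(n, compliant, mid) < alpha:
            lo = mid
        else:
            hi = mid
    return lo

# Illustrative only (not VA's actual counts): 50 sampled laptops,
# all found compliant, supports a 95 percent-confidence claim that
# at least ~94 percent of the population complies.
print(round(lower_conf_bound(50, 50), 3))  # prints 0.942
```

When every sampled laptop is compliant, the bound reduces to alpha ** (1 / n), so larger samples support a higher lower bound at the same confidence level, which is how a statement like "95 percent confidence that at least 93 percent comply" can follow from a clean sample.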
For the Denver Health Administration Center and the Denver Regional Office, our sample results allowed us to estimate with 95 percent confidence that at least 93 percent of the laptops would have consistently implemented the directive. On the basis of these results, we concluded that at these six sites, VA had consistently implemented its directive. For the Hines Medical Center and the Washington, D.C., Medical Center, the results of our tests indicated that VA’s directive had not been consistently implemented for one laptop and three laptops at these facilities, respectively.

We performed our work at VA headquarters in Washington, D.C., and at the selected VA facilities listed above, in accordance with generally accepted government auditing standards, from November 2006 through August 2007.

This appendix includes the actions the Department of Veterans Affairs (VA) has taken or is planning to take to address 17 recommendations related to Federal Information Security Management Act findings made by the VA Office of Inspector General (IG), as reported to us by the completion of our review in August 2007.

The Department of Veterans Affairs (VA) had at least 1,500 security incidents reported between December 2003 and January 2007, which included the loss of personal information. Below is additional information on a selection of incidents, including all publicly reported incidents subsequent to May 3, 2006, that were reported to the department during this period and what actions it took to respond to these incidents. These incidents were selected from data obtained from VA to provide illustrative examples of the incidents that occurred at the department during this period.

December 9, 2003: stolen hard drive with data on 100 appellants. A VA laptop computer with benefit information on 100 appellants was stolen from the home of an employee working at home.
As a result, the agency office was going to recall all laptop computers and have encryption software installed by December 23, 2003.

November 24, 2004: unintended disclosure of personal information. A public drive on a VA e-mail system permitted entry to folders/files containing veterans’ personal information (names, Social Security numbers, dates of birth, and in some cases personal health information such as surgery schedules, diagnosis, status, etc.) by all users after computer system changes were made. All folders were restricted, and individual services were contacted to set up limited access lists.

December 6, 2004: two personal computers containing data on 2,000 patients stolen. Two desktop personal computers were stolen from a locked office in a research office of a medical center. One of the computers had files containing names, Social Security numbers, next of kin, addresses, and phone numbers of approximately 2,000 patients. The computers were password protected by the standard VA password system. The medical center immediately contacted the agency Privacy Officer for guidance. Letters were mailed to all research subjects informing them of the computer theft and potential for identity theft. VA enclosed letters addressed to three major credit agencies and postage-paid envelopes. This incident was reported to VA and federal incident offices.

March 4, 2005: list of 897 providers’ Social Security numbers sent via e-mail. An individual reported e-mailing a list of 897 providers’ names and Social Security numbers to a new transcription company. This was immediately reported, and the supervisor called the transcription company, spoke with the owner, and requested that the file be destroyed immediately. Notification letters were sent out to all 897 providers. Disciplinary action was taken against the employee.

October 14, 2005: personal computer containing data on 421 patients stolen.
A personal computer that contained information on 421 patients was stolen from a medical center. The information on the computer included patients’ names; the last four digits of their Social Security numbers; and their height, weight, allergies, medications, recent lab results, and diagnoses. The agency’s Privacy Officer and medical center information security officer were notified. The use of credit monitoring was investigated, and it was determined that because the entire Social Security number was not listed, it would not be necessary to use these services at the time.

February 2, 2006: inappropriate access of VA staff medical records. A VA staff member accessed several coworkers’ medical records to find a date of birth. Employee information was compromised, and several records were accessed on more than one occasion. No resolution was recorded.

April 11, 2006: suspected hacker compromised systems with employee’s assistance. A former VA employee is suspected of hacking into a medical center computer system with the assistance of a current employee who provided rotating administrator passwords. All systems in the medical center serving 79,000 veterans were compromised.

May 5, 2006: missing backup tape with sensitive information on 7,052 individuals. An office determined it was missing a backup tape containing sensitive information. On June 29, 2006, it was reported that approximately 7,052 veterans were affected by the incident. On October 11, 2006, notification letters were mailed, and 5,000 veterans received credit protection and data breach analysis for 2 years.

August 3, 2006: desktop computer with approximately 18,000 patient financial records stolen. A desktop computer was stolen from a secured area at a contractor facility in Virginia that processes financial accounts for VA. The desktop computer was not encrypted. Notification letters were mailed and credit monitoring services offered.
September 6, 2006: laptop with patient information on an unknown number of individuals stolen. A laptop attached to a medical device at a VA medical center was stolen. It contained patient information on an unknown number of individuals. Notification letters and credit protection services were offered to 1,575 patients.

January 22, 2007: external hard drive with 535,000 individual records and 1.3 million non-VA physician provider records missing or stolen. An external hard drive used to store research data with 535,000 individual records and 1.3 million non-VA physician provider records was discovered missing or stolen from a research facility in Birmingham, Alabama. Notification letters were sent to veterans and providers, and credit monitoring services were offered to those individuals whose records contained personally identifiable information.

In addition to the individual named above, key contributions to this report were made by Charles Vrabel (Assistant Director), James Ashley, Mark Canter, Barbara Collier, Mary Hatcher, Valerie Hopkins, Leena Mathew, Jeanne Sung, and Amos Tevelow.
|
In May 2006, the Department of Veterans Affairs (VA) announced that computer equipment containing personal information on approximately 26.5 million veterans and active duty military personnel had been stolen. Given the importance of information technology (IT) to VA's mission, effective information security controls are critical to maintaining public and veteran confidence in its ability to protect sensitive information. GAO was asked to evaluate (1) whether VA has effectively addressed GAO and VA Office of Inspector General (IG) information security recommendations and (2) actions VA has taken since May 2006 to strengthen its information security practices and secure personal information. To do this, GAO examined security policies and action plans, interviewed pertinent department officials, and conducted testing of encryption software at select VA facilities. Although VA has made progress, it has not yet fully implemented most of the key GAO and IG recommendations to strengthen its information security practices. Specifically, VA has implemented two GAO recommendations: to develop a process for managing its plan to correct identified weaknesses and to regularly report on progress in updating its security plan to the Secretary. However, it has not fully implemented two other GAO recommendations: to complete a comprehensive security management program and to ensure consistent use of information security performance standards for appraising senior VA executives. In addition, the department has not yet fully implemented 20 of 22 recommendations made by the IG in 2006. For example, VA has not completed activities to appropriately restrict access to data, networks, and department facilities; ensure that only authorized changes and updates to computer programs are made; and strengthen critical infrastructure planning. 
Because these recommendations have not yet been implemented, unnecessary risk exists that the personal information of veterans and others, such as medical providers, will be exposed to data tampering, fraud, and inappropriate disclosure. Since the May 2006 security incident, VA has continued or begun several major initiatives to strengthen its information security practices and secure personal information within the department, but more remains to be done. These initiatives include continuing efforts begun in October 2005 to reorganize its management structure to provide better oversight and fiscal discipline over its IT systems; developing an action plan to correct identified weaknesses; establishing an information protection program; improving its incident management capability; and establishing an office responsible for oversight of IT within the department. However, implementation shortcomings limit the effectiveness of these initiatives. For example, no documented process exists between the Director of Field Operations and Security and the chief information security officer (CISO) to ensure the effective coordination and implementation of security policies and procedures within the department. In addition, the position of the CISO has been unfilled since June 2006. Although 39 percent of items in the department's remedial action plan are tasks to develop, document, revise, or update a policy or program, 87 percent of these items have no corresponding task with an established time frame for implementation across the department. VA also did not have clear guidance for identifying devices that require encryption functionality, and it lacked adequate procedures for incident response and notification. Finally, VA's Office of IT Oversight and Compliance lacks a standard methodology and established criteria to ensure that its examination of internal controls is consistent across VA facilities.
Until the department addresses recommendations to resolve identified weaknesses and implements the major initiatives it has undertaken, it will have limited assurance that it can protect its systems and information from the unauthorized disclosure, misuse, or loss of personal information of veterans and other personnel.
|
Interoperable communications is not an end in itself. Rather, it is a necessary means for achieving an important goal—the ability to respond effectively to and mitigate incidents that require the coordinated actions of first responders, such as multi-vehicle accidents, natural disasters, or terrorist attacks. Public safety officials have pointed out that needed interoperable communications capabilities are based on whether communications are needed for (1) “mutual-aid responses” or routine day-to-day coordination between two local agencies; (2) extended task force operations involving members of different agencies coming together to work on a common problem, such as the 2002 sniper attacks in the Washington, D.C., metropolitan area; or (3) a major event that requires response from a variety of local, state, and federal agencies, such as major wildfires, hurricanes, or the terrorist attacks of September 11, 2001. A California State official with long experience in public safety communications breaks the major event category into three separate types of events: (1) planned events, such as the Olympics, for which plans can be made in advance; (2) recurring events, such as major wildfires and other weather events, that can be expected every year and for which contingency plans can be prepared based on past experience; and (3) unplanned events, such as the September 11th attacks, that can rapidly overwhelm the ability of local forces to handle the problem. Interoperable communications are but one component, although a key one, of an effective incident command planning and operations structure.
As shown in figure 1, determining the most appropriate means of achieving interoperable communications must flow from a comprehensive incident command and operations plan that includes developing an operational definition of who is in charge for different types of events and what types of information would need to be communicated (voice, data, or both) to whom under what circumstances. Other steps include:

defining the range of interoperable communications capabilities needed for specific types of events;

assessing the current capabilities to meet these communications needs;

identifying the gap between current capabilities and defined requirements;

assessing alternative means of achieving the defined interoperable communications capabilities; and

developing a comprehensive plan—including, for example, mutual aid agreements, technology and equipment specifications, and training—for closing the gap between current capabilities and identified requirements.

Interoperable communications requirements are not static, but change over time with changing circumstances (e.g., new threats), new technology (e.g., new equipment), and additional available broadcast spectrum. Consequently, both a short- and long-term “feedback loop” that incorporates regular assessments of current capabilities and needed changes is important.

In addition, the first responder community is extensive and extremely diverse in both size and the types of equipment in its communications systems. According to SAFECOM officials, there are over 2.5 million public safety first responders within more than 50,000 public safety organizations in the United States. Local and state agencies own over 90 percent of the existing public safety communications infrastructure. This intricate public safety communications infrastructure incorporates a wide variety of technologies, equipment types, and spectrum bands.
In addition to the difficulty that this complex environment poses for federal, state, and local coordination, 85 percent of fire personnel, and nearly as many emergency management technicians, are volunteers with elected leadership. Many of these agencies are small and do not have technical expertise; only the largest of the agencies have engineers and technicians. In the past, a stovepiped, single-jurisdiction, or agency-specific approach to developing communications systems prevailed, resulting in communications systems with little or none of the desired interoperability. Public safety agencies have historically planned and acquired communications systems for their own jurisdictions without concern for interoperability. This meant that each state and local agency developed communications systems to meet its own requirements, without regard to the need to talk to adjacent jurisdictions.

For over 15 years, the federal government has been concerned with public safety spectrum issues, including communications interoperability issues. A variety of federal departments and agencies have been involved in efforts to define the problem and to identify potential solutions, such as the Department of Homeland Security (DHS), the Department of Justice (DOJ), the Federal Communications Commission (FCC), and the National Telecommunications and Information Administration (NTIA) within the Department of Commerce (DOC), among others. Today, a combination of federal agencies, programs, and associations are involved in coordinating emergency communications. DHS has several agencies and programs involved with addressing first responder interoperable communication barriers, including the SAFECOM program, the Federal Emergency Management Agency (FEMA), and the Office for Domestic Preparedness (ODP).
As one of its 24 E-Gov initiatives, the Office of Management and Budget (OMB) in 2001 created SAFECOM to unify the federal government’s efforts to help coordinate the work at the federal, state, local, and tribal levels to establish reliable public safety communications and achieve national wireless communications interoperability. The SAFECOM program was brought into DHS in early 2003. In June 2003, SAFECOM partnered with the National Institute of Standards and Technology (NIST) and the National Institute of Justice (NIJ) to hold a summit that brought together over 60 entities involved with communications interoperability policy setting or programs. Several technical factors specifically limit interoperability of public safety wireless communications systems. First, public safety agencies have been assigned frequencies in new bands over time as available frequencies became congested and as new technology made other frequencies available for use. As a result, public safety agencies now operate over multiple frequency bands. Operating on these different bands has required different radios because technology to include all bands in one radio was not available. Thus, the new bands provided additional capabilities but fragmented the public safety radio frequency spectrum, making communications among different jurisdictions difficult. Another technical factor inhibiting interoperability is the use of different technologies, or different applications of the same technology, by manufacturers of public safety radio equipment. One manufacturer may design equipment with proprietary technology that will not work with equipment produced by another manufacturer. The current status of wireless interoperable communications across the nation—including the current interoperable communications capabilities of first responders and the scope and severity of the problems that may exist—has not been determined.
Although various reports have documented the lack of interoperability of public safety first responders’ wireless communications in specific locations, complete and current data do not exist documenting the scope and severity of the problem at the local, state, interstate, or federal levels across the nation. Accumulating these data may be difficult, however, because several problems inhibit efforts to identify and define current interoperable communications capabilities and future requirements. First, current capabilities must be measured against a set of requirements for interoperable communications, and these requirements vary according to the characteristics of specific incidents at specific locations. Who needs to talk to whom, when they need to talk, and what set of communications capabilities should be built or acquired to satisfy these requirements depend upon whether interoperable communications are needed for day-to-day mutual aid; for task force operations that occur when members of different agencies come together to work on a common problem, such as the National Capital Region sniper investigation; or for major events, such as a terrorist attack. Requirements for interoperable communications also may change with the expanding definition of first responders—from the traditional police, fire, and emergency medical providers to include professions such as health care providers—and with the evolution of new technology. Establishing a national baseline for public safety wireless communications interoperability will be difficult because the definition of who to include as a first responder is evolving, and interoperability problems and solutions are situation specific and change over time to reflect new technologies and operational requirements.
In a joint SAFECOM/AGILE program planning meeting in December 2003, participants agreed that a national baseline is necessary to know what the nation’s interoperability status really is, to set goals, and to measure progress. However, at the meeting, participants said they did not know how they were going to define interoperability, how they could measure interoperability, or how to select their sample of representative jurisdictions; this was all to be determined at a later date. SAFECOM has embarked on an effort to establish a national baseline of interoperable communications capabilities by July 2005. At the time of our review, SAFECOM officials acknowledged that establishing a baseline will be difficult and said that, although they are still working out the details of their baseline study, they expect to complete it by July 2005. DHS also has other work under way that may provide a tool for such self-assessments by public safety officials. An ODP official in the Border and Transportation Security Directorate of DHS said ODP is supporting the development of a communications and interoperability needs assessment for the 118 jurisdictions that make up the Kansas City region. The official said the assessment will provide an inventory of communications equipment and identify how the equipment is used. He also said the results of this prototype effort will be placed on a CD-ROM and distributed to states and localities to provide a tool to conduct their own self-assessments. SAFECOM officials said they will review ODP’s assessment tool as part of a coordinated effort and use this tool if it meets the interoperability requirements of first responders. Second, technical standards for interoperable communications are still under development. Beginning in 1989, a partnership between industry and the public safety user community developed what is known as the Project 25 (P-25) standards.
According to the Public Safety Wireless Network (PSWN) program office, the Project 25 standards remain the only user-defined set of standards in the United States for public safety communications. DHS purchased radios that incorporate the P-25 standards for each of the nation’s 28 urban search and rescue teams. PSWN believes P-25 is an important step toward achieving interoperability, but the standards do not mandate interoperability among all manufacturers’ systems. Standards development continues today as new technologies emerge that meet changing user needs and new policy requirements. Third, new public safety mission requirements for video, imaging, and high-speed data transfers; new and highly complex digital communications systems; and the use of commercial wireless systems are potential sources of new interoperability problems. Availability of new spectrum can also encourage the development of new technologies and require further development of technical standards. For example, the FCC recently designated a new band of spectrum, the 4.9 Gigahertz (GHz) band, for use in support of public safety. The FCC provided this additional spectrum to public safety users to support new broadband applications such as high-speed digital technologies and wireless local area networks for incident scene management. The FCC requested comments in particular on the implementation of technical standards for fixed and mobile operations on the band. The National Public Safety Telecommunications Council (NPSTC) has established a task force whose work includes interoperability standards for the 4.9 GHz band. The federal government, states, and local governments have important roles to play in assessing interoperability needs, identifying gaps in meeting those needs, and developing comprehensive plans for closing those gaps. The federal government can provide the leadership, long-term commitment, and focus to help state and local governments meet these goals.
For example, national requirements for interoperable communications are currently incomplete, no national architecture exists, there is no standard database to coordinate frequencies, and no common nomenclature or terminology exists for interoperability channels. States alone cannot develop the requirements or a national architecture, compile the nationwide frequency database, or develop a common nationwide nomenclature. Moreover, only the federal government can allocate communications spectrum for public safety use. One key barrier to the development of a national interoperability strategy has been the lack of a statement of national mission requirements for public safety—what set of communications capabilities should be built or acquired—and a strategy to get there. A key initiative in the SAFECOM program plan for the year 2005 is to complete a comprehensive Public Safety Statement of Requirements. The Statement is to provide functional requirements that define how, when, and where public safety practitioners communicate. On April 26, 2004, DHS announced the release of the first comprehensive Statement of Requirements, defining future communication requirements and outlining the future technology needed to meet those requirements. According to DHS, the Statement provides a shared vision and an architectural framework for future interoperable public safety communications. DHS describes the Statement of Requirements as a living document that will define future communications services as they change or become new requirements for public safety agencies in carrying out their missions. SAFECOM officials said additional versions of the Statement will incorporate whatever is needed to meet future needs but did not provide specific details. A national architecture has not yet been prepared to guide the creation of interoperable communications.
An explicit, commonly understood, and agreed-to blueprint, or enterprise architecture, is required to effectively and efficiently guide modernization efforts. For a decade, GAO has promoted the use of enterprise architectures, recognizing them as a crucial means to a challenging goal—agency operational structures that are optimally defined in both business and technological environments. SAFECOM officials said development of a national architecture will take time because SAFECOM must first assist state and local governments to establish their communications architectures. They said SAFECOM will then collect the state and local architectures and fit them into a national architecture that links federal communications into the state and local infrastructure. Technology solutions by themselves are not sufficient to fully address communication interoperability problems in a given local government, state, or multi-state region. State and local officials consider a standard database of interoperable communications frequencies to be essential to frequency planning and coordination, both for interoperability frequencies and for general public safety purposes. Police and fire departments often have different concepts and doctrines on how to operate an incident command post and use interoperable communications. Similarly, first responders, such as police and fire departments, may use different terminology to describe the same thing. Differences in terminology and operating procedures can lead to communications problems even where the participating public safety agencies share common communications equipment and spectrum. State and local officials have drawn specific attention to problems caused by the lack of common terminology in naming the same interoperability frequency. The Public Safety National Coordination Committee (NCC), appointed by the FCC, was to make recommendations for public safety use of the 700 MHz communications spectrum.
The NCC recommended that the FCC mandate (1) Regional Planning Committee use of a standard database to coordinate frequencies during license applications and (2) designation of specific names for each interoperability channel on all public safety bands. The NCC said that both were essential to achieve interoperability because public safety officials needed to know what interoperability channels were available and what they were called. In January 2001, the FCC rejected both recommendations. It said that the first recommendation was premature because the database had not been fully developed and tested. The FCC directed the NCC to revisit the issue of mandating the database once the database was developed and had begun operation. The FCC rejected the common nomenclature recommendation because it said that it would have to change the rules each time the public safety community wished to revise a channel label. In its final report of July 25, 2003, the NCC renewed both recommendations. It noted that the FCC had received a demonstration of a newly developed and purportedly operational database, the Computer Assisted Pre-Coordination Resource and Database System (CAPRAD), and that its recommendations were consistent with previous FCC actions, such as the FCC’s designation of medical communications channels for the specific purpose of uniform usage. In 2001, OMB established SAFECOM to unify the federal government’s efforts to help coordinate work at the federal, state, local, and tribal levels in order to provide reliable public safety communications and achieve national wireless communications interoperability. However, SAFECOM was established as an OMB E-Gov initiative with a goal of improving interoperable communications within 18 to 24 months—a timeline too short for addressing the complex, long-term nature of the interoperability problem.
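The value of the standard database and common nomenclature the NCC recommended can be illustrated with a small sketch: with a shared database and agreed channel names, any agency can resolve the same channel label to the same frequency. The record layout, channel names, and frequencies below are invented for illustration and do not reflect CAPRAD’s actual design:

```python
# Illustrative sketch of a shared interoperability-channel database with a
# common nomenclature. All names and frequencies are hypothetical.

channels = [
    {"name": "VCALL10", "band": "VHF", "mhz": 155.7525, "use": "calling"},
    {"name": "VTAC11",  "band": "VHF", "mhz": 151.1375, "use": "tactical"},
    {"name": "UTAC41",  "band": "UHF", "mhz": 453.4625, "use": "tactical"},
]

def lookup(name):
    """With a common nomenclature, any agency resolves the same channel
    name to the same frequency, avoiding confusion from local labels."""
    for ch in channels:
        if ch["name"] == name:
            return ch["mhz"]
    return None  # name not in the shared database
```

Without such a shared reference, two agencies can program the same frequency under different local names, or the same name to different frequencies, and discover the mismatch only during an incident.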
In addition, the roles and responsibilities of various federal agencies within and outside DHS involved in communications interoperability have not been fully defined, and SAFECOM’s authority to oversee and coordinate federal and state efforts has been limited, in part because it has been dependent upon other federal agencies for cooperation and funding and has operated without signed memorandums of understanding negotiated with various agencies. DHS, where SAFECOM now resides, announced in May 2004 that it had created an Office for Interoperability and Compatibility within the Science and Technology Directorate to coordinate the federal response to the problems of wireless and other functional interoperability and compatibility. The new office is responsible for coordinating DHS efforts to address the interoperability and compatibility of first responder equipment, including both communications equipment and other equipment, such as the personal protective equipment used by police and fire departments from multiple jurisdictions. The plan as approved by the Secretary of DHS states that by November 2004 the new office will be fully established and that action plans and a strategy will be prepared for each portfolio (type or class of equipment). The plan presents a budget estimate for creation of the office through November 2004 but does not include costs to implement each portfolio’s strategy. The plans for the new office do not clarify the roles of various federal agencies or specify what oversight authority the new office will have over federal agency communications programs. As of June 2004, the exact structure and funding for the office, including SAFECOM’s role within the office, were still being developed. DHS has not defined how it will convert the current short-term program and funding structures to a permanent program office structure.
When it does, DHS must carefully define the SAFECOM mission and roles in relation to other agencies within DHS and in other federal agencies that have missions that may be related to the OMB-assigned mission for SAFECOM. SAFECOM must coordinate with multiple federal agencies, including ODP within DHS, AGILE and the Office for Community Oriented Policing Services (COPS) in DOJ, the Department of Defense, the FCC, the National Telecommunications and Information Administration within the Department of Commerce, and other agencies. For example, AGILE is the DOJ program to assist state and local law enforcement agencies to effectively and efficiently communicate with one another across agency and jurisdictional boundaries. The Homeland Security Act assigns the DHS Office for Domestic Preparedness (ODP) primary responsibility within the executive branch for preparing the United States for acts of terrorism, including coordinating or, as appropriate, consolidating communications and systems of communications relating to homeland security at all levels of government. An ODP official said the Homeland Security Act granted authority to ODP to serve as the primary agency for preparedness against acts of terrorism, to specifically include communications issues. He said ODP is working with states and local jurisdictions to institutionalize a strategic planning process that assesses and funds their requirements. ODP also plans to develop tools to link these assessments to detailed interoperable communications plans. SAFECOM officials also will face a complex issue when they address public safety spectrum management and coordination. The National Telecommunications and Information Administration (NTIA) within the Department of Commerce is responsible for federal government spectrum use and the FCC is responsible for state, local, and other nonfederal spectrum use. 
The National Governors’ Guide to Emergency Management noted that extensive coordination will be required between the FCC and NTIA to provide adequate spectrum and to enhance shared local, state, and federal communications. In September 2002, GAO reported that FCC and NTIA efforts to manage their respective areas of responsibility were not guided by a national spectrum strategy and that the agencies had not implemented long-standing congressional directives to conduct joint, national spectrum planning. The FCC and NTIA generally agreed with our recommendation that they develop a strategy for establishing a clearly defined national spectrum plan and submit a report to the appropriate congressional committees. In a separate report, we also discussed several barriers to reforming spectrum management in the United States. On June 24, 2004, the Department of Commerce released two reports entitled Spectrum Policy for the 21st Century, the second of which contained recommendations for assessing and managing public safety spectrum. SAFECOM has limited authority to coordinate federal efforts to assess and improve interoperable communications. Although SAFECOM has developed guidance for use in federal first responder grants, SAFECOM does not have authority to require federal agencies to coordinate their grant award information. SAFECOM is currently engaged in an effort with DOJ to create a “collaborative clearinghouse” that could facilitate federal oversight of interoperable communications funding to jurisdictions and allow states access to this information for planning purposes. The database is intended to decrease duplication of funding and evaluation efforts, de-conflict the application process, maximize the efficiency of limited federal funding, and serve as a data collection tool for lessons learned that would be accessible to state and local governments.
However, SAFECOM officials said that the challenge to implementing the coordinated project is getting federal agency collaboration and compliance. As of February 2004, the database contained award information from the 2003 COPS and FEMA interoperability communications equipment grants, but no others within or outside DHS. SAFECOM’s oversight authority and responsibilities are dependent upon its overall mission. OMB officials told us that they are currently in the process of refocusing the mission of the SAFECOM program into three specific parts: (1) coordinating federal activities through several initiatives, including participation in the Federal Interagency Coordination Council and establishment of a process for federal agencies to report and coordinate with SAFECOM on federal activities and investments in interoperability; (2) developing standards; and (3) developing a national architecture for addressing communications interoperability problems. They said identification of all current and planned federal agency communications programs affecting federal, state, and local wireless interoperability is difficult. According to these officials, OMB is developing a strategy to best utilize the SAFECOM program and examining options to enforce the new coordination and reporting process. SAFECOM officials said they are working to formalize the new reporting and coordination process by developing written agreements with other federal agencies and by obtaining the concurrence of major state and local associations to the SAFECOM governance structure. SAFECOM officials noted that this newly refocused SAFECOM role does not include providing technical assistance or conducting operational testing of equipment. They said that their authority to conduct such activities will come from DHS enabling directives.
SAFECOM officials also said that they have no enforcement authority to require other agencies to use the SAFECOM grant guidance in their funding decisions or to require agencies to provide grant program information to them for use in their database. States, with broad input from local governments, can serve as focal points for statewide planning to improve interoperable communications. The FCC has recognized the important role of states. In its rules and procedures, the FCC concluded that because states play a central role in managing emergency communications and are usually in control at large-scale events and disasters, states should administer the interoperability channels within the 700 MHz band of communications spectrum. States can play a key role in improving interoperable communications by establishing a management structure that includes local participation and input to analyze and identify interoperability gaps between “what is” and “what should be,” developing comprehensive local, state, and regional plans to address such gaps, and funding these plans. The states we visited or contacted—California, Florida, Georgia, Missouri, Washington, and a five-state Midwest consortium—were in various stages of formulating these management structures. However, states are not required to establish a statewide management structure or to develop interoperability plans, and there is no clear guidance on what should be included in such plans. In addition, no requirement exists that interoperability of federal communications systems be coordinated with state and local government communications systems. The use of a standard database on communications frequencies by public safety agencies within the state, and of common terminology for these frequencies, in the preparation and implementation of statewide interoperable plans is essential but is also not required.
Without planning, coordination, and applicable standards—in other words, without a commonly understood and accepted blueprint, or national architecture—the communications systems developed between and among locations and levels of government may not be interoperable. States are key players in responding to normal all-hazards emergencies and to terrorist threats. Homeland Security Presidential Directive 8 notes that awards to states are the primary mechanism for delivery of federal preparedness assistance for these missions. State and local officials also believe that states, with broad local and regional participation, have a key role to play in coordinating interoperable communications supporting these missions. The Public Safety Wireless Network (PSWN), in its report on the role of the state in providing interoperable communications, agreed. According to the PSWN report, state leadership in public safety communications is key to outreach efforts that emphasize development of common approaches to regional and statewide interoperability. The report said that state officials have a vested interest in establishing and protecting statewide wireless infrastructures because public safety communications often must cross more than one local jurisdictional boundary. However, states are not required to establish a statewide capability to (1) integrate statewide and regional interoperability planning and (2) prepare statewide interoperability plans that maximize use of spectrum to meet interoperability requirements of day-to-day operations, joint task force operations, and operations in major events. Federal, state, and local officials are not required to coordinate federal, state, and local interoperability spectrum resources, even though such coordination has significant potential to improve public safety wireless communications interoperability.
As a result, states may not prepare comprehensive and integrated statewide plans that address the specific interoperability issues present in each state across first responder disciplines and levels of government. Several state and local agencies that we talked with emphasized that they are taking steps to address the need for statewide communications planning. State officials also told us that statewide interoperability is not enough because the incidents first responders face could cross state boundaries. Thus, some states are also taking actions to address interstate interoperability problems. For example, Illinois, Indiana, Kentucky, Michigan, and Ohio officials said that their states have combined efforts to form the Midwest Public Safety Communications Consortium to promote interstate interoperability. According to these officials, they also have taken actions to form an interstate committee to develop interoperability plans and solicit support from key players, such as local public safety agencies. The FCC recognized a strong state interest in planning and administering interoperability channels for public safety wireless communications when it adopted various technical and operational rules and policies for the 700 MHz band. In these rules and policies, the FCC concluded that administration of the 2.6 MHz of interoperability channels in that band (approximately 10 percent) should occur at the state level through a State Interoperability Executive Committee (SIEC). The FCC said that states play a central role in managing emergency communications and that state-level organizations are usually in control at large-scale events and disasters or multi-agency incidents. The FCC also found that states are usually in the best position to coordinate with federal government emergency agencies. The FCC said that SIEC administrative activities could include holding licenses, resolving licensing issues, and developing a statewide interoperability plan for the 700 MHz band.
Other SIEC responsibilities could include the creation and oversight of incident response protocols and the creation of chains of command for incident response and reporting. Available data indicate that 12 to 15 states did not create SIECs but have relied on Regional Planning Committees or similar planning bodies. A comprehensive statewide interoperable plan can provide the guiding framework for achieving defined goals for interoperability within a state and for regions within and across states (such as Kansas City, Mo., and Kansas City, Kans.). The NCC recommended that all SIECs prepare an interoperability plan that is filed with the FCC and updated when substantive changes are made or at least every three years. The NCC also recommended to the FCC that SIECs, for homeland security reasons, should administer all interoperability channels in a state, not merely those in the 700 MHz band. According to the NCC, each state should have a central point identified for information on the state’s interoperability capability. None of the four states we visited had finished preparing and funding their state interoperability plans. Washington and Florida were preparing statewide interoperability plans at the time we visited. Georgia officials said they have a state interoperability plan but that it is not funded. However, one other state we contacted, Missouri, has extended SIEC responsibility for interoperability channels beyond the 700 MHz band. The Missouri SIEC has also designated standard operational and technical guidelines as conditions for the use of these bands. The SIEC requires applicants to sign a memorandum of understanding (MOU) agreeing to these conditions in order to use these channels in the state of Missouri. The Missouri SIEC Chairman said the state developed its operational and technical guidelines because the FCC had not established its own guidelines for these interoperability channels in the VHF and UHF bands.
The chairman said Missouri borders on eight other states and expressed concern that these states will develop different guidelines that are incompatible with the Missouri guidelines. He said the FCC was notified of Missouri’s actions but has not taken action to date. In another example, California intends to prepare a statewide interoperability plan. California’s SIEC is re-examining California’s previous stovepiped programs of communications interoperability (separate systems for law enforcement, fire, etc.) in light of the need to maintain tactical channels within disciplines while promoting cross-discipline interoperability. FCC-designated frequency coordinators told the FCC that planning for interoperability channels should include federal spectrum designated for interoperability with state and local governments. We found several examples in our field work that support inclusion of federal agencies in future state and local planning for interoperable communications. For example, a Washington State official told us that regional systems within the state do not have links to federal communications systems and assets. In another example, according to an emergency preparedness official in Seattle, a study of radio interoperable communications in a medical center found that federal agencies such as the FBI are not integrated into hospital or health communications systems, and that other federal agencies have no radio infrastructure to support and participate in a health emergency such as a bio-terrorism event. He told us that he has no idea what the federal communications plan is in the event of a disaster; he said he does not know how to talk to federal health officials responding to an incident or what the federal government will need when it arrives. The federal government is developing a system that could improve interoperable communications on a limited basis between state and federal government agencies.
The Integrated Wireless Network (IWN) is a radio system that is intended to replace the existing radio systems of the DOJ, the Treasury, and DHS. IWN is an exclusive federal law enforcement communications system that is intended to interact and interface with state and local systems as needed but will not replace those systems. According to DOJ officials, IWN is intended to improve federal-to-state/local interoperability but will not address interoperability of state and local systems. However, federal interoperability with state and local wireless communications systems is hindered because NTIA and the FCC control different frequencies in the VHF and UHF bands. To enhance interoperability, NTIA has identified 40 federal government frequencies that can be used by state and local public safety agencies for joint law enforcement and incident response purposes. The FCC, however, designated different frequencies for interoperability in the VHF and UHF bands from the spectrum it controls for use by state and local public safety agencies. Total one-time replacement of the nation’s communications systems is very unlikely because of the costs involved. A 1998 study cited the replacement value of the existing public safety communication infrastructure nationwide at $18.3 billion. DHS officials said this estimate is much higher when infrastructure and training costs are taken into account. Furthermore, DHS recently estimated that reaching an accelerated goal of communications interoperability will require a major investment of several billion dollars within the next 5 to 10 years. As a result of these extraordinary costs, federal funding is but one of several resources state and local agencies must use in order to address these costs. Furthermore, given the high costs, the development of an interoperable communications plan is vital to useful, non-duplicative spending.
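Because NTIA and the FCC designate different interoperability frequencies from the spectrum each controls, a joint federal and state/local operation must first determine which channels both sets of radios can actually share. The following sketch is purely illustrative; all frequencies are invented and do not represent the actual NTIA or FCC designations:

```python
# Hypothetical sketch: finding the interoperability channels common to the
# federal (NTIA-designated) and state/local (FCC-designated) frequency sets.
# All frequencies below are invented for illustration only.

ntia_channels = {167.0875, 168.1125, 414.0375}  # hypothetical federal set (MHz)
fcc_channels = {155.7525, 168.1125, 453.4625}   # hypothetical state/local set (MHz)

# Set intersection yields the channels usable by both federal and
# nonfederal radios in a joint operation.
common = ntia_channels & fcc_channels
print(sorted(common))
```

When the intersection is small or empty, joint operations must fall back on workarounds such as gateway devices or swapped radios, which is one reason coordination of the two designation processes matters.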
However, the federal funding assistance programs to state and local governments do not fully support regional planning for communications interoperability. Federal grants that support interoperability have inconsistent requirements to tie funding to interoperable communications plans. In addition, uncoordinated federal and state level grant reviews limit the government’s ability to ensure that federal funds are used to effectively support improved regional and statewide communications systems. Local, state, and federal officials agree that regional communications plans should be developed to guide decisions on how to use federal funds for interoperable communications; however, the current funding requirements do not support this planning process. Although recent grant requirements have encouraged jurisdictions to take a regional approach to planning, current federal first responder grants are inconsistent in their requirements to tie funding to interoperable communications plans. States and localities are not required to provide an interoperable communications plan as a prerequisite to receiving some federal grant funds. As a result, there is no assurance that federal funds are being used to support a well-developed strategy for improving interoperability. For example, the fiscal year 2004 Homeland Security Grant (HSG) and Urban Areas Security Initiative (UASI) grants require states or selected jurisdictions to conduct a needs assessment and submit a Homeland Security Strategy to ODP. However, the required strategies are high-level and broad in nature. They do not require that project narratives or a detailed communications plan be submitted by grantees prior to receiving grant funds. In another example, fiscal year 2003 funding provided by COPS and FEMA for the Interoperable Communications Equipment Grants did not require that a communications plan be completed prior to receiving grant funds. 
However, grantees were required to document that they were actively engaged in a planning process and to submit a multi-jurisdictional and multidisciplinary project narrative. In addition to variations in requirements to create communications interoperability plans, federal grants also lack consistency in defining what “regional” body should conduct planning. State and local officials also said that the short grant application deadlines for recent first responder grants limited their ability to develop cohesive communications plans or perform a coordinated review of local requests. Federal officials acknowledged that the limited submission time frames present barriers to first responders in developing plans prior to receiving funds. For example, several federal grant programs—the Homeland Security Grant, UASI grant, COPS and FEMA communication equipment grants, Assistance to Firefighters Grant—allow states only 30 or 60 days from the date of grant announcement to submit a grant proposal. These time frames are sometimes driven by appropriations language or by the timing of the appropriations enactment. Furthermore, many grants that have been awarded to state and local governments for communications interoperability have 1- or 2-year performance periods and, according to state and local officials, do not support long-term solutions. For example, Assistance to Fire Fighters Grants, COPS/FEMA’s Interoperable Communications Equipment Grants, and National Urban Search and Rescue grants all have 1-year performance periods. UASI, HSG program, and Local Law Enforcement Block Grants have 2-year performance periods. The federal and state governments lack a coordinated grant review process to ensure that funds allocated to local governments are used for communication projects that complement each other and add to overall statewide and national interoperability. 
Federal and state officials said that each agency reviews its own set of applications and projects, without coordination with other agencies. As a result, grants could be given to bordering jurisdictions that propose conflicting interoperability solutions. In fiscal year 2003, federal officials from COPS and FEMA attempted to eliminate awarding funds to conflicting communication systems within bordering jurisdictions by coordinating their review of interoperable communications equipment grant proposals. However, COPS and FEMA are only two of several federal sources of funds for communications interoperability. In an attempt to address this challenge, in 2003 SAFECOM coordinated with other agencies to create the document Recommended Federal Grant Guidance, Public Safety Communications and Interoperability Grants, which lays out standard grant requirements for planning, building, and training for interoperable communications systems. The guidance is designed to advise federal agencies on who is eligible for the first responder interoperable communications grants, the purposes for which grant funds can be used, and eligibility specifications for applicants. The guidance recommends standard minimum requirements, such as requirements to “…define the objectives of what the applicant is ultimately trying to accomplish and how the proposed project would fit into an overall effort to increase interoperability, as well as identify potential partnerships for agreements.” Additionally, the guidance recommends, but does not require, that applicants establish a governance group consisting of local, tribal, state, and federal entities from relevant public safety disciplines and purchase interoperable equipment that is compliant with phase one of Project-25 standards. 
The House Committee on Appropriations report for the DHS FY 2004 appropriation states that the Committee is aware of numerous federal programs addressing communications interoperability through planning, building, upgrading, and maintaining public safety communication systems, among other purposes. The Committee directed that all DHS grant programs issuing grants for the above purposes incorporate the SAFECOM guidance and coordinate with the SAFECOM program when awarding funding. To better coordinate the government’s efforts, the Committee also encouraged all other federal programs issuing grants for the above purposes to use the guidelines outlined by SAFECOM in their grant programs. However, SAFECOM officials said that they have no enforcement authority to require other agencies to use this guidance in their funding decisions or to require agencies to provide grant program information to them for use in their database. A fundamental barrier to successfully addressing interoperable communications problems for public safety has been the lack of effective, collaborative, interdisciplinary, and intergovernmental planning. Jurisdictional boundaries and unique public safety agency missions have often fostered barriers that hinder cooperation and collaboration. No one first responder agency, jurisdiction, or level of government can “fix” the nation’s interoperability problems, which vary across the nation and often cross first responder agency and jurisdictional boundaries. Changes in spectrum available to federal, state and local public safety agencies— primarily a federal responsibility conducted through the FCC and NTIA— changes in technology, and the evolving missions and responsibilities of public safety agencies in an age of terrorism all highlight the ever-changing environment in which interoperable communications needs and solutions must be addressed. 
Interdisciplinary, intergovernmental, and multi-jurisdictional partnership and collaboration are essential for effectively addressing interoperability shortcomings. We are making recommendations to DHS and OMB to improve the assessment and coordination of interoperable communications efforts. We recommend that the Secretary of DHS: in coordination with the FCC and National Telecommunications and Information Administration, continue to develop a nationwide database of public safety frequency channels and a standard nationwide nomenclature for these channels, with clear target dates for completing both efforts; establish requirements for interoperable communications and assist states in assessing interoperability in their states against those requirements; through DHS grant guidance encourage states to establish a single, statewide body to assess interoperability and develop a comprehensive statewide interoperability plan for federal, state, and local communications systems in all frequency bands; and at the appropriate time, require through DHS grant guidance that federal grant funding for communications equipment shall be approved only upon certification by the statewide body responsible for interoperable communications that grant applications for equipment purchases conform with statewide interoperability plans. We also recommend that the Director of OMB, in conjunction with DHS, review the interoperability mission and functions now assigned to SAFECOM and establish those functions as a long-term program with adequate authority and funding. 
In commenting on a draft of this report, the Department of Homeland Security described actions the department is taking that are generally consistent with the intent of our recommendations but that do not directly address the specific steps detailed in our recommendations with respect to the establishment of statewide bodies responsible for interoperable communications within the state, the development of comprehensive statewide interoperability plans, and the tying of federal funds for communications equipment directly to those statewide interoperability plans. OMB did not provide written comments on the draft report. This concludes my prepared statement, Mr. Chairman, and I would be pleased to answer any questions you or other members of the Subcommittee may have at this time. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Lives of first responders and those whom they are trying to assist can be lost when first responders cannot communicate effectively as needed. This report addresses issues of determining the status of interoperable wireless communications across the nation, and the potential roles that federal, state, and local governments can play in improving these communications. In a November 6, 2003, testimony, GAO said that no one group or level of government could "fix" the nation's interoperable communications problems. Success would require effective, collaborative, interdisciplinary, and intergovernmental planning. The nationwide extent to which public safety wireless communications systems can talk among themselves as necessary and authorized has not been determined. Data on current conditions compared to needs are necessary to develop plans for improvement and measure progress over time. However, the nationwide data needed to do this are not currently available. The Department of Homeland Security (DHS) intends to obtain this information by the year 2005 by means of a nationwide survey. However, at the time of our review, DHS had not yet developed its detailed plans for conducting this survey and reporting its results. The federal government can take a leadership role in support of efforts to improve interoperability by developing national requirements and a national architecture, developing nationwide databases, and providing technical and financial support for state and local efforts to improve interoperability. In 2001, the Office of Management and Budget (OMB) established the federal government's Wireless Public Safety Interoperable Communications Program, SAFECOM, to unify efforts to achieve national wireless communications interoperability. However, SAFECOM's authority and ability to oversee and coordinate federal and state efforts have been limited by its dependence upon other agencies for funding and their willingness to cooperate. 
OMB is currently examining alternative methods to implement SAFECOM's mission. In addition, DHS, where SAFECOM now resides, has recently announced it is establishing an Office for Interoperability and Compatibility to coordinate the federal response to the problems of interoperability in several functions, including wireless communications. The exact structure and funding for this office, which will include SAFECOM, are still being developed. State and local governments can play a large role in developing and implementing plans to improve public safety agencies' interoperable communications. State and local governments own most of the physical infrastructure of public safety communications systems, and states play a central role in managing emergency communications. The Federal Communications Commission recognized the central role of states in concluding that states should manage the public safety interoperability channels in the 700 MHz communications spectrum. States, with broad input from local governments, are a logical choice to serve as a foundation for interoperability planning because incidents of any level of severity originate at the local level with states as the primary source of support. However, states are not required to develop interoperability plans, and there is no clear guidance on what should be included in such plans.
Since the 1960s, the United States has used satellites to observe the earth and its land, oceans, atmosphere, and space environments. Satellites provide a global perspective of the environment and allow observations in areas that may be otherwise unreachable or unsuitable for measurements. Used in combination with ground, sea, and airborne observing systems, satellites have become an indispensable part of measuring and forecasting weather and climate. For example, satellites provide the graphical images used to identify current weather patterns, as well as the data that go into numerical weather prediction models. These models are used to forecast weather 1 to 2 weeks in advance and to issue warnings about severe weather, including the path and intensity of hurricanes. Satellite data are also used to warn infrastructure owners when increased solar activity is expected to affect key assets, including communication satellites or the electric power grid. When collected over time, satellite data can also be used to observe climate change—the trends and changes in the earth’s climate. For example, these data are used to monitor and project seasonal, annual, and decadal changes in the earth’s temperature, vegetation coverage, and ozone coverage. One key subset of satellite-provided data is climate data. These data are used in combination with ground and ocean observing systems to understand seasonal, annual, and decadal variations in the climate. Satellites provide land observations such as measurements of soil moisture, changes in how land is used, and vegetation growth; ocean observations such as sea levels, sea surface temperature, and ocean color; and atmospheric observations such as greenhouse gas levels (e.g., carbon dioxide), aerosol and dust particles, and moisture concentration. 
When these data are obtained over long periods of time, scientists are able to use them to determine short- and long-term trends in how the earth’s systems work and how they work together. For example, climate measurements have allowed scientists to better understand the effect of deforestation on how the earth absorbs heat, retains rainwater, and absorbs greenhouse gases. Scientists also use climate data to help predict climate cycles that affect the weather, such as El Niño, and to develop global estimates of food crop production for a particular year or season. Another subset of satellite-provided environmental information is space weather data. Satellite-provided observations of space weather generally describe changes in solar activity in the space environment. Just as scientists use observations of weather that occurs on the earth’s surface and in its atmosphere to develop forecasts, scientists and researchers use space weather observations to detect and forecast solar storms that may be potentially harmful to society. NASA, NOAA, and DOD all have responsibilities for acquiring, processing, and disseminating environmental data and information from research or operational satellites. In addition to these agencies, there are two interagency organizations—the U.S. Group on Earth Observations (USGEO) and the U.S. Global Change Research Program (USGCRP)—that are primarily responsible for coordinating federal efforts with respect to observations of the earth’s environment. The National Space Weather Program serves as the coordinating body for space weather. USGEO and USGCRP report to the Executive Office of the President through the National Science and Technology Council’s Committee on Environment and Natural Resources, while the National Space Weather Program coordinates its activities through NOAA’s Office of the Federal Coordinator for Meteorology. The Executive Office of the President provides oversight for federal space- based environmental observation. 
Within the Executive Office of the President, the Office of Science and Technology Policy (OSTP), the Office of Management and Budget (OMB), and the Council on Environmental Quality carry out these governance responsibilities. In addition, the National Science and Technology Council and its Committee on Environment and Natural Resources provide the Executive Office of the President with executive-level coordination and advice. Since the 1960s, the United States has operated two separate operational polar-orbiting meteorological satellite systems: the Polar-orbiting Operational Environmental Satellite (POES) series, which is managed by NOAA, and the Defense Meteorological Satellite Program (DMSP), which is managed by the Air Force. Currently, there is one operational POES satellite and two operational DMSP satellites that are positioned so that they cross the equator in the early morning, midmorning, and early afternoon. In addition, the government is also relying on a European satellite, called the Meteorological Operational (MetOp) satellite. Together, they ensure that, for any region of the earth, the data provided to users are generally no more than 6 hours old. With the expectation that combining the POES and DMSP programs would reduce duplication and result in sizable cost savings, a May 1994 Presidential Decision Directive required NOAA and DOD to converge the two satellite programs into a single satellite program—NPOESS—capable of satisfying both civilian and military requirements. To manage this program, DOD, NOAA, and NASA formed a tri-agency Integrated Program Office, with NOAA responsible for overall program management for the converged system and for satellite operations; the Air Force responsible for acquisition; and NASA responsible for facilitating the development and incorporation of new technologies into the converged system. 
Since the program’s inception, NPOESS costs have grown by over $8 billion, and launch schedules have been delayed by over 5 years. In addition, as a result of a 2006 restructuring of the program, the agencies reduced the program’s functionality by decreasing the number of originally planned satellites, orbits, and instruments. The restructuring also led agency executives to mitigate potential data gaps by deciding to use a planned demonstration satellite, called the NPOESS Preparatory Project (NPP) satellite, as an operational satellite providing climate and weather data. Even after this restructuring, however, the program continued to encounter technical issues, management challenges, schedule delays, and further cost increases. To address these issues, in recent years we have made a series of recommendations to, among other things, improve executive-level oversight and develop realistic time frames for revising cost and schedule baselines. In August 2009, the Executive Office of the President formed a task force, led by OSTP, to investigate the management and acquisition options that would improve the NPOESS program. As a result of this review, the Director of OSTP announced in February 2010 that NOAA and DOD will no longer jointly procure the NPOESS satellite system; instead, each agency would plan and acquire its own satellite system. Specifically, NOAA is to be responsible for the afternoon orbit and the observations planned for the first and third NPOESS satellites. DOD is to be responsible for the early-morning orbit and the observations planned for the second and fourth NPOESS satellites. The partnership with the European satellite agencies for the midmorning orbit is to continue as planned. NOAA has developed preliminary plans for its new satellite acquisition program—called the Joint Polar Satellite System (JPSS)—to meet the requirements of the afternoon NPOESS orbit. 
Specifically, NOAA plans to acquire two satellites; the plans call for the first JPSS satellite to be available for launch in 2014, and the second JPSS satellite to be available for launch in 2018. NOAA will also provide the ground systems for both the JPSS and DOD programs. NOAA is also planning technical changes to the satellites, including using a smaller spacecraft than the one planned for NPOESS and removing sensors that were planned for the NPOESS satellites in the afternoon orbit. In addition, NOAA plans to transfer the management of acquisition from the NPOESS program office to NASA’s Goddard Space Flight Center, so that it can be co-located at a space system acquisition center as advocated by an independent review team. NOAA has developed a team to lead the transition from NPOESS to JPSS, and plans to begin transitioning in July and complete the transition plan—including cost and schedule estimates—by the end of September. DOD is at an earlier stage in its planning process, in part because it has more time before the first satellite in the morning orbit is needed. DOD officials are currently developing plans—including costs, schedules, and risks—for their new program, called the Defense Weather Satellite System. DOD expects to make final decisions on the spacecraft, sensors, procurement strategy, and staffing in August 2010, and begin the program immediately. Because neither agency has finalized plans for its acquisition, the full impact of OSTP’s decision on the expected cost, schedule, and capabilities is unknown. Cost: NOAA anticipates that the JPSS program will cost approximately $11.9 billion to complete through 2024. Although this estimated cost is less than the current baseline and recent estimates for the NPOESS program, DOD will still need to fund and develop satellites to meet the requirements for the early morning orbit. DOD’s initial estimates are for its new program to cost almost $5 billion through fiscal year 2015. 
Thus, the cost of the two acquisitions will likely exceed the baselined life-cycle cost of the NPOESS program. Schedule: Neither NOAA nor DOD has finalized plans that show the full impact of the restructuring on the schedule for satellite development. We have previously reported that restructuring a program like NPOESS could take significant time to accomplish, due in part to the time required to revise, renegotiate, or develop important acquisition documents, including contracts and interagency agreements. With important decisions and negotiations still pending, it is likely that the expected launch date of the first JPSS satellite will be delayed. Capabilities: Neither agency has made final decisions on the full set of sensors—or which satellites will accommodate them—for their respective satellite programs. Until those decisions are made, it will not be possible to determine the capabilities that these satellites will provide and their associated costs. Timely decisions on cost, schedule, and capabilities would allow both acquisitions to move forward and satellite data users to start planning for any data shortfalls they may experience. Until DOD and NOAA finalize their plans, it is not clear whether the new acquisitions will meet the requirements of both civilian and military users. Moving forward, the agencies face key risks in transitioning from NPOESS to their new programs, including loss of key staff and capabilities, delays in negotiating contract changes and establishing new program offices, failure to support the other agency’s requirements, insufficient oversight of new program management, and potential cost growth from contract terminations and other program changes. Loss of key staff and capabilities: The NPOESS program office is composed of NOAA, NASA, Air Force, and contractor staff with knowledge and experience in the status, risks, and lessons learned from the NPOESS program. 
This knowledge will be critical to moving the program forward both during and after the transition period. However, within the past year, the program office has lost its Program Executive Officer, Deputy Program Executive Officer, and System Program Director—the top three individuals who oversee day-to-day operations. Thus, final critical decisions on work slowdowns and priorities will be made by a new Program Executive Officer, who has only overseen the program for a few weeks. In addition, program office staff have already begun leaving—or looking for other employment—due to the uncertainties about the future of the program office. Unless NOAA and DOD are proactive in retaining these staff, the new program may waste valuable time if staff must relearn program details, and it may repeat the mistakes of prior program staff and lose the lessons they learned. Delays in negotiating contract changes and establishing new programs: According to NOAA officials, the plan for JPSS may require negotiations with contractors and between contractors and their subcontractors. In addition, both NOAA and DOD will need to establish and fully staff program offices to facilitate and manage the transition and new programs. Until decisions are made on how the program is to proceed with contract changes and terminations, the contractors and program office cannot implement the chosen solution, and some decisions, such as how to hold schedule slips to a minimum, could become much more difficult. Failure to support the other agency’s requirements: As a joint program, NPOESS was expected to fulfill many military, civilian, and research requirements for environmental data. However, because the requirements of NOAA and DOD are different, the agencies may develop programs that meet their own needs but not the other’s. 
If the agencies cannot find a way to build a partnership that facilitates both efficient and effective decision-making on data continuity needs, the needs of both agencies may not be adequately incorporated into the new programs. Insufficient oversight of new program management: Under its new JPSS program, NOAA plans to transfer parts of the NPOESS program to NASA, but it has not yet defined how it will oversee NASA’s efforts. We have reported that NASA has consistently underestimated time and cost and has not adequately managed risk factors such as contractor performance. Because of these issues, we listed NASA’s acquisition management as a high-risk area in 1990, and it remains a high-risk area today. NOAA officials reported that they are developing a management control plan with NASA and intend to perform an independent review of this plan when it is completed. They could not provide a time frame for its completion. Without strong NOAA oversight of NASA’s management of program components, JPSS may continue to face the same cost, schedule, and contract management challenges as the NPOESS program. Cost growth resulting from contract and program changes: Because neither agency has fully developed plans for its respective program, it is unclear whether contracts will need to be fully or partially terminated, and what the terminations and other program changes could ultimately cost. We have previously reported that if the government decides to terminate a contract for convenience, it must compensate the contractor—in the form of a termination settlement—for the work it has performed. However, a settlement only addresses the government’s obligation under a terminated contract, and there may be additional costs. For example, additional costs could result from awarding a new contract to replace a terminated contract. Until NOAA and DOD make decisions and plans for their programs, the full cost of contract and program changes will be unknown. 
NOAA, NASA, and DOD acknowledge that there are risks associated with the transition to new programs, but they have not yet established plans to mitigate these risks. While NOAA and DOD are developing plans for their new programs, the development of key NPOESS components is continuing. In recent months, the program completed the development of the critical imaging sensor, called the Visible/Infrared Imager/Radiometer Suite (VIIRS), and delivered it to NASA for integration onto the NPP satellite. Four of the five sensors intended for NPP are now on the spacecraft. In addition, the program continues to work on components of the first and second NPOESS satellites, which are to be transferred to NOAA and DOD to become part of their respective follow-on programs. However, the expected launch date of the NPP satellite has been delayed by 9 months (moving the launch date to September 2011 or later), due to technical issues in the development of the NPP sensor that has not yet been integrated. In addition, the development of the VIIRS sensor for the first NPOESS or JPSS satellite is experiencing significant cost overruns. Further, the program is slowing down and may need to stop work on key components because of potential contract liabilities and funding constraints, but it has not developed a prioritized list of which components to stop first. Until the transition risks are effectively mitigated, and unless selected components are able to continue scheduled development, the launches of NPP and the first NOAA and DOD satellites could be further delayed. Further launch delays are likely to jeopardize the availability and continuity of weather and climate data. For example, the POES satellite currently in the afternoon orbit is expected to reach the end of its lifespan at the end of 2012. If NPP is delayed, there could be a gap in polar satellite observations in the afternoon orbit. 
Similarly, a delay in the launch of the first JPSS satellite may lead to a gap in satellite data after NPP reaches the end of its lifespan. For over a decade, the climate community has clamored for an interagency strategy to coordinate agency priorities, budgets, and schedules for environmental satellites over the long term—and the governance structure to implement that strategy. Specifically, in 1999, the National Research Council reported on the need for a comprehensive long-term earth observation strategy and, in 2000, for an effective governance structure that would balance interagency issues and provide authority and accountability for implementing the strategy. The National Research Council and others have repeated these concerns in multiple reports since then, including after the agencies responsible for NPOESS canceled key climate and space weather sensors from the program in 2006. Similarly, in 1999, the Administrators of NOAA and NASA wrote letters to OSTP noting the need for an interagency strategy and the means to implement it. While progress has been made in developing near-term interagency plans, this initiative is languishing without a firm completion date, and federal efforts to establish and implement a strategy for the long-term provision of satellite data are insufficient. Specifically, in 2005, the National Science and Technology Council’s Committee on Environment and Natural Resources established USGEO to develop an earth observation strategy and coordinate its implementation. Since that time, USGEO assessed current and evolving requirements, evaluated them to determine investment priorities, and drafted the Strategic Assessment Report—a report delineating near-term opportunities and priorities for earth observation from both space and ground. According to agency officials, this report is the first in a planned series, and it was approved by OSTP and multiple federal agencies in May 2009. 
However, OSTP has not yet forwarded the draft to the Committee on Environment and Natural Resources and the President’s National Science and Technology Council because it is reconsidering whether to revise or move forward with the plan. USGEO officials could not provide a schedule for completing this near-term interagency plan. This draft report is an important first step in developing a national strategy for earth observations, but it is not sufficient to ensure the long-term provision of data vital to understanding the climate. The draft report integrates different agencies’ requirements and proposes continuing or improving earth observations in 17 separate areas, using both satellite and land-based measuring systems. However, the report does not include costs, schedules, or plans for the long-term provision of satellite data. While the report does note the importance of continuing certain near-term plans for sensors, it does not make recommendations for what to do over the long term. In addition, the federal government lacks a clear process for implementing an interagency strategy. Key offices within the Executive Office of the President with responsibilities for environmental observations, including OSTP and the Council on Environmental Quality, have not established processes or time frames for implementing an interagency strategy—including steps for working with OMB to ensure that agencies’ annual budgets are aligned with the interagency strategy. As a result, even if an interagency strategy were finalized, it is not clear how OSTP and OMB would ensure that the responsibilities identified in the interagency strategy are consistent with agency plans and are funded within agency budgets. 
Until an interagency strategy for earth observation is established, and a clear process for implementing it is in place, federal agencies will continue to procure their immediate priorities on an ad hoc basis, the economic benefits of a coordinated approach to investments in earth observation may be lost, and the continuity of key measurements may be jeopardized. This will hinder our nation’s ability to understand long-term climate changes. While key federal agencies have taken steps to plan for continued space weather observations in the near term, they lack a strategy for the long-term provision of space weather data. Similar to maintaining satellite-provided climate observations, maintaining space weather observations over the long term is important. The National Space Weather Program, the interagency coordinating body for the United States space weather community, has repeatedly recommended taking action to sustain the space weather observation infrastructure on a long-term basis. Agencies participating in the National Space Weather Program have taken short-term actions that may help alleviate near-term gaps in space weather observations, but OSTP has not approved or released two reports that are expected to establish plans for obtaining space weather observations over the long term. Specifically, NOAA and DOD are seeking to replace key experimental space-observing satellites. Further, the National Space Weather Program recently developed two reports at the request of OSTP documenting specific recommendations for the future of space weather, one on what to do about a critical NASA space weather satellite, called the Advanced Composition Explorer, and the other on the replacement of the space weather capabilities removed from the NPOESS program. The program submitted the reports in October and November of 2009, respectively. However, OSTP officials do not have a schedule for approving or releasing the reports. 
While the agencies’ short-term actions and the pending reports hold promise, federal agencies do not currently have a comprehensive interagency strategy for the long-term provision of space weather data. Until OSTP releases the reports, it will not be clear whether they provide a clear strategy to ensure the long-term provision of space weather data—or whether the current efforts are simply ad hoc attempts to ensure short-term data continuity. Without a comprehensive long-term strategy for the provision of space weather data, agencies may make ad hoc decisions to ensure continuity in the near term and risk making inefficient decisions on key investments. In the report being released today, we are making recommendations to ensure that the transition from NPOESS to its successor programs is efficiently and effectively managed. Among other things, we are recommending that the Secretaries of Defense and Commerce direct their respective NPOESS follow-on programs to expedite decisions on the expected cost, schedule, and capabilities of their planned programs; direct their respective NPOESS follow-on programs to develop plans to address key transition risks, including the loss of skilled staff, delays in contract negotiations and setting up new program offices, loss of support for the other agency’s requirements, and oversight of new program management; and direct the NPOESS program office to develop priorities for work slowdown and stoppage to allow the activities that are most important to maintaining launch schedules to continue. In written comments on the NPOESS report, both NOAA and DOD agreed with our recommendations and identified plans to implement them. In addition, NASA made comments on two of our findings. For example, NASA commented on our finding that NOAA would need to provide enhanced oversight of NASA’s management of the JPSS program. 
NASA officials asserted that the proper basis for comparison should not be their leading-edge research missions, but, instead, should be their operational environmental satellite programs. However, the JPSS program does include leading-edge sensor technologies, and the complexity of these sensor technologies has been a key reason for the cost growth and schedule delays experienced to date on the NPOESS program. Thus, it will be important for both NOAA and NASA to ensure that the subcontractors are adequately managed so that technical, cost, and schedule issues are minimized or mitigated. The full text of the three agencies’ comments and our evaluation of those comments are provided in the accompanying report. In the report issued in April, we made recommendations to improve long-term planning for environmental satellites. Specifically, we recommended that the Assistant to the President for Science and Technology, in collaboration with key Executive Office of the President entities (including the Office of Science and Technology Policy, the Office of Management and Budget, the Council on Environmental Quality, and the National Science and Technology Council) establish a deadline to complete and release three key reports on environmental observations. We also recommended that the Assistant to the President direct USGEO to establish an interagency strategy to address the long-term provision of environmental observations from satellites that includes costs and schedules for the satellites, as well as a plan for the relevant agencies’ future budgets, and establish an ongoing process, with timelines, for obtaining approval of the interagency strategy and aligning it with agency plans and annual budgets. 
When asked to comment on our report, the Executive Office of the President did not agree or disagree with our recommendations; however, officials noted that OSTP is currently revising USGEO’s Strategic Assessment Report to update information on launch schedules and on the availability of certain measurements that have changed since completion of the report a year ago. In crafting this strategy, it will be important for OSTP to address long-term interagency needs and to work with OMB to ensure that the long-term plans are aligned with individual agencies’ plans and budgets. If the plan does not include these elements, individual agencies will continue to address only their most pressing priorities, other agencies’ needs may be ignored, and the government may lose the ability to effectively and efficiently address its earth observation needs. In summary, at the end of this fiscal year, the federal government will have spent 16 years and almost $6 billion to combine two legacy satellite programs into one, yet will not have launched a single satellite. Faced with expected cost growth exceeding $8 billion, schedule delays of over 5 years, and continuing tri-agency management challenges, a task force led by the President’s Office of Science and Technology Policy decided to disband NPOESS so that NOAA and DOD could pursue separate satellite acquisitions. While the two agencies are scrambling to develop plans for their respective programs, it is not yet clear what the programs will deliver, when, and at what cost, but it is very likely that they will cost more than the existing NPOESS baseline and recent program office estimates. Timely decisions on cost, schedule, and capabilities are needed to allow both acquisitions to move forward. In addition, the agencies face a number of transition risks, but neither agency has developed plans to mitigate these risks. 
Meanwhile, the NPOESS program is continuing to develop components of the NPP satellite and components of the first two satellites. However, program officials reported that they have slowed all development work, and may need to stop work on these deliverables. Slowing or stopping work could further delay the satellites’ launches, but the program has not developed a prioritized list of what to stop first to mitigate impacts on satellite launches. Until it does so, there may be an increased risk of gaps in satellite data. Although initial steps have been taken to ensure the short-term continuity of key climate and space weather measurements from satellites, the federal government has not taken the necessary steps to ensure the long-term sustainment of these critical measurements. For example, NOAA recently removed sensors from JPSS that were originally planned for the NPOESS satellites in the afternoon orbit, but it is unclear how this will affect other agencies and programs. Until an interagency strategy for earth observation is established, and a clear process for implementing it is in place, federal agencies will continue to procure their immediate priorities on an ad hoc basis, the economic benefits of a coordinated approach to investments in earth observation may be lost, and the continuity of key measurements may be jeopardized. This will hinder our nation’s ability to understand long-term climate changes and impair our ability to measure, predict, and mitigate the effects of space weather. If you have any questions on matters discussed in this testimony, please contact David A. Powner at (202) 512-9286 or at [email protected]. Other key contributors include Colleen Phillips, Assistant Director; Kate Agatone; Franklin Jackson; Kathleen S. Lovett; Lee McCracken; and John Ockay. This is a work of the U.S. government and is not subject to copyright protection in the United States. 
The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Environmental satellites provide data used for weather forecasting, measuring variations in climate over time, and predicting space weather. Due to the continuing cost, schedule, and tri-agency management challenges of the National Polar-orbiting Operational Environmental Satellite System (NPOESS)--a key satellite acquisition managed by the National Oceanic and Atmospheric Administration (NOAA), the Department of Defense (DOD), and the National Aeronautics and Space Administration (NASA)--the White House's Office of Science and Technology Policy (OSTP) decided in February 2010 to disband NPOESS and, instead, to have NOAA and DOD undertake separate acquisitions. GAO was asked to summarize its report being released today on plans for NOAA's and DOD's separate acquisitions and the key risks of the transition, as well as its recent work on federal efforts to establish long-term strategies for satellite-provided climate and space weather data. OSTP's decision to disband NPOESS came at a time when the program's cost estimate had more than doubled--to over $15 billion, the launch date for a demonstration satellite had been delayed by over 5 years, and the tri-agency management structure had repeatedly proven to be ineffective. To implement the decision, NOAA and DOD have begun planning for separate acquisitions to replace NPOESS. NOAA has developed preliminary plans for its new program--called the Joint Polar Satellite System--to meet the requirements of the afternoon NPOESS orbit. DOD expects to make final decisions on the spacecraft and sensors in August 2010. However, because neither agency has completed its plans, the impact of the decision to disband the program on the expected costs, schedules, and capabilities has not yet been determined. 
Moving forward, the agencies face key risks in transitioning from NPOESS to their separate programs, including the loss of key staff and capabilities, delays in negotiating contract changes and establishing new program offices, the loss of support for the other agency's requirements, insufficient oversight of new program management, and cost growth resulting from contract and program changes. While NOAA and DOD are establishing plans for their new programs, the development of key NPOESS components is continuing. However, the launch date of the demonstration satellite--to be used operationally to ensure climate and weather data continuity--has been delayed by 9 months, and the program has slowed down work on all development activities. Until the transition risks are effectively mitigated, and unless components are able to continue scheduled development, it is likely that launch dates will continue to be delayed. Further delays are likely to jeopardize the availability and continuity of critical weather and climate data. For over a decade, the climate community has clamored for a national interagency strategy that coordinates agency priorities, budgets, and schedules for environmental satellites over the long term--and the governance structure to implement that strategy. While the federal government has taken several steps to ensure the provision of environmental data from satellites for both climate and space weather in the short term, federal efforts to ensure the long-term provision of these environmental measurements are still lacking. Specifically, although both the climate and space weather communities have recently drafted reports for OSTP containing recommendations for climate and space weather satellites, respectively, the climate report focuses only on short-term needs and does not include longer-term priorities, nor does it include budgets or schedules. 
Further, OSTP does not have plans for finalizing or releasing either the climate or space weather reports. Until an interagency strategy for environmental observation is established, and a clear process for implementing it is in place, federal agencies will continue to procure their immediate priorities on an ad hoc basis, the economic benefits of a coordinated approach to investments in earth observation may be lost, and our nation's ability to understand long-term climate changes may be limited. In its reports, GAO recommended that NOAA and DOD address key transition risks, and that the President's Assistant for Science and Technology implement interagency strategies for the long-term provision of environmental observations. NOAA and DOD agreed, while the Assistant's office neither agreed nor disagreed, but noted its plan to develop a strategy for earth observations.
Outreach efforts by EAC representatives indicate that most employees support many portions of the legislative proposal under consideration by the Subcommittee but have concerns about provisions in the proposal related to pay. Specifically, employees generally support provisions to make permanent the authorities provided to GAO for voluntary early retirement pay incentives, to provide enhancements in vacation time and relocation expenses that the Comptroller General deems necessary to recruit and retain top employees, and to establish a private sector exchange program. However, many employees are concerned about the provisions that change the way that annual pay decisions are made and, to a lesser extent, the proposed change to traditional protections for pay retention. Employees had differing opinions about the proposed change to GAO’s name. Most employees support the Comptroller General’s proposed provisions to make permanent GAO’s 3-year authority to offer voluntary early retirement and voluntary separation payments to provide flexibility to realign GAO’s workforce. In addition, GAO employees recognize that attracting and retaining high-quality employees and managers throughout the organization is vitally important for the future of GAO. Employees thus generally support the provisions to offer flexible relocation reimbursements, provide upper-level hires with 6-hour leave accrual, and establish an executive exchange program with private sector organizations. Most employees commented positively on these authorities so long as there are internal controls to monitor and report on their use, as is the case for other authorities throughout GAO. Many employees expressed concern about the provisions that affect the determination of annual pay increases and pay retention. 
The opinions expressed by employees generally fall into three categories: (1) general concerns and some supporting views regarding changes in traditional civil service employment rules that could reduce the amount of annual pay increases provided for economic adjustments but provide greater opportunity for rewarding performance, (2) concerns about making a portion of annual economic adjustments variable based on performance assessment, and to a lesser extent (3) concerns about the loss of traditional pay retention protections. The first area of employee concern is proposed changes to traditional federal civil service employment rules that have historically provided a fixed annual increase for all federal employees determined by the President and the Congress. Government employees in general, and GAO employees in particular, often conduct work that can have far-reaching implications and impacts. Such work can positively or negatively affect segments of the population and thereby the general public’s perceptions of, and reactions to, the federal government, including Members of Congress. Over the years, the Congress has developed a bulwark of protections to shield federal workers from reprisals that might result from their service as employees. Included among these has been the process by which federal employees’ salaries are annually adjusted as a result of the passage, and signing into law, of the annual budget. The historical process relies on passage of legislation that includes an annual increase in pay to reflect increases in inflation and overall employment costs, followed by determinations by the President (and the Office of Personnel Management) to calculate the distribution of the legislative economic adjustments between an overall cost-of-living adjustment and locality-based increases to reflect cost differences among cities across the nation. 
The current mechanism for annual federal pay adjustments is found in Public Law 101-509, the Federal Employees Pay Comparability Act. The Comptroller General has expressed his concern about trends in the executive branch that make it highly likely that the current civil service pay system will be the subject of comprehensive reform within the next few years. Citing federal agencies that already have many of these flexibilities, such as the Federal Aviation Administration and the new Department of Homeland Security, as well as agencies currently seeking reform, such as the Department of Defense, he has stated his belief that GAO needs to be “ahead of the curve.” Under the proposal, rather than relying on the administration’s determination and the Congress’ mandate for an annual salary adjustment, GAO can develop and apply its own methodology for the annual cost-of-living adjustments and compensation differences by locality that the Comptroller General believes would be more representative of the nature, skills, and composition of GAO’s workforce. Some employees have expressed the following concerns. Removing GAO from the traditional process significantly alters a key element of federal pay protection that led some employees to seek employment in the federal sector. Changing this protection could diminish the attractiveness of federal service and result in the need for higher salaries to attract top candidates. A portion of appropriations historically intended to provide all federal employees with increases to keep pace with inflation and the cost of living in particular localities should not be tied to individual performance. 
GAO-based annual economic adjustments are more likely to be less than, rather than more than, amounts annually provided by the Congress; thus employees performing at lower (but satisfactory) levels who may not receive an equal or greater amount in the form of a bonus or dividend may experience an effective pay cut from amounts traditionally provided. The flexibility for the Comptroller General to use funds appropriated for cost-of-living adjustments for pay-for-performance purposes could imperil future GAO budgets by making that portion of the annual budget discretionary where it was once mandatory. The wide latitude provided in the proposal gives the Comptroller General broad discretion, with limited accountability, for determining whether employees receive annual across-the-board economic adjustments, the amount of such adjustments, and their timing; if these broad authorities were improperly exercised, some employees could suffer unfair financial harm. The Comptroller General has not made a compelling case regarding the need for these pay-related and other legislative changes, for example by showing that existing cost-of-living adjustment mechanisms are inaccurate or that the agency has had difficulty in attracting and retaining high-quality employees. On the other hand, some employees also recognize that the proposed pay provisions may offer some distinct advantages for some employees. Some employees commented in support of the provision, indicating that the existing system for calculating inflation and local cost adjustments may not accurately reflect reality; most employees would not likely be harmed by a system that allocates a greater share of pay to performance-based compensation; the authorities would allow GAO managers to provide greater financial rewards to the agency’s top performers, as compared to the present pay-for-performance system; and making a stronger link between pay and performance could facilitate GAO’s recruitment of top talent. 
In addition, the provision may, to a limited extent, address a concern of some field employees by providing alternatives to reductions in force in times when mandated pay increases are not fully funded or in other extraordinary circumstances. For example, from 1992 to 1997, GAO underwent budgetary cuts totaling 33 percent (in constant fiscal year 1992 dollars). To achieve these budgetary reductions, GAO staff was reduced by 39 percent, primarily through field office closures and the associated elimination of field-based employees. While we hope the agency will never again have to manage budget reductions of this magnitude, this provides a painful example of the vulnerability of staffing levels, particularly in the field, to budgetary fluctuations. The proposed pay provisions would provide the Comptroller General with greater flexibility to manage any future budget crises by adjusting the annual pay increases of all employees without adversely and disproportionately impacting the careers and lives of field-based employees. In addition to the revised basis for calculating annual economic adjustments, employees are concerned about the provision that transforms a portion of the annual pay increases that have historically been granted to federal employees for cost-of-living and locality-pay adjustments into variable, performance-based pay increases and bonuses. Because the GAO workforce is comprised of a wide range of highly qualified and talented people performing a similarly wide range of tasks, employees recognize that it is likely that some employees at times have more productive years with greater contributions than others. Therefore, most agree with the underlying principle of the provision to provide larger financial rewards for employees determined to be performing at the highest level. 
However, in commenting on the proposal, some employees said that GAO management already has multiple options to reward high performers through bonuses, placement in top pay-for-performance categories, and promotions. Others expressed concern that increased emphasis on individual performance could result in diminished teamwork, collaboration, and morale because GAO work typically is conducted in teams, often comprised of employees who are peers. As one comment put it: “The PFP (pay-for-performance) process involves managers making very fine distinctions in staff’s performance in order to place them in discrete performance management categories. These categories set artificial limits on the number of staff being recognized for their contributions with merit pay and bonuses.” Related to concerns about subjectivity in the performance assessment system, Council representatives and employees expressed concern about data indicating that, as a group, minorities, veterans, and field-based employees have historically received lower ratings than the employee population as a whole. While the data indicate that the disparity is considerably improved or eliminated for employees who have been with the agency fewer than 5 years, some employees have serious reservations about providing even greater discretion in allocating pay based on the current performance management system. To a lesser extent, some employees expressed concerns about the elimination of traditional federal employment rules related to grade and pay retention for employees who are demoted due to such conditions as a workforce restructuring or reclassification. The proposed legislation will allow the Comptroller General to set the pay of employees downgraded as a result of workforce restructuring or reclassification at their current rates (i.e., no drop in current pay), but with no automatic annual increase to basic pay until their salaries are less than the maximum rates of their new grades or bands. 
Employee concern, particularly among some Band II analysts and mission support staff, focuses on the extent to which this provision may result in a substantial erosion in future pay, since there is a strong possibility that these two groups may be restructured in the near future. For example, one observation is that the salary range within pay bands is such that senior analysts who are demoted would likely wait several years for their next increase in pay or bonus. In this circumstance, employees would need to reconcile themselves to no permanent pay increases regardless of their performance. Some employees cited this potential negative impact on staff motivation and productivity and emphasized that to continue providing service at the level of excellence that the Congress and the American people expect from GAO, this agency needs the best contributions of all its midlevel and journeymen employees. However, the EAC recognizes that, absent this kind of authority and given some of the authorities already provided to the Comptroller General, some employees who may be demoted could otherwise face termination rather than diminished salary increases. Finally, employees had differing opinions regarding the provision to change GAO’s name to the Government Accountability Office. Some employees are concerned that the proposed change in GAO’s name to more accurately reflect the work that we do will damage GAO’s “brand recognition.” Most employees who oppose the name change do not see the current name as an impediment to doing our work or to attracting quality employees. Some employees expressed concern that the legacy of high-quality service to the Congress that is embedded in the name “United States General Accounting Office” might be lost by changing the name. Other employees support the name change and cited their own experiences in being recruited or recruiting others and in their interaction with other federal agencies. 
In their opinion, the title “General Accounting Office” reflects misunderstandings and incorrect assumptions about GAO’s role and function by those who are not familiar with our operations and may serve as a deterrent to attracting employees who are otherwise not interested in accounting. We appreciate the Comptroller General’s efforts to involve the Employee Advisory Council and to solicit employee input through discussions of the proposal. As a result of feedback from employees, GAO managers, and the EAC, the Comptroller General has made a number of revisions and clarifications to the legislative proposal along with commitments to address concerns relating to the annual pay adjustment by issuing formal GAO policy establishing his intent to retain employees’ earning power in implementing the authorities; by revising the performance management system; and by deferring implementation of pay changes until 2005. Key among the commitments made by the Comptroller General is his assurance to explicitly consider cost-of-living and locality-pay differentials, among other factors; neither item was in the preliminary proposal. In addition, the Comptroller General has said that employees who are performing adequately will be assured of some annual increase that maintains spending power. In GAO’s weekly newsletter for June 30, he assured employees that successful performers will not witness erosion in earning power and will receive an annual adjustment commensurate with locality-specific costs and salaries. According to the Comptroller General, pay protection commitments that are not included in the statute will be incorporated in the GAO orders required to implement the new authorities. This is consistent with the approach followed when GAO made similar pay protection commitments during the conversion to broad bands in the 1980s. 
To the extent that these steps are taken, overall employee opinion of the changes should improve because much of the concern has focused on making sure that staff who are performing adequately do not witness economic erosion in their pay. In response to concerns regarding the performance management system and the related variable elements of annual pay increases raised by the EAC, employees, and senior managers, the Comptroller General has told employees that he will provide increased transparency in the area of ratings distributions, for example by releasing summary-level performance appraisal results. In addition, the Comptroller General has stated that he plans to take steps to improve the performance management system that could further reduce any disparities. Specifically, on June 26, the Comptroller General released a "Performance Management System Improvement Proposal for the FY 2003 Performance Cycle" that outlines proposed short-term improvements to the analyst performance management system that applies to the majority of GAO employees. These include additional training for staff and performance managers and a reduction in the number of pay categories from five to four. A number of longer-term improvements to the performance appraisal system requiring validation are also under consideration, including weighting competencies and modifying, adding, or eliminating competencies. For all employees to embrace any additional pay-for-performance efforts, it is vital that the Comptroller General take steps that will provide an increased level of confidence that the appraisal process is capable of accurately identifying high performers and fairly distinguishing between levels of performance. Finally, the Comptroller General has agreed to delay implementation of the pay-for-performance provisions of the proposal until October 1, 2005. 
This change should provide an opportunity to assess efforts to improve the annual assessment process and lessen any impact of changes in the permanent annual pay increase process for employees approaching retirement. It should also provide an opportunity to implement a number of measures designed to improve confidence in the annual assessment process. In summary, as GAO employees we are proud of our work assisting the Congress and federal agencies to make government operations more efficient and effective. Although all of us would agree that our agency is not perfect, the EAC believes GAO is making a concerted effort to become a more effective organization. We will continue to work closely with management to improve GAO, particularly in efforts to implement and monitor any additional authorities granted to the Comptroller General. We believe that it is vital that we help to develop and implement innovative approaches to human capital management that will enable GAO to continue to meet the needs of the Congress; further improve the work environment to maximize the potential of our highly skilled, diverse, and dedicated workforce; and serve as a model for the rest of the federal government. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
This testimony discusses how the Comptroller General formed the Employee Advisory Council (EAC) about 4 years ago to be fully representative of the GAO population and to advise him on issues pertaining to both management and employees. The members of the EAC represent a variety of employee groups and almost all employees outside of the senior executive service (more than 3,000 of GAO's 3,200 employees or 94 percent). The EAC operates as an umbrella organization that incorporates representatives of GAO's long-standing employee organizations including groups representing the disabled, Hispanics, Asian-Americans, African-Americans, gays and lesbians, veterans, and women, as well as employees in various pay bands, attorneys, and administrative and professional staff. The EAC serves as an advisory body to the Comptroller General and other senior executives by (1) seeking and conveying the views and concerns of the individual employee groups it represents while being sensitive to the mutual interests of all employees, regardless of their grade, band, or classification group, (2) proposing solutions to concerns raised by employees, as appropriate, (3) providing input by assessing and commenting on GAO policies, procedures, plans, and practices and (4) communicating issues and concerns of the Comptroller General and other senior managers to employees.
As noted earlier, before a rule can become effective, it must be filed in accordance with the statute. Prior to the March 10 hearing, GAO conducted a review to determine whether all final rules covered by CRA and published in the Federal Register were filed with the Congress and GAO. We performed this review both to verify the accuracy of our database and to ascertain the degree of agency compliance with CRA. Our review covered the 10-month period from October 1, 1996, to July 31, 1997. In November 1997, we submitted to OIRA a computer listing of the rules that we found published in the Federal Register but not filed with our Office. This initial list included 498 rules from 50 agencies. OIRA distributed this list to the affected agencies and departments and instructed them to contact GAO if they had any questions regarding the list. Beginning in mid-February, because 321 rules remained unfiled, we followed up with each agency that still had unaccounted-for rules. OIRA did not participate in the follow-up effort. Our Office received varying degrees of response from the agencies. Several agencies, notably the Environmental Protection Agency and the Department of Transportation, took immediate and extensive corrective action to submit rules that they had failed to submit and to establish fail-safe procedures for future rule promulgation. Other agencies responded by submitting some or all of the rules that they had previously failed to file. Several agencies are still working with us to assure 100 percent compliance with CRA. Some told us they were unaware of CRA or of the CRA filing requirement.
Overall, our review disclosed, as of the March 10 hearing, that: 279 rules should have been filed with us, of which 264 have subsequently been filed; 182 rules were found not to be covered by CRA as rules of particular applicability or agency management and thus were not required to be filed; 37 rules had been submitted timely and our database was corrected; and 15 rules from six agencies had not been filed. As we noted at the hearing, we believe OIRA should have played a role in ensuring that agencies were both aware of the CRA filing requirements and were complying with them. Last week, our Office concluded a second review covering the 5-month period from August 1, 1997, to December 31, 1997, which we conducted in the same manner as the prior review. The initial list, which we forwarded to OIRA on April 2 for distribution to the concerned agencies, contained 115 rules from 21 agencies. On June 2, OIRA agreed to follow up with the agencies that had not responded. As of June 11, 45 of the 115 rules had been filed; 25 were found not to be subject to CRA because they were rules of particular applicability or agency management; and 24 had been previously timely submitted and our database was corrected. Twenty-one rules from eight agencies remain unfiled. I would like to point out two areas which show improvement. First, the number of unfiled rules that should have been filed was 66 for the 5-month period. This is down markedly from the 279 for the prior 10-month review, indicating a more concerted effort on the part of the agencies to fulfill their responsibilities under CRA. Second, OIRA has become more involved and conducted the follow-up contacts with the agencies after OIRA’s distribution of the initial list. Some agencies, however, failed to delay the effective date of some major rules for 60 days as required by section 801(a)(3)(A) of the Act. At the time of my prior testimony, the effective date of eight major rules had not been delayed.
Agencies were not budgeting enough time into their regulatory timetable to allow for the delay and were misinterpreting the “good cause” exception to the 60-day delay period found in section 808(2). Section 808(2) states that, notwithstanding section 801, “any rule which an agency for good cause finds (and incorporates the finding and a brief statement of reasons therefor in the rule issued) that notice and public procedure thereon are impracticable, unnecessary, or contrary to the public interest” shall take effect at such time as the federal agency promulgating the rule determines. This language mirrors the exception in the Administrative Procedure Act (APA) to the requirement for notice and comment in rulemaking. 5 U.S.C. § 553(b)(3)(B). In our opinion, the “good cause” exception is only available if a notice of proposed rulemaking was not published and public comments were not received. Many agencies, following a notice of proposed rulemaking, have stated in the preamble to the final major rule that “good cause” existed for not providing the 60-day delay. Examples of reasons cited for the “good cause” exception include (1) that Congress was not in session and thus could not act on the rule, (2) that a delay would result in a loss of savings that the rule would produce, or (3) that there was a statutorily mandated effective date. The former administrator of OIRA disagreed with our interpretation of the statutory “good cause” exception. She believed that this interpretation would result in less public participation in rulemaking because agencies would forgo issuing a notice of proposed rulemaking and receipt of public comments in order to invoke the CRA “good cause” exception. 
OIRA contends that the proper interpretation of “good cause” should be the standard employed for invoking section 553(d)(3) of the APA, “as otherwise provided by the agency for good cause found and published with the rule,” for avoiding the 30-day delay in a rule’s effective date required under the APA. Since CRA’s section 808(2) mirrors the language in section 553(b)(3)(B), not section 553(d)(3), it is clear that the drafters intended the “good cause” exception to be invoked only when there has not been a notice of proposed rulemaking and comments received. In the last 3 months, our Office has not reviewed a major rule that did not properly comply with the 60-day delay requirement. Also, the “good cause” exception has been properly employed in those instances where no notice of proposed rulemaking was issued or comments received. Finally, agencies are alerting the public, in the final rule publication in the Federal Register, that the 60-day effective date stated in the rule may be delayed due to the need to comply with CRA. A recent Medicare rule issued by the Health Care Financing Administration (HCFA) of the Department of Health and Human Services contained such a notice, and because HCFA submitted the rule 5 days after its publication in the Federal Register, the effective date was delayed in accordance with CRA. One early question about implementation of CRA was whether executive agencies or OIRA would attempt to avoid designating rules as major and thereby avoid GAO’s review and the 60-day delay in the effective date. While we are unaware of any rule that OIRA deliberately misclassified to avoid the major rule designation, mistakes have been made in major rule classifications. Also, the failure of agencies to identify some issuances as “rules” at all has meant that some major rules have not been identified.
CRA contains a broad definition of “rule,” including more than the usual “notice and comment” rulemakings under the Administrative Procedure Act which are published in the Federal Register. “Rule” means the whole or part of an agency statement of general applicability and future effect designed to implement, interpret, or prescribe law or policy. Recently, we compared an OIRA-prepared list of important final rules that it reviewed during the first year of the CRA to the list of rules that OIRA and the agencies had identified to us as major during the same period. We found that 12 rules on our list of major rules were not on OIRA’s list. OIRA officials said that, in retrospect, they and the agencies should not have identified 7 of those 12 rules as major. The OIRA list also contained 8 rules that were not on our list of 122 major rules. Of these, OIRA officials said that all eight should have been identified and submitted to us as major rules. OIRA officials noted that all of these rules were issued in the first year of the congressional review process, and that they and the agencies were still learning how to respond to the statutory requirements. We are currently following up with OIRA and the agencies that issued these rules to determine whether they should be added to or subtracted from our list of major rules. As I noted in my prior testimony, on occasion, our Office has been asked whether certain agency action, issuance, or policy constitutes a “rule” under CRA such that it would not take effect unless submitted to our Office and the Congress in accordance with CRA. 
For example, in response to a request from the Chairman of the Subcommittee on Forests and Public Land Management, Senate Committee on Energy and Natural Resources, we concluded that a memorandum issued by the Secretary of Agriculture in connection with the Emergency Salvage Timber Sale Program constituted a “rule” under CRA and should have been submitted to the Houses of Congress and GAO before it could become effective. Likewise, we concluded that the Tongass National Forest Land and Resource Management Plan issued by the United States Forest Service was a “rule” under CRA and should have been submitted for congressional review. There are 123 forest plans covering all 155 forests in the National Forest System. Each plan must be revised and reissued every 10 years. OIRA stated that, if the plan was a rule, it would be a major rule. In testimony before the Senate Committee on Energy and Natural Resources and the House Committee on Resources regarding the Tongass Plan, the Administrator of OIRA stated that, as was the practice under the APA, each agency made its own determination of what constituted a rule under CRA and that, by implication, OIRA was not involved in these determinations. We continue to believe that for CRA to achieve what the Congress intended, OIRA must assume a more active role in guiding or overseeing these types of agency decisions. Other than an initial memorandum following the enactment of CRA, we are unaware of any further OIRA guidance. Because each agency or commission issues many manuals, documents, and directives that could be considered “rules,” and these items are not collected in a single document or repository such as the Federal Register, it is difficult to ascertain whether agencies are fully complying with CRA. We note that certain congressional committees are taking an active role in overseeing agency compliance with CRA.
For example, the Joint Committee on Taxation has corresponded with the Internal Revenue Service (IRS) as to what should be submitted. Therefore, IRS procedures, rulings, regulations, notices, and announcements are forwarded as CRA submittals. Also, in response to the request of the House Committee on Education and the Workforce, the Departments of Labor and Education deliver their CRA submissions with a monthly summary directly to the Committee, in addition to our Office and both Houses of Congress as required by CRA. As we discussed at your March hearing, we have attempted to work with executive agencies to get more substantive information about the rules and to have that information supplied in a manner that would enable quick assimilation into our database. An expanded database could be more useful not only to GAO in supporting congressional oversight work, but also directly to the Congress and to the public. In the initial development of the questionnaire, we consulted with executive branch officials to ensure that the requested information would not be unnecessarily burdensome. We circulated the questionnaire for comment to 20 agency officials with substantial involvement in the regulatory process, including officials from OIRA. The Administrator of OIRA submitted a response in her capacity as Chair of the Regulatory Working Group, consolidating comments from all the agencies represented in that group. It was the position of the group that the completion of this questionnaire for each of the 4,000 to 5,000 rules filed each year would be too burdensome for the agencies concerned. On April 22 of this year we again contacted OIRA officials with a modified version of our questionnaire, which we believed addressed the major concerns raised with the initial version. We have subsequently met with officials from OIRA and a select group of executive agency officials, at their request, to explore additional ways to capture the information.
We are currently reviewing an alternative, but we believe inadequate, version of the questionnaire proposed by those officials and will meet next week to continue negotiations on this matter. We continue to believe that it would further the purpose of CRA for a database of all rules submitted to GAO to be available for review by Members of Congress and the public and to contain as much information as possible concerning the content and issuance of the rules. We believe that further talks with the executive branch, led by OIRA, can be productive and that there may be alternative approaches that address both congressional and executive branch concerns. CRA gives the Congress an important tool to use in monitoring the regulatory process, and we believe that the effectiveness of that tool can be enhanced. Executive Order 12866 requires that OIRA, among other things, provide meaningful guidance and oversight so that each agency’s regulatory actions are consistent with applicable law. After 2 years’ experience in carrying out our responsibilities under the Act, we can suggest several areas in which OIRA should exercise more leadership within the executive branch regulatory community, consistent with the intent of the Executive Order, to enhance CRA’s effectiveness and its value to the Congress and the public. We believe that OIRA should: develop a standardized reporting format that can readily be incorporated into GAO’s database, providing the information of most use to the Congress, the public, and GAO; establish a system to monitor compliance with the filing requirement on an ongoing basis; and provide clarifying guidance as to what is a rule subject to CRA and oversee the process of identifying such rules. Thank you, Mr. Chairman. This concludes my prepared remarks. I would be happy to answer any questions you may have. The first copy of each GAO report and testimony is free. Additional copies are $2 each.
Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are accepted, also. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent. U.S. General Accounting Office P.O. Box 37050 Washington, DC 20013 Room 1100 700 4th St. NW (corner of 4th and G Sts. NW) U.S. General Accounting Office Washington, DC Orders may also be placed by calling (202) 512-6000 or by using fax number (202) 512-6061, or TDD (202) 512-2537. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.
GAO discussed its experience in fulfilling its responsibilities under the Congressional Review Act (CRA) and its efforts to coordinate implementation of the Act with the Office of Management and Budget's Office of Information and Regulatory Affairs (OIRA). GAO noted that: (1) under CRA two types of rules, major and nonmajor, must be submitted to both Houses of Congress and GAO before either can take effect; (2) CRA specifies that the determination of what rules are major is to be made by OIRA; (3) its primary role under CRA is to provide Congress with a report on each major rule concerning GAO's assessment of the promulgating federal agency's compliance with the procedural steps required by various acts and executive orders governing the regulatory process; (4) although the law is silent as to GAO's role relating to the nonmajor rules, it believes that basic information about the rules should be collected in a manner that can be of use to Congress and the public; (5) to do this, GAO has established a database that gathers basic information about the 15-20 rules GAO receives on the average each day; (6) GAO conducted a review to determine whether all final rules covered by CRA and published in the Federal Register were filed with Congress and GAO; (7) the review, covering the 10-month period from October 1, 1996, to July 31, 1997, identified 498 rules from 50 agencies that were not properly submitted for congressional review; (8) GAO submitted the list to OIRA in November 1997; (9) OIRA distributed this list to the affected agencies and instructed them to contact GAO if they had any questions; (10) beginning in mid-February, because 321 rules remained unfiled, GAO followed up with each agency that had rules unaccounted for; (11) OIRA did not participate in the follow-up effort; (12) GAO's office experienced varying degrees of responses from the agencies during the followup; (13) GAO conducted a second review covering the 5-month period from August 1, 1997, to 
December 31, 1997; (14) GAO noted two areas of improvement: (a) the number of unfiled rules that should have been filed was 66, down from 279 for the prior 10-month review, indicating a more concerted effort by agencies to fulfill their responsibilities under CRA; and (b) OIRA has become more involved and conducted the follow-up contacts with agencies after distribution of the list; (15) while GAO is unaware of any rule that OIRA deliberately misclassified to avoid the major rule designation, mistakes have been made in major rule classifications; and (16) the failure of agencies to identify some issuances as rules at all has meant that some major rules have not been identified.
Congress created FDIC in 1933 to restore and maintain public confidence in the nation’s banking system. The Financial Institutions Reform, Recovery, and Enforcement Act of 1989 sought to reform, recapitalize, and consolidate the federal deposit insurance system. It created the Bank Insurance Fund and the Savings Association Insurance Fund, which are responsible for protecting insured bank and thrift depositors, respectively, from loss due to institutional failures. The act also created the FSLIC Resolution Fund to complete the affairs of the former FSLIC and liquidate the assets and liabilities transferred from the former Resolution Trust Corporation. It also designated FDIC as the administrator of these funds. As part of this function, FDIC has an examination and supervision program to monitor the safety of deposits held in member institutions. FDIC insures deposits in excess of $3.3 trillion for about 9,400 institutions. Together the three funds have about $49.5 billion in assets. FDIC had a budget of about $1.2 billion for calendar year 2002 to support its activities in managing the three funds. For that year, it processed more than 2.6 million financial transactions. FDIC relies extensively on computerized systems to support its financial operations and store the sensitive information it collects. Its local and wide area networks interconnect these systems. To support its financial management functions, it relies on several financial systems to process and track financial transactions that include premiums paid by its member institutions and disbursements made to support operations. In addition, FDIC uses other systems that maintain personnel information for its employees, examination data for financial institutions, and legal information on closed institutions. At the time of our review, about 7,000 individuals were authorized to use FDIC’s systems. FDIC’s acting CIO is the corporation’s key official for computer security. 
The objectives of our review were to assess (1) the progress FDIC had made in correcting or mitigating weaknesses reported in our calendar year 2001 financial statement audit and (2) the effectiveness of information system general controls. These information system controls also affect the security and reliability of other sensitive data, including personnel, legal, and bank examination information maintained on the same computer systems as the corporation’s financial information. Our evaluation was based on (1) our Federal Information System Controls Audit Manual, which contains guidance for reviewing information system controls that affect the integrity, confidentiality, and availability of computerized data; and (2) our May 1998 report on security management best practices at leading organizations, which identifies key elements of an effective information security program. Specifically, we evaluated information system controls intended to protect data and software from unauthorized access; prevent the introduction of unauthorized changes to application and system software; provide segregation of duties involving application programming, system programming, computer operations, information security, and quality assurance; ensure recovery of computer processing operations in case of disaster or other unexpected interruption; and ensure an adequate information security management program. To evaluate these controls, we identified and reviewed pertinent FDIC security policies and procedures, and conducted tests and observations of controls in operation. In addition, we reviewed corrective actions taken by FDIC to address vulnerabilities identified in our calendar year 2001 audit. We performed our review at FDIC’s headquarters in Washington, D.C.; its computer facility in Arlington, Virginia; and FDIC’s Dallas regional office, from October 2002 through March 2003. Our review was performed in accordance with U.S. generally accepted government auditing standards.
FDIC has made progress in correcting previously identified computer security weaknesses. Of the 41 weaknesses identified in our calendar year 2001 audit, FDIC has corrected 19 and is taking action intended to resolve the 22 that remain. FDIC has addressed key access control, application software, system software, and service continuity weaknesses previously identified. Specifically, FDIC limited access to certain critical programs, software, and data; reduced the number of users with physical access to computer facilities; enhanced its review procedures for system software changes; strengthened its procedures for reviewing changes to application software; expanded tests of its disaster recovery plan; and defined the roles and responsibilities of its information security officers. In addition to responding to previously identified weaknesses, FDIC established several other computer controls to enhance its information security. For example, it enhanced procedures to periodically review user access privileges to computer programs and data to ensure that access is granted only to those who need it to perform their jobs. Likewise, FDIC strengthened its physical security controls by establishing criteria for granting access to computer center operations, and developed procedures for periodically reviewing access to ensure that it remained appropriate. Further, FDIC enhanced its system software change control process by developing procedures requiring technical reviews of all system software modifications prior to their implementation. In addition, it established a process to periodically review application software to ensure that only authorized computer program changes were being made. FDIC also improved its disaster recovery capabilities by establishing an alternate backup site to support its computer network and related system platforms, and by conducting periodic unannounced walk-through tests of its disaster recovery plan.
The following sections summarize the results of our review. Our “Limited Official Use Only” report details the specific weaknesses in information system controls that we identified, provides our recommendations for correcting each weakness, and indicates FDIC’s planned or completed actions for each weakness. An evaluation of the adequacy of this action plan will be part of our future work at FDIC. Although FDIC established many policies, procedures, and controls to protect its computing resources, the corporation did not always implement them effectively to ensure the confidentiality, integrity, and availability of financial and sensitive data processed by its computers and networks. In addition to the previously reported weaknesses that have not been fully addressed, 29 new information security weaknesses were identified during this review. The weaknesses identified included instances in which FDIC did not adequately restrict mainframe access, secure its network, or establish a complete program to monitor access activities. In addition, new weaknesses in other information system controls, including physical security, application software, and service continuity, further increase the risk to FDIC’s information systems. Collectively, they place the corporation’s systems at risk of unauthorized access, which could lead to unauthorized disclosure, disruption of critical operations, and loss of assets. A basic management control objective for any organization is to protect data supporting its critical operations from unauthorized access, which could lead to improper modification, disclosure, or deletion. Organizations can protect this critical information by granting employees the authority to read or modify only those programs and data that they need to perform their duties and by periodically reviewing the access granted to ensure that it remains appropriate.
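The periodic least-privilege review described above can be sketched in a few lines of Python. This is purely illustrative: the roles, permission names, and data are hypothetical examples, not a description of FDIC's actual systems or review procedures.

```python
# Minimal sketch of a periodic least-privilege access review.
# All role names, permissions, and accounts below are hypothetical.

# Permissions each role needs to perform its job (assumed for illustration).
ROLE_NEEDS = {
    "examiner": {"read_exam_data"},
    "payroll_clerk": {"read_payroll", "write_payroll"},
}

def excess_access(granted, role):
    """Return any permissions granted beyond what the role requires."""
    return sorted(set(granted) - ROLE_NEEDS.get(role, set()))

# A reviewer would run this over every account; a nonempty result flags
# access that should be revoked.
print(excess_access({"read_exam_data", "write_payroll"}, "examiner"))
# → ['write_payroll']
```

The point of the comparison against a role baseline is that it catches accumulated access (for example, from a job change) that a user-by-user glance would miss.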
Effective mainframe access controls should be designed to restrict access to computer programs and data, and prevent and detect unauthorized access. These controls include access rights and permissions, system software controls, and software library management. While FDIC restricted access to many users who previously had broad access to critical programs, software, and data, instances remained in which the access granted specific users was still not appropriate. A key weakness in FDIC’s controls was that it did not adequately limit user access, as described below. Nineteen users had access to production control software that would allow them to modify software outside the formal configuration control process. This risk was further heightened because FDIC was not maintaining audit logs of software changes. Without such logs, unauthorized software changes could be made to critical financial and sensitive systems, possibly without detection. This software was especially vulnerable because it could allow an unauthorized user to bypass security controls. Further, an excessive number of users had access to 14 of 19 production job control systems we reviewed, allowing them to obtain exact details of production programs and data, which could then be used to gather information to circumvent controls. An excessive number of users had access that allowed them to read user identifications (IDs) and passwords used to transfer data among FDIC production computer systems. With these IDs and passwords, the users could gain unauthorized access to financial and sensitive corporation information, possibly without detection. FDIC did not adequately restrict users from viewing sensitive information. For example, about 70 users had unrestricted read access to all information that the corporation printed from its mainframe computer. This included information on bank examinations, payroll and personnel data, legal reports, vendor payments, and security monitoring information. 
One reason for FDIC’s user access vulnerabilities was that the corporation, while making progress, still had not fully established a process for reviewing the appropriateness of individual access privileges. Specifically, FDIC’s process did not include a comprehensive method for identifying and reviewing all access granted to any one user. Such reviews would have allowed FDIC to identify and correct inappropriate access. In response, FDIC said that it has since taken steps to restrict access to sensitive resources. Further, the corporation stated that it has improved its audit logging of user access activities, enhanced its process for identifying and reviewing access granted, and further reduced access to the minimum necessary for users to perform their job functions. Network security controls are key to ensuring that only authorized individuals gain access to sensitive and critical agency data. Effective network security controls should be established to authenticate local and remote users. These controls include a variety of tools such as user passwords, intended to authenticate authorized users who access the network from local and remote locations. In addition, network controls provide safeguards to ensure that system software is adequately configured to prevent users from bypassing network access controls or causing network failures. Since our last audit, FDIC took major steps to secure its network through enhancements to its firewall and establishment of procedures to review contractor network connections; further, it recently implemented actions to review the effectiveness of network security controls. Nonetheless, weaknesses in the way the corporation configured its network servers, managed certain user IDs and passwords, and provided network services have not yet been corrected. One system was using a default vendor account with broad access that would allow the user to read, copy, modify, or delete sensitive network configuration files. 
Information on default vendor accounts is available in vendor-supplied manuals, which are readily available to hackers. With this ability, a malicious user or intruder could seriously disable or disrupt network operations by taking control of key segments of the network or by gaining unauthorized access to critical applications and data.

A network service was not configured to restrict access to sensitive network resources. As a result, anyone—including contractors—with access to the FDIC network could obtain copies or modify configuration files containing control information such as access control lists and user passwords. With the ability to read, copy, or modify these files, an intruder could disable or disrupt network operations by taking control of sensitive and critical network resources.

A key network server was not adequately configured to restrict access. As a result, anyone—again, including contractors—with connectivity to the FDIC network could copy or modify files containing sensitive network information. With this level of access, an unauthorized user could control key segments of the network.

Further, FDIC did not adequately secure its network against known vulnerabilities or minimize the operational impact of a potential failure in a critical network device. Failure to address known vulnerabilities increases the risk of system compromise, such as unauthorized access to and manipulation of sensitive system data, disruption of services, and denial of service. In response to our findings, FDIC’s acting CIO said that the corporation had taken steps to improve network security. Specifically, he said that FDIC had removed the vendor default account, reconfigured network resources to restrict access, and installed software patches to secure against known vulnerabilities.

A program to monitor access activities is essential to ensuring that unauthorized attempts to access critical programs and data are detected and investigated.
Such a program would include routinely reviewing user access activity and investigating failed attempts to access sensitive data and resources, as well as unusual and suspicious patterns of successful access to sensitive data and resources. To effectively monitor user access, it is critical that logs of user activity be maintained for all critical processing activities. This includes collecting and monitoring activities on all critical systems, including mainframes, network servers, and routers. A comprehensive monitoring program should include an intrusion-detection system to automatically log unusual activity, provide necessary alerts, and terminate access.

While FDIC has made progress in developing systems to identify unauthorized or suspicious access activities for both its mainframe and network systems, it still has not completed a program to fully monitor such activities. As a result, reports designed to provide security staff with information on network access activities, including information on unusual or suspicious access, were not available due to technical problems in producing them. Consequently, security staff and administrators did not have the information they needed to effectively monitor the network for unauthorized or inappropriate access.

Further, FDIC was not monitoring the access of certain employees and contractors who could modify specific sensitive system software libraries that can perform functions that circumvent all security controls. Although FDIC granted these users such access privileges, it did not maintain audit logs to ensure that only authorized modifications were made to these libraries. As a result, these users could make unauthorized modifications to financial data, programs, or system files, possibly without detection. According to the acting CIO, the corporation has taken action to improve its program to monitor access activities.
This includes developing and implementing new reports for monitoring network access and initiating action to fully implement its intrusion-detection system.

In addition to information system access controls, other important controls necessary to ensure the confidentiality, integrity, and availability of an organization’s system and data were ineffective at FDIC. These controls include policies, procedures, and techniques that physically secure data-processing facilities and resources, prevent unauthorized changes to application software, and effectively ensure the continuation of computer processing service if an unexpected interruption occurs. Although FDIC has implemented numerous information system controls, remaining weaknesses in these areas increase the risk of unauthorized disclosure, disruption of critical operations, and loss of assets.

Physical security controls should be designed to prevent vandalism and sabotage, theft, accidental or deliberate alteration or destruction of information or property, and unauthorized access to computing resources. These controls involve restricting physical access to computer resources, usually by limiting access to the buildings and rooms in which these resources are housed, and periodically reviewing access granted to ensure that it continues to be appropriate based on criteria established for granting such access.

FDIC has taken several actions to strengthen its physical security, including reducing the number of staff who have access to those areas where computer resources are housed. However, while it has established policies for granting access to its computer facilities and procedures for periodically reviewing the continued need for such access, it has not yet developed a process to ensure compliance with these policies and procedures.
For example, while FDIC’s policy provides that contractor access may only be granted for up to 6 months, 24 of 126 contractors had access to FDIC’s computer center for periods exceeding 6 months, some for several years. Without a process to ensure compliance with established policies and procedures, FDIC cannot ensure that physical access to critical computer resources is adequately controlled. In response to our finding, the acting CIO has since established additional controls to ensure compliance with FDIC’s physical access policies relating to the length of time access may be granted and the maintenance of authorized access request forms. Further, FDIC recently filled a position whose duties specifically include providing daily compliance monitoring and oversight to ensure that physical access policies and procedures are properly followed.

Standard application software change control practices prescribe that only authorized, fully tested, and reviewed changes should be placed in operation. Further, these practices provide a process for reviewing all software modifications made. This should include reviews of changes made to software used to link applications to computer data and programs needed to support their operations. While FDIC has implemented a procedure to review application software changes for evidence of unauthorized code, fraud, or other inappropriate actions, the procedure does not include a review of other types of changes, such as those made to software used to facilitate access to software files and data. As a result, unauthorized changes could be made that alter computer program logic. In response, FDIC has expanded its application software change process to include reviews of other software modifications, including those that facilitate access to files and data.
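The 6-month limit on contractor access described above lends itself to an automated compliance check. The sketch below is a minimal illustration, with hypothetical contractor names and grant dates, and 183 days as a rough stand-in for 6 months:

```python
from datetime import date

MAX_DAYS = 183  # rough approximation of the 6-month policy limit

access_grants = {  # hypothetical grant dates, not actual FDIC records
    "contractor_a": date(2002, 1, 15),
    "contractor_b": date(2002, 11, 1),
}

def overdue(grants, today):
    """Names whose access has been active longer than the policy allows."""
    return sorted(
        name for name, granted in grants.items()
        if (today - granted).days > MAX_DAYS
    )

stale = overdue(access_grants, today=date(2002, 12, 31))
```

Running such a check on each periodic review cycle would flag the long-standing grants the audit found, rather than relying on manual inspection of access request forms.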
Service continuity controls should be designed to ensure that when unexpected events occur, critical operations continue without interruption or are promptly resumed, and critical and sensitive data are protected. An essential element is up-to-date, detailed, and fully tested service and business continuity plans. To be effective, these plans should be understood by all key staff and should include surprise testing.

FDIC has acted to enhance its service continuity program. For example, it (1) updated and conducted tests of its service continuity plan, (2) completed business continuity plans for all its facilities and conducted tests of these plans, and (3) established an alternate backup site to support its network and other computing resources. However, FDIC has not yet performed unannounced testing of its business continuity plan. Such tests are more realistic than announced tests and more accurately measure the readiness of staff for emergency situations. Further, FDIC had not ensured that the emergency personnel lists included in its business continuity plan are current. We identified 66 FDIC employees whose names were in the emergency personnel list but who had separated from FDIC, including 13 staff listed as key emergency team members. Without current emergency personnel lists, FDIC risks not being able to restore its critical business operations in a timely manner. FDIC has since established new procedures to ensure that emergency personnel lists remain current. FDIC officials said that they would incorporate unannounced testing of the business continuity plan into the 2003 operating plan, and would conduct these unannounced tests by December 31 of this year.

The primary reason for FDIC’s continuing weaknesses in information system controls is that it has not yet fully developed and implemented a comprehensive corporate program to manage computer security.
As described in our May 1998 study of security management best practices, a comprehensive computer security management program requires the following five elements, all essential to ensuring that information system controls work effectively on a continuing basis: a central security management structure with clearly defined roles; appropriate policies, procedures, and technical standards; security awareness; periodic risk assessment; and an ongoing program of testing and evaluation of the effectiveness of policies and controls. We previously recommended to FDIC that it fully develop and implement a comprehensive security management program that includes each of these elements.

FDIC has made progress in implementing a security management program. Specifically, it (1) established a central security management structure; (2) implemented security policies, procedures, and technical standards; and (3) enhanced security awareness training. However, the steps taken to address periodic risk assessment and ongoing testing and evaluation of policies and controls have not yet been sufficient to ensure continuing success.

Central security management structure. FDIC has established a central security function and has appointed information security managers for each of its divisions, with defined roles and responsibilities. Further, it has provided guidance to ensure that security managers coordinate with the central security function on security-related issues. It has also developed the support of divisional senior management for the central security function.

Appropriate policies, procedures, and technical standards. FDIC has updated its security policies and procedures to cover all aspects of the organization’s interconnected environment and all computing platforms. It has also established technical security standards for its mainframe and network systems and security software.

Security awareness.
Computer attacks and security breakdowns often occur because computer users fail to take appropriate security measures. FDIC has enhanced its security awareness program, which all employees and contractors are required to complete annually. It has also developed specialized security awareness training to address the specific needs of its security managers.

Periodic risk assessment. Regular assessments assist management in making decisions on necessary controls by helping to ensure that security resources are effectively distributed to minimize potential loss. And by increasing awareness of risks, these assessments generate support for the adopted policies and controls, which helps ensure that the policies and controls operate as intended. Further, Office of Management and Budget Circular A-130, appendix III, prescribes that risk be assessed when significant changes are made to the system, but at least every 3 years. FDIC has not fully developed a framework for assessing and managing risk on a continuing basis. While it has taken some action, including developing a framework for assessing risk when significant changes are made to computer systems and providing tools for its security managers to use in conducting risk assessments, it has not developed a process for conducting these assessments. Our study of risk assessment best practices found that a process for performing such assessments should specify (1) how the assessments should be initiated and conducted, (2) who should participate, (3) how disagreements should be resolved, (4) what approvals are needed, and (5) how these assessments should be documented and maintained. In response, FDIC’s acting CIO said that the corporation is taking steps to develop risk assessment guidance.

Testing and evaluation.
A program that assesses the effectiveness of policies and controls includes processes for monitoring compliance with established information system control policies and procedures and testing the effectiveness of those controls. During the past year, FDIC has taken steps to establish such a program of testing and evaluation. Specifically, it has established a self-assessment program to evaluate information system controls and has implemented a program to monitor compliance with established policies and procedures that includes performing periodic reviews of system settings and tests of user passwords. Nonetheless, FDIC’s program does not cover all critical evaluation areas. Missing is an ongoing program that targets the key control areas of physical and logical access, segregation of duties, system and application software, and service continuity. In response, FDIC’s acting CIO said that the corporation is taking steps to establish an oversight program to cover its control environment that will include steps to assess areas such as access controls, segregation of duties, system and application software, and service continuity. Further, FDIC plans to address each of these areas as part of its evolving self-assessment process. Until a comprehensive program to monitor and test each of these control areas is in place, FDIC will not have the oversight needed to ensure that many of the same type of information system control weaknesses previously identified are not repeated. An effective ongoing comprehensive program to monitor compliance with established procedures can be used to identify and correct information security weaknesses, such as those discussed in this report. For example, a comprehensive process to review all access authority granted to each user to ensure that access was limited to that needed to complete job responsibilities could identify inappropriate access authority granted to users. 
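One such periodic test, checking accounts for passwords that match well-known defaults, can be sketched as follows. The account names, password values, and default list are hypothetical, and a real review would work against the system's hashed credential store rather than inventing plaintext values:

```python
import hashlib

# Illustrative periodic password test: flag accounts whose password hash
# matches the hash of a commonly known default value. All names and
# values here are hypothetical examples.
COMMON_DEFAULTS = ["password", "changeme", "admin"]

def weak_accounts(account_hashes):
    """Account names whose stored hash matches a known default password."""
    default_hashes = {
        hashlib.sha256(p.encode()).hexdigest() for p in COMMON_DEFAULTS
    }
    return sorted(
        acct for acct, h in account_hashes.items() if h in default_hashes
    )

accounts = {
    "svc_backup": hashlib.sha256(b"changeme").hexdigest(),
    "jsmith": hashlib.sha256(b"x9!Tr4vel").hexdigest(),
}
weak = weak_accounts(accounts)
```

Scheduled as part of an ongoing testing program, a check of this kind turns a one-time audit finding into a control that runs continuously.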
A comprehensive program to regularly test information system controls can be used to detect network security weaknesses. For example, our technical reviews of network servers identified default system passwords in use that are readily known to hackers and could be used by them to gain the access needed to exploit the network and launch an attack on FDIC systems. Appropriate technical reviews of the network servers and routers can identify these types of exposures.

FDIC has made progress in correcting information system control weaknesses and implementing controls, including limiting and reducing access, altering software change procedures, expanding testing of disaster recovery plans, and defining the roles and responsibilities of information security officers. Nonetheless, continuing and newly identified security weaknesses exist. FDIC has not adequately restricted mainframe access, sufficiently secured its network, or completed a program for fully monitoring access activity. Weaknesses in physical security, application software, and service continuity increase the level of risk. The combined effect of these prior and current year weaknesses further increases the risk of unauthorized disclosure of critical financial and sensitive personnel and bank examination information, disruption of critical financial operations, and loss of assets. Implementation of FDIC’s plan to correct these weaknesses is essential to establish an effective information system control environment.

The primary reason for FDIC’s continuing weaknesses in information system controls is that it has not yet been able to fully develop and implement a comprehensive program to manage computer security.
While it has made progress in the past year in establishing key elements of this program—including a security management structure, security policies and procedures, and security awareness promotion—its systems will remain at heightened risk until FDIC establishes a process for assessing and managing risks on a continuing basis and fully implements a comprehensive, ongoing program of testing and evaluation to ensure policies and controls are appropriate and effective. Until FDIC takes steps to correct or mitigate its information system control weaknesses and fully implements a computer security management program, it will have limited assurance that its financial and sensitive information are adequately protected from inadvertent or deliberate misuse, fraudulent use, improper disclosure, or destruction.

To establish an effective information system control environment, in addition to completing actions to resolve prior year weaknesses that remain open, we recommend that the Chairman instruct the acting CIO, as the corporation’s key official for computer security, to ensure that the following actions are completed. First, correct the 29 information system control weaknesses related to mainframe access, network security, access monitoring, physical access, application software, and service continuity identified in our current (calendar year 2002) audit. (We are also issuing a report designated for “Limited Official Use Only,” which describes in more detail the computer security weaknesses identified and offers specific recommendations for correcting them.) Second, fully develop and implement a computer security management program, which would include (1) developing and implementing a process for performing risk assessments and (2) establishing an effective ongoing program of tests and evaluations to ensure that policies and controls are appropriate and effective.
In providing written comments on a draft of this report, FDIC’s Chief Financial Officer (CFO) agreed with our recommendations. His comments are reprinted in appendix I of this report. Specifically, FDIC plans to correct the information systems control weaknesses identified and fully develop and implement a computer security management program by December 31, 2003. According to the CFO, significant progress has already been made in addressing the identified weaknesses.

We are sending copies of this report to the Chairman and Ranking Minority Member of the Senate Committee on Banking, Housing, and Urban Affairs; the Chairman and Ranking Minority Member of the House Committee on Financial Services; members of the FDIC Audit Committee; officials in FDIC’s divisions of information resources management, administration, and finance; and the FDIC inspector general. We will also make copies available to other parties upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov.

If you have any questions regarding this report, please contact me at (202) 512-3317 or David W. Irvin, Assistant Director, at (214) 777-5716. We can also be reached by e-mail at [email protected] and [email protected], respectively. Key contributors to this report are listed in appendix II.

In addition to the person named above, Edward Alexander, Gerald Barnes, Angela Bell, Nicole Carpenter, Lon Chin, Debra Conner, Anh Dang, Kristi Dorsey, Denise Fitzpatrick, David Hayes, Jeffrey Knott, Harold Lewis, Duc Ngo, Eugene Stevens, Rosanna Villa, Charles Vrabel, and Chris Warweg made key contributions to this report.

The General Accounting Office, the audit, evaluation and investigative arm of Congress, exists to support Congress in meeting its constitutional responsibilities and to help improve the performance and accountability of the federal government for the American people.
GAO examines the use of public funds; evaluates federal programs and policies; and provides analyses, recommendations, and other assistance to help Congress make informed oversight, policy, and funding decisions. GAO’s commitment to good government is reflected in its core values of accountability, integrity, and reliability.
Effective controls over information systems are essential to ensuring the protection of financial and personnel information and the security and reliability of bank examination data maintained by the Federal Deposit Insurance Corporation (FDIC). As part of GAO's 2002 financial statement audits of the three FDIC funds, we assessed (1) the corporation's progress in addressing computer security weaknesses found in GAO's 2001 audit, and (2) the effectiveness of FDIC's controls. FDIC has made progress in correcting information system controls since GAO's 2001 review. Of the 41 weaknesses identified that year, FDIC has corrected or has specific action plans to correct all of them. GAO's 2002 audit nonetheless identified 29 new computer security weaknesses. These weaknesses reduce the effectiveness of FDIC's controls to safeguard critical financial and other sensitive information. Based on our review, mainframe access was not sufficiently restricted, network security was inadequate, and a program to fully monitor access activities was not implemented. Additionally, weaknesses in areas including physical security, application software, and service continuity further increased the risk to FDIC's computing environment. The primary reason for these continuing weaknesses is that FDIC has not yet completed development and implementation of a comprehensive program to manage computer security across the organization. FDIC has, among other things, established a security management structure, but still has not fully implemented a process for assessing and managing risk on a continuing basis or an ongoing program of testing and evaluating controls. The corporation's acting chief information officer has agreed to complete actions intended to address GAO's outstanding recommendations by December 31 of this year.
The state and local government sector consists of 50 state governments and 87,525 local governments. These local governments include 3,034 county governments, 19,429 municipal governments, 16,504 townships, 13,506 school districts, and 35,052 special districts. State and local governments provide vital services to citizens such as law enforcement, public education, and sewage treatment. Local governments derive their authority from the states, and the powers and responsibilities granted to local governments vary considerably. For example, while states generally provide authority to local governments to tax real property, local governments vary in their authority to levy other types of taxes, such as personal income or sales taxes.

State and local governments collect receipts and receive federal funds to provide services to their constituents. In 2006, state and local governments received $1.9 trillion in total receipts. Taxes, such as property taxes, sales and excise taxes, personal income tax, and corporate income taxes, make up a large component of these receipts—fully $1.2 trillion. In addition, the federal government provided over $400 billion to state and local governments in the form of various grants (including Medicaid), loans, and loan guarantees. These federal funds accounted for approximately 22 percent of state and local government total receipts. State and local governments also obtain revenues from several other sources, such as income receipts on financial assets; certain receipts from businesses and individuals (such as vehicle and licensing fees); and, in some years, from surpluses on government-run enterprises that provide services such as energy, liquor, lotteries, and public transit.

State and local governments fund a broad range of services such as public safety, housing, education, and public transportation programs. In 2006, state and local governments spent $691 billion on education—the largest expenditure category for the sector.
These governments also spent $263 billion on projects such as highways, public transit, agriculture, and natural resources, and $242 billion on public safety services such as police and fire departments as well as prisons. State and local governments also provide a broad range of other services, such as income security for the poor and disabled, health-related services, housing and community development, recreation services such as parks, and utilities such as water, sewage, and energy.

Budget processes vary considerably across the 50 states. According to the National Association of State Budget Officers (NASBO), about half of states enact budgets annually, while most others enact biennial budgets, and a few undertake a mix of annual and biennial budgeting. Most states budget separately for current operating costs and capital expenditures. The capital budget is used for states’ capital projects, and states frequently issue debt to help fund these investments. Most states have some form of balanced budget requirement for general funds—the fund that covers current operating costs—but the nature of these balanced budget requirements varies considerably. For example, some states require governors to submit a balanced budget, while others mandate that legislatures pass a balanced budget. Some direct governors to sign a balanced budget, and some require governors to execute a balanced budget. Many of the balanced budget provisions allow states to run small, short-term deficits.

Our base case model rests on certain key assumptions. In particular, in the base case we assume that current federal, state, and local policies remain constant. On the receipt side, this translates into an assumption that the current tax structures of state and local governments are maintained in future years and that tax receipt growth reflects past experience, except that we remove the effect of past policy changes and the effects of unusual capital gains.
On the expenditure side, we make assumptions that would generally be consistent with the maintenance of current policies in the provision of services to citizens. Since compensation of state and local employees is a large cost component of providing services to citizens, our assumptions about the growth in the number of state and local employees over time, as well as the growth in their wages, are significant components through which we implement the assumptions of the maintenance of current policy. Since 1980, the level of employment in the state and local sector has grown significantly faster than the U.S. population, but for our simulation we maintain state and local government employment as a steady share of the population over time. This would be consistent with the maintenance of current policy if there were no productivity gains in the sector, or with a modest increase in real services to the extent that state and local workers do experience productivity gains. Also, we assume that employees of state and local governments receive pay increases each year equal to those of private sector workers—an assumption that is generally consistent with historical experience. Finally, we assume that the total cost of many goods procured by the sector to provide services will rise with increases in the population being served and the rate of inflation in the economy. Table 1 summarizes the assumptions of the base case model.

We calculated two measures of fiscal balance for the state and local government sector for each year until 2050. The measures are: Net lending or borrowing—the balance of all receipts and expenditures during a given time frame. This indicates the need for the sector to borrow funds or draw down assets to cover its expenditures. This measure is roughly analogous to the federal unified surplus or deficit.
Operating balance net of funds for capital expenditures—or simply the “operating balance”—a measure of the ability of the sector to cover its current expenditures out of current receipts. In developing this measure we subtract funds used to finance longer-term projects—such as investments in buildings and roads—from receipts since these funds would not be available to cover current expenses. (See app. I for more detail on the measurement of this balance).

Figure 1 shows values of the two balance measures—net lending or borrowing and operating balance—as a percentage of GDP under our base-case assumptions. Historical data from 1980 to 2006 are shown along with our model simulations beginning in 2007 and running through 2050. The figure shows that the two measures generally track one another. It also shows that, historically, net lending or borrowing has typically been negative, but rarely by more than 1 percent of GDP. This indicates that the sector generally issues debt—primarily to fund capital expenditures—but has done so at a reasonably stable pace. Additionally, the operating balance measure has historically been positive most of the time, ranging from about zero to about 1 percent of GDP. Thus, the sector usually has been able to cover its current expenses with incoming receipts. But the simulation suggests that while projected balances for both net lending or borrowing and the operating balance remain in their historical ranges for the next several years, the balances will soon begin to decline and will fall below their historical ranges within a decade. That is, the model suggests that the state and local government sector will face increasing fiscal stress in just a few years. Our simulations also indicate that by the mid-2020s the balance measures will both be well below their historical ranges, and will continue to fall throughout the remainder of the simulation time frame.
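The two balance measures can be illustrated with a small worked example. Every dollar figure below is a hypothetical amount in billions, not actual sector data:

```python
# Hypothetical figures, in billions of dollars.
receipts = 1900.0
current_expenditures = 1830.0
capital_expenditures = 120.0
debt_issued_for_capital = 90.0
gdp = 13000.0

# Net lending (+) or borrowing (-): all receipts less all expenditures.
net_lending = receipts - (current_expenditures + capital_expenditures)

# Operating balance: receipts net of the sector's own funds devoted to
# capital projects (the debt-financed portion is excluded), less
# current expenditures.
own_funds_for_capital = capital_expenditures - debt_issued_for_capital
operating_balance = (receipts - own_funds_for_capital) - current_expenditures

net_lending_share = 100 * net_lending / gdp      # about -0.4 percent of GDP
operating_share = 100 * operating_balance / gdp  # about +0.3 percent of GDP
```

With these illustrative numbers, both results land inside the historical ranges noted above: borrowing well under 1 percent of GDP, alongside a small positive operating balance.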
These projected deficits—worsening throughout the projection time frame under an unchanged policy scenario—indicate that, because most state and local governments cannot actually run such deficits for any length of time, these governments will need to make tough choices on spending and tax policy to meet their budget requirements and to promote favorable bond ratings.

Another way of measuring the long-term challenges faced by the state and local sector is through a measure known as the “fiscal gap.” With deficits rising rapidly as shown in figure 1, the outstanding debt of the state and local sector will experience unprecedented growth. The fiscal gap is an estimate of the action needed today and maintained for each and every year to achieve fiscal balance over a certain period. We measured the gap as the amount of spending reduction or tax increase needed to maintain debt as a share of GDP at or below today’s ratio. For the state and local sector, we calculated that to close the fiscal gap would require action today equal to a 15.2 percent tax increase or a 12.9 percent reduction in spending financed by non-grant revenues. The fiscal gap can also be expressed as a share of the economy or in present value dollars. We calculated that in 2007 dollars the fiscal gap amounts to $10.6 trillion, which represents 1.4 percent of the discounted value of GDP over the same time frame.

Based on our review of the evidence, we estimate that expenditure growth for health care will be significant and these expenditures will constitute a rapidly growing burden for state and local governments. Two types of state and local expenditures in particular will likely rise quickly due to escalating medical costs. First, under CBO’s intermediate projections, federal Medicaid grants to states per recipient will rise by 1 percent more than GDP per capita in the coming years.
Since Medicaid is a federal and state program with federal Medicaid grants based on a matching formula, these estimates indicate that expenditures for Medicaid by state governments will rise quickly as well. Second, we estimated future expenditures for health insurance for state and local employees and retirees. Specifically, we assume that the excess cost factor—the growth in these health care costs per capita above GDP per capita—will average 1.4 percentage points per year through 2035 and then begin to decline, reaching 0.6 percentage points by 2050. This results in a rapidly growing burden from these health-related activities in state and local budgets. In contrast, our implementation of assumptions about current policies indicated that, in aggregate, other expenditure categories grow more slowly than GDP in our base-case simulations. For example, even though the wages and salaries of primary and secondary education employees are a large expenditure of the state and local sector, under our base-case assumptions these costs are not expected to grow any faster than the rate of growth of the general economy and will not represent an increasing burden on governments relative to their revenues. These base-case assumptions differ from historical experience, in which real spending on primary and secondary education per pupil has risen in the past few decades. If such a trend were to continue, spending on education could place a growing burden on state and local governments in future years. Figure 2 shows the projected expenditures of the sector as a percentage of GDP. On the receipt side, state and local governments impose a variety of taxes. Our model projections suggest that most of these tax receipts will show modest growth in the future—and some are projected to experience a modest decline—relative to GDP. Figure 3 shows the expected path of several tax revenue sources. We found that state personal income taxes will show a small rise relative to GDP in coming years.
This likely reflects that some state governments have a small degree of progressivity in their income tax structures. Sales taxes of the sector are expected to experience a slight decline as a percentage of GDP in the coming years. Property taxes—which are mostly levied by local governments—should rise slightly as a share of GDP in the future. These differential tax growth projections indicate that any given jurisdiction’s tax revenue prospects may be uniquely tied to the composition of taxes it imposes. The only source of revenue we expect to grow rapidly is federal grants to state governments for Medicaid. However, since Medicaid is a matching formula grant program, the projected escalation in federal Medicaid grants simply reflects expected increased Medicaid expenditures that will be shared by state governments. That is, we assume that current policy remains in place and the shares of Medicaid expenditures borne by the federal government and the states remain unchanged. Federal grants unrelated to Medicaid are projected, based on CBO analysis, to decline somewhat relative to GDP in the coming years. We developed several scenarios with alternative assumptions to better understand the sensitivity of our results. For these analyses, we focused on the operating balance measure because this is a proxy for the operating budgets that most state and local governments have requirements to generally keep in balance. The assumptions varied in these alternative scenarios include (1) the growth of tax receipts, (2) the growth in state and local expenditures, and (3) the rate of growth in health care costs. In the base-case model, we assume that current policy, such as tax rates and structures, will remain unchanged. We also modeled alternative scenarios with different assumptions about the growth rate of tax receipts. In the first alternative, we use the historical growth of tax revenues for the sector since 1980. 
The second alternative is a “maintain balance” scenario in which we assume that taxes are raised to whatever level would be required to maintain a nonnegative operating balance in every year of the simulation. Figure 4 shows the tax growth path for the base case and two alternative scenarios. Under the base case, we found that aggregate tax revenues for the entire state and local sector will likely remain about a constant percentage of GDP. In the historical growth scenario, receipts would rise somewhat in the future relative to GDP. For the “maintain balance” scenario, tax receipts need to rise considerably faster than under either of the other cases to fulfill the requirements of the scenario. In fact, by 2050, state and local taxes as a percentage of GDP would have to rise by about 17 percent above the base case to avoid fiscal deficits. In other words, it would take a substantial increase in taxes—a considerably faster increase than that experienced historically—to maintain a nonnegative operating balance solely through increased taxes. Our base-case model assumes that current policies are maintained, primarily by holding the number of employees in the sector constant as a percentage of population, assuming state and local workers receive pay increases equal to those of private-sector employees, and assuming the total cost of many goods procured by the sector to provide services rises with increases in the population being served and the rate of inflation in the economy. We also developed an alternative scenario that calculates how much the sector would have to limit expenditures in the aggregate in order to avoid a negative operating balance. Figure 5 shows these two expenditure paths. Figure 5 shows that under the base case, expenditures rise considerably over the simulation time frame. In contrast, maintaining balance solely through spending restraint would require holding expenditure growth to a much lower rate than the base case. 
Since a large percentage of expenditures of the sector are related to compensation of employees, this would likely mean that the workforce would not be able to grow as fast as we allow it to under the base case. That is, the ratio of employees to the population would need to decline. These state and local governments would also likely need to reduce their purchases of other goods and services procured to provide government services relative to what would have occurred under the base case. Since the base case was designed to reflect current policies, the results of the maintain balance scenario imply that there would need to be substantial cuts in expenditures and therefore services to citizens, relative to the base case. For the base-case model, we assumed that Medicaid expenditures grow according to CBO’s intermediate projections—1 percentage point more than the growth in GDP per capita—and that employee and retiree health insurance expenditures grow over the next 30 years by an average of 1.4 percentage points more than GDP per capita before slowing to 0.6 percentage points by 2050. Given the importance of health care expenditures as a driver for the long-term fiscal outlook, we also model the impact of different health care expenditure growth assumptions. For a more optimistic scenario, we lowered Medicaid expenditure growth to CBO’s lower spending path assumption. Under this path, Medicaid expenditure growth would equal the growth in GDP per capita. We also assumed no “excess cost growth” for the rate of increase in expenditures on employee and retiree health insurance, meaning that we hold the growth in these expenditures to the rate of growth in GDP per capita. For a more pessimistic scenario, we used CBO’s high spending path assumption for Medicaid, under which costs per capita rise 2.5 percentage points faster than GDP per capita, and we doubled the per capita rate of growth above GDP for health insurance expenditures.
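The three health cost-growth assumptions for employee and retiree insurance can be sketched as a projection of a per capita cost index. The excess cost factors (1.4 percentage points through 2035, declining to 0.6 by 2050; zero in the optimistic case; doubled in the pessimistic case) are from the text, while the 4 percent nominal GDP-per-capita growth rate is a placeholder assumption.

```python
# Illustrative projection of health insurance costs per capita under the
# three excess-cost-growth scenarios. Only the excess cost factors come
# from the report; the GDP-per-capita growth rate is a placeholder.

def excess_cost_base(year):
    """Base case: 1.4 percentage points of excess growth through 2035,
    declining linearly to 0.6 percentage points by 2050."""
    if year <= 2035:
        return 0.014
    return 0.014 - (0.014 - 0.006) * (year - 2035) / (2050 - 2035)

def project_costs(excess, start=2007, end=2050, gdp_pc_growth=0.04):
    """Cost index (start year = 1.0) when per capita costs grow at GDP
    per capita growth plus the scenario's excess cost factor."""
    index = 1.0
    for year in range(start + 1, end + 1):
        index *= 1 + gdp_pc_growth + excess(year)
    return index

base = project_costs(excess_cost_base)
optimistic = project_costs(lambda y: 0.0)                     # no excess growth
pessimistic = project_costs(lambda y: 2 * excess_cost_base(y))  # doubled
```

Compounding even a 1.4 percentage point excess over four decades produces a substantially higher cost index than the no-excess path, which is the mechanism behind the diverging operating balances in figure 6.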
Figure 6 shows the projected operating balance under all three health care cost-growth scenarios. The differences among the outcomes of these scenarios highlight the importance of health care to the long-term fiscal balance of the sector. With more rapidly rising health care expenditures, the operating balance falls off considerably more quickly than in the base case. Conversely, holding the growth in health care costs per capita to the overall per capita economic growth enables the sector to avert deficits during the projection time frame. Neither historical experience nor expert opinion, however, suggests that the cost growth of health care will likely be held to the level embodied in this optimistic scenario in the near future. Since 1992, we have produced long-term simulations of what might happen to federal deficits and debt under various policy scenarios. Our most recent long-term federal simulations show ever-larger deficits resulting in a very large and growing federal debt burden over time. Just as in the state and local government sector, the federal fiscal difficulties stem primarily from an expected explosion of health-related expenditures. As we have noted elsewhere, the expected continued rise in health care costs poses a fiscal challenge not just to government budgets, but to American business and society as a whole. The fundamental fiscal problems of the federal government and these subnational governments are similar and are linked. Figure 7 shows two simulations for the federal fiscal path under alternative assumptions, and overlays the simulated fiscal imbalance of the state and local government sector. For the federal fiscal simulation denoted “baseline extended,” we follow CBO baseline projections for the next 10 years: tax provisions that are scheduled to expire are assumed to do so and discretionary spending is assumed to grow with inflation. 
After the first 10 years, we use the Social Security and Medicare Trustees’ 75-year intermediate (“best”) estimates for those programs and CBO’s midrange Medicaid estimates. All other expenditures and receipts are held constant as a share of GDP after the first 10 years. Under the alternative federal simulation, we assume that during the first 10 years of the simulation, expiring tax provisions are extended and that discretionary spending grows with GDP—a faster pace than inflation. After the 10-year time frame, we assume that action is taken to return and keep revenue at its historical share of GDP plus an additional amount attributable to deferred taxes (i.e., taxes on withdrawals from retirement accounts). This alternative also incorporates somewhat higher Medicare estimates reflecting a more realistic scenario for physician payments. The overlay of the base case state and local simulation shows that the state and local fiscal situation imposes further burden on the nation’s economy in the next several decades. We did our work from September 2007 through December 2007 in accordance with generally accepted government auditing standards. We provided a draft of this report to the Bureau of Economic Analysis of the Department of Commerce for technical review. This report was prepared under the direction of Stanley J. Czerwinski, Director, Strategic Issues, who can be reached at (202) 512-6806 or [email protected], and Thomas J. McCool, Director, Center for Economics, who can be reached at (202) 512-2642 or [email protected] if there are any questions. Amy Abramowitz, Carol Henn, Richard Krashevski, James McTigue, Michelle Sager, Michael Springer, Jeremy Schwartz, and Melissa Wolf made key contributions to this publication. This appendix describes our simulations of state and local fiscal conditions. As an organizing framework and basic data source, the state and local government model relies on the National Income and Product Accounts (NIPA), prepared by the U.S. 
Department of Commerce. Table 3.3, State and Local Government Current Receipts and Expenditures, of the NIPA provides data on receipts and expenditures of all state and local governments in aggregate. We also use tables underlying table 3.3 to obtain more detailed information for some of the expenditure classifications. We project the growth in each category of receipts and expenditures using the Congressional Budget Office’s (CBO) economic assumptions whenever possible. In several cases we were not able to obtain existing projections and needed to develop our own assumptions about the likely future growth path of certain receipts or expenditures. We also developed detailed models to project items such as necessary pension fund contributions, the costs of health insurance for employees and retirees, and several tax receipt categories. Our base-case model assumes current policies remain in place. Throughout this appendix we describe how that basic assumption is realized. Once all receipts and expenditures of the sector are simulated forward through 2050, we develop summary indicators of the state and local government sector’s fiscal status. Because the model covers the state and local government sector in the aggregate, the fiscal outcome of individual states and localities cannot be captured. Also, the model does not identify whether it is the state or the local government sector that faces greater fiscal challenges. The remainder of this appendix describes (1) how each of the receipt categories is projected; (2) how each of the expenditure categories is projected (with the exception of required pension contributions and the costs of health care, which are discussed more fully in app. III); and (3) how we develop measures of fiscal balance. The model provides projections for each type of receipt of state and local governments. 
The Bureau of Economic Analysis of the Department of Commerce assembles the NIPA based on data from the quinquennial Census of Governments, annual surveys of Government Finances, and other sources. In the NIPA, receipts are divided into five major categories: tax receipts, contributions for government social insurance, income receipts on assets, transfer receipts, and the current surplus of government enterprises. Figure 8 shows these categories as well as the breakdown of receipts within each of these classifications. As noted above, our base-case simulation is based on current policy and does not project any possible policy changes that would affect receipts during the simulation period. In the case of taxes, this means that we simulate the receipts that would be collected if tax rates and structures were to remain unchanged. Accordingly, several tax receipt categories grow at the same rate as their underlying tax bases. For several tax categories, however, it is more appropriate to project tax receipts themselves instead of their tax bases. Our tax receipt projections are based on a set of economic assumptions, many of which come from CBO. However, most of CBO’s projections extend only 10 years into the future. In order to project beyond 10 years, therefore, we used GDP values from GAO’s long-term federal budget simulations in conjunction with extrapolations of CBO’s economic assumptions. Some states have progressive rate structures, which result in receipts growing faster than incomes. To reflect this progressivity, we project state income tax receipts themselves rather than assuming that receipts grow at the same rate as the tax base. Specifically, we simulate future state personal income tax receipts by estimating the long-run responsiveness, or elasticity, of receipts to taxable personal income.
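A long-run elasticity of this kind is, in essence, the slope of a regression of log receipts on log income. The sketch below uses synthetic data generated with a true elasticity of 1.1, so the fitted slope recovers roughly that value; the report's estimate comes from adjusted historical state income tax data instead.

```python
import math
import random

# Synthetic annual data: real taxable income and real receipts generated
# with a true elasticity of 1.1 plus small noise. Income levels and the
# noise scale are arbitrary illustrations.
random.seed(0)
years = 27
income = [5_000 + 270 * t for t in range(years)]
log_x = [math.log(v) for v in income]
log_y = [1.1 * x - 4.0 + random.gauss(0, 0.01) for x in log_x]

# The OLS slope of log(receipts) on log(income) is the elasticity estimate.
mean_x = sum(log_x) / years
mean_y = sum(log_y) / years
elasticity = (sum((x - mean_x) * (y - mean_y) for x, y in zip(log_x, log_y))
              / sum((x - mean_x) ** 2 for x in log_x))
```

An estimated slope above 1 means receipts grow proportionally faster than income, which is how the model captures the progressivity of some state rate structures.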
The long-run elasticity estimate depicts the extent to which tax receipts grow in response to income growth but does not capture their short-run reaction to changes in income over the business cycle. We adjusted the historical state income tax receipt data to remove the effects of both policy changes and unusual capital gains realizations that influenced past receipts. It is necessary to purge past data of policy changes because these can substantially influence the estimated elasticity, and including those effects would not maintain the “policy neutral” paradigm of our base case. Similarly, unusual swings in capital gains represent past events that may have had a significant impact on receipts but are not expected to recur in any predictable way. To purge the effect of policy changes from the receipts data, we used data from The Fiscal Survey of the States: December 2006, National Association of State Budget Officers (NASBO). These data provide estimates of the aggregate effect of tax changes each year. To remove the effect of atypical capital gains realizations on receipts, we estimated a relationship between the share of federal income tax receipts attributable to capital gains and the highest marginal tax rate on capital gains. Any deviation between the actual share of capital gains receipts and the share implied by this relationship was removed from state personal income taxes. Using the adjusted receipts data, we estimated that a 1 percent increase in real taxable personal income generates approximately a 1.1 percent increase in real personal income tax receipts. The somewhat greater growth in receipts than in income reflects the progressivity exhibited in some state income tax rate structures. To project state personal income taxes in future years using this relationship, future values of taxable personal income are required. Ten-year taxable income projections come from CBO.
Thereafter, taxable income is held constant as a share of GDP at CBO’s projected tenth-year level. The estimated long-run elasticity of approximately 1.1 therefore implies that state personal income taxes increase slightly as a share of the economy over the projection period. See appendix IV equation 34 for more information on the state income tax analysis. In contrast to state personal income taxes, local personal income taxes as well as other personal taxes have displayed no discernible trend as a share of taxable personal income over the last 2 decades. In the base-case projections, therefore, we simply let these personal taxes grow at the same rate as taxable personal income. This implies that the local income and other personal tax rates remain unchanged over the simulation period. Because we allow these personal tax receipts to grow with taxable income and taxable income grows with GDP, local income and other personal taxes remain constant as a share of the economy in our long-term projections. See appendix IV equations 35 and 36 for more information on the local income tax and other personal tax analysis. The model divides sales tax receipts into two categories, general and selective (or excise) sales taxes. General sales taxes are levied as a percentage of the price of the items purchased. In contrast, selective sales taxes—which are levied on such goods as liquor, gasoline, and tobacco—are often exacted in terms of dollars and/or cents per item purchased, and the amount of the tax may be adjusted only intermittently. Accordingly, the model uses different relationships for the two types of sales taxes. In the absence of policy changes, general sales tax receipts should grow at the same rate as the consumption categories subject to the tax—that is, the sales tax base.
To evaluate the outlook for state and local government general sales tax receipts, we estimated the long-term responsiveness of our measure of the sales tax base to aggregate wage and salary income. Given projections of aggregate income, this elasticity provides a future path for the sales tax base. Receipts are then assumed to grow at the same rate as the tax base, implying that the average sales tax rate remains constant over the simulation period. The first step in this analysis is to develop a broad consumption measure as a proxy for the tax base using previous work as a guide. The proxy used here is total consumption expenditures excluding food and services, because the two categories are often not part of the sales tax base. In addition, because the sales tax base has been negatively affected by increases in mail order and Internet purchases, we also used Census data to remove an estimate of remote sales from the tax base. We then estimated the long-run elasticity of this sales tax base with respect to aggregate wages and salaries using historical data. We found that the estimated elasticity is 0.93, suggesting that over the long run, a 1.0 percent increase in real wages and salaries results in a 0.93 percent increase in the real sales tax base. In the projections, sales tax receipts grow at the same rate as the sales tax base. The sluggish growth in sales tax receipts relative to the economy reflects the shift in consumer spending toward services and remote sales, both of which are excluded from our proxy for the sales tax base. To accommodate the possible success of efforts to include remote sales in the tax base, we also estimated the long-run income elasticity of a sales tax base measure that includes remote sales. The resulting estimate of 0.97 implies that, even if remote sales were to be taxed, general sales tax receipts would still be unlikely to fully keep pace with overall economic growth. 
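The elasticity-based projection described here can be sketched in a few lines. The 0.93 elasticity is from the text; the starting base, the average tax rate, and the wage growth path are hypothetical placeholders.

```python
# Sketch of projecting general sales tax receipts from the estimated
# base-to-wages elasticity of 0.93 (from the text). The starting base,
# average tax rate, and wage growth rates below are hypothetical.

ELASTICITY = 0.93   # long-run elasticity of the sales tax base to wages
TAX_RATE = 0.06     # hypothetical constant average sales tax rate

def project_sales_tax(base, wage_growth_rates):
    """Receipts path when the base grows as wages raised to the
    elasticity and the average tax rate stays constant (the current
    policy assumption)."""
    receipts = []
    for g in wage_growth_rates:
        base *= (1 + g) ** ELASTICITY   # base grows slower than wages
        receipts.append(TAX_RATE * base)
    return receipts
```

Because the elasticity is below 1, receipts in this sketch rise in level but steadily lose ground relative to wages, the same "sluggish growth" pattern the report attributes to the shift toward services and remote sales.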
See appendix IV equations 37, 38, 39, and 41 for more information on the general sales tax analysis.

Selective Sales (Excise) Tax Receipts

In addition to the general sales tax, state and local governments impose a variety of selective sales (or excise) taxes on gasoline, alcoholic beverages, tobacco, public utilities, insurance receipts, and other items. Selective sales taxes generally take the form of a given amount of tax for each unit purchased (i.e., a unit tax). Most states, for example, levy a tax on gasoline that takes the form of a certain number of cents per gallon. Because the amount of the unit tax is adjusted only periodically, selective sales tax receipts tend to grow less rapidly than the value of the sales on which they are levied and less than incomes. Accordingly, we estimated the responsiveness of selective sales tax receipts to income rather than the responsiveness of the tax base to income. Our estimates indicated that the long-run elasticity, based on historical data, was 0.80. This implies that these receipts tend to grow much less than income and the general economy. See appendix IV equation 43 for more information on the selective sales (excise) tax analysis. In our simulations, corporate tax receipts grow at the same rate as CBO’s projections of corporate profits. After an initial adjustment period, CBO assumes profits will keep pace with overall economic growth. In our long-run simulations, therefore, corporate income taxes remain constant as a share of the economy. See appendix IV equation 49 for more information on the corporate income tax analysis. Property tax receipts are assumed to grow with our projections of the property tax base. In turn, property tax base projections are based on our estimate of the relationship between real GDP and the real market value of real estate owned by both the household sector and the nonfarm, nonfinancial business sector.
Data for the market value of real estate are obtained from the sectors’ balance sheets in the Federal Reserve Board’s flow of funds accounts. We estimated that the long-run responsiveness of property values to GDP is 1.13, which implies that the property tax base tends to grow somewhat more than GDP. See appendix IV equations 44, 45, 46, and 47 for more information on the property tax analysis. States and localities also collect a variety of other taxes, fees, and assessments that are classified in the NIPA as “other taxes on production.” These include such items as motor vehicle license fees, severance taxes, and special assessments. In our projections, we assume these receipts grow at the same rate as GDP. See appendix IV equation 48 for more information on the analysis of other taxes on production. In the NIPA, estate and gift taxes are considered capital transfer receipts rather than current tax receipts. This distinction has some relevance for how fiscal balance measures are calculated, as discussed later in this appendix. In the projections, we assume that taxes on estates and gifts grow at a rate equal to the yield on 10-year Treasury securities. See appendix IV equation 90 for more information on the estate and gift tax analysis. Contributions for government social insurance are a small component of state and local governments’ total current receipts related to payments for items such as disability insurance and unemployment coverage. The model assumes these tax receipts grow at the same rate as total wage and salary disbursements in the economy. See appendix IV equation 50 for more information on the analysis of contributions for government social insurance. Income receipts on assets include interest receipts, dividends, and rents and royalties earned on the financial holdings of state and local governments. 
Projections of income receipts on assets require future values for both the effective rate earned on financial assets and total financial assets owned by the sector. These calculations are discussed in appendix II. Current transfer receipts include several major categories of state and local government nontax receipts. These include three types of federal grants-in-aid: federal Medicaid grants, other federal grants for current expenditures, and federal investment grants. The sector also receives transfers from business and persons, such as fines and tobacco settlement payments. To project federal Medicaid grants in our base case, the model uses CBO’s intermediate growth Medicaid outlay projections. For the first 10 years these are available in the most recent edition of CBO’s Budget and Economic Outlook. Thereafter, we use CBO’s projections for Medicaid outlays as a share of GDP from the most recent long-term budget publication. Following the NIPA treatment, we subtract the “clawback” from Medicaid grants. Because of the passage of Medicare Part D, states pay the federal government a portion of the costs of prescription drugs previously covered under the Medicaid program. That is, the Medicare Part D program saves states some of the payments they would otherwise have made for prescription drugs provided to Medicaid enrollees, and states are required to remit a portion of those savings to the federal government. This required remittance is known as the clawback. CBO has estimated the clawback for the first 10 years. Thereafter, we extrapolate the clawback by holding it at a constant percentage of total federal Medicaid grants, equal to its share in the tenth year of CBO’s estimates. See appendix IV equations 54 and 55 for more information on the analysis of federal Medicaid grants. In addition to Medicaid grants, the federal government provides grants intended to finance other current expenditures of state and local governments. Examples include grants for education, welfare and social services, and housing and community services.
For the first 10 years, we project other federal grants for current spending by subtracting CBO’s Medicaid grant projections from CBO’s projection of total grants for current expenditures. After the first 10 years, we assume that other current grants grow with inflation plus population growth, keeping the real grant level per person stable, which is our implementation of a current policy scenario. Because federal investment grants finance investment, rather than current expenditures, the NIPA classifies investment grants as a form of capital transfer. While capital transfers are not part of the state and local government sector’s current receipts, they are included in the sector’s total receipts. For the first 10 years of the projections, we assume that federal investment grants grow at the same rate as CBO’s projections for federal capital transfers. After the first 10 years, we assume that investment grants grow with inflation plus population growth, which again reflects our implementation of a current policy scenario. Transfers from business include state and local fines and other nontaxes, such as tobacco settlements. Similarly, personal transfer payments to government include donations, fees, and fines. Both types of private transfers are assumed to grow with GDP in the projections. See appendix IV equations 56 and 57 for more information on the analysis of transfers from businesses and persons. The current surplus or deficit of government enterprises is the difference between receipts and costs for a variety of businesslike operations of state and local governments. These enterprises provide such services as water, sewerage, gas, electricity, toll facilities, liquor stores, air and water terminals, housing and urban renewal, public transit, and a residual category covering such items as lotteries. 
As we examined the trends in the various enterprises we found that some types of enterprises tend to have surpluses (e.g., lotteries, liquor stores), some tend to have deficits (e.g., public transit, public housing), and some tend to run roughly at a break-even level (e.g., electricity, water). The overall balance for the entire state and local enterprise sector was sometimes positive and sometimes negative. We determined that no trend could be established and we therefore assumed that across all state and local governments and across all the types of enterprises, the budgets are balanced. We therefore set the balance for the enterprise sector equal to zero. See appendix IV equation 58 for more information on the analysis of the current surplus or deficit of government enterprises. In the NIPA, expenditures are divided into five categories, some much larger than others. Figure 9 shows the five types. Consumption expenditures, the largest category, includes such items as the compensation of state and local government employees. Transfer payments include Medicaid payments. Smaller classifications are: interest paid on the outstanding debt of these governments, subsidies, and expenditures for investments in fixed capital and nonproduced assets. Consumption expenditures include an array of expenses related to direct spending to finance current operations. The largest component of these expenses is compensation of state and local government workers. In addition, the implicit cost of depleted capital—or depreciation—is a consumption expenditure. Other consumption expenditures include intermediate goods and services used to provide current services. Certain offsets are also made against consumption expenditures in the form of receipts for services the sector provides and the costs of investment goods the sector itself produces. The model projects several categories of employee compensation. 
These include (1) wages and salaries, (2) pension plan contributions, (3) employee and retiree health care costs, and (4) other employee benefits. In our base-case projections, which hold policy constant, we let employment grow with the population projections in the 2007 Social Security Trustees’ report and allowed wages and salaries per employee to grow with CBO’s projected increase in the employment cost index for private sector wages and salaries. These assumptions permit the sector to keep the number of employees per citizen constant while paying salaries that keep pace with those in the private sector. These assumptions reflect our judgment about how to implement a current policy scenario for these factors. See appendix IV equation 9 for more information on the projections of wages and salaries. The model includes a set of relationships that solve for the contribution that state and local sector employers must make in order to fund pension plans on an ongoing basis. These relationships are explained in appendix III. We developed estimates of the aggregate cost to state and local governments for the health insurance of both active and retired employees. These costs are projected on a pay-as-you-go basis and their derivation is described in appendix III. Other employee benefits include life insurance and workers’ compensation contributions. These expenditures are assumed to grow with employment in the sector plus increases in the employment cost index for wages and salaries. See appendix IV equation 68 for more information on the projections of other employee benefits. Consumption of fixed capital, or depreciation, is another component of NIPA state and local government consumption. We calculated depreciation in a given year as a constant percentage of the prior year’s capital stock. Thus, the projections of investment, depreciation, and the capital stock are all interrelated.
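The interrelated investment, depreciation, and capital stock projections can be sketched as a simple recursion, using the report's 2.8 percent depreciation rate; the starting values and the investment growth rate below are illustrative assumptions.

```python
# Sketch of the capital stock recursion: capital is carried forward with
# gross investment added and 2.8 percent of the prior year's stock (the
# report's depreciation rate) subtracted. Starting levels and the
# investment growth rate (inflation plus population) are hypothetical.

DEPRECIATION_RATE = 0.028

def project_capital(k0, inv0, growth, years):
    """Returns parallel lists of capital stock levels and annual
    depreciation (consumption of fixed capital)."""
    capital, depreciation = [k0], []
    inv = inv0
    for _ in range(years):
        dep = DEPRECIATION_RATE * capital[-1]    # this year's depreciation
        capital.append(capital[-1] + inv - dep)  # K_t = K_{t-1} + I_t - dep_t
        depreciation.append(dep)
        inv *= 1 + growth                        # investment grows each year
    return capital, depreciation
```

Because each year's depreciation is computed from the prior year's stock, and the stock depends on accumulated investment, all three series must be solved together, which is the interdependence the text describes.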
From a starting point for the level of capital stock in 2006, we increase the level of capital going forward each year by an estimate of gross investment. We assume that gross investment grows with population growth and inflation—our implementation of a current policy scenario. We then subtract a portion of the previous year’s capital stock to reflect depreciation. The depreciation rate used in the projections is 2.8 percent, which is based on values for real consumption of fixed capital relative to real capital stock reported in recent years of NIPA data. See appendix IV equations 70 through 75 for more information on the projections of the consumption of fixed capital (depreciation). The final component of consumption expenditures is a miscellaneous category that is equal to other consumption expenditures minus two items: own-account investment and sales to other sectors.

1. Other consumption expenditures include such intermediate purchases as rent, gasoline, utilities, and supplies.

2. Own-account investment is the compensation of employees and the expenditures related to the sector’s own production of investment goods, such as software and other capital assets. Because own-account investment expenditures represent the acquisition of long-term assets and are included in purchases of fixed assets, they must be subtracted from consumption expenditures to avoid double counting these expenses.

3. Sales to other sectors include tuition and related educational charges, health and hospital charges, and other sales of goods and services sold by the state and local sector that are not considered enterprise sales. Since these revenues are derived from the provision of services funded by consumption expenditures, they are netted against the costs of providing those services.

Despite the three separable components contained within this final classification of consumption expenditures, the model does not include explicit relationships explaining each of these components.
Instead, we assume that other consumption will grow with inflation plus population growth. See appendix IV equation 69 for more information on the projections of miscellaneous consumption expenditures. The model divides state and local government transfer payments to persons into two categories: medical care payments and other transfers to persons. Medical care transfers include both Medicaid and other medical care payments, the latter of which consist of general medical assistance and the State Children’s Health Insurance Program (SCHIP). The other transfer category includes a broad array of payments to individuals such as workers’ compensation, temporary disability, and family assistance. Because the Medicaid program provides matching grants to state governments, Medicaid grants and medical care transfer payments generally have been closely related. The close relationship between Medicaid grants and transfer payments supports modeling medical care transfers as a constant multiple of CBO’s projection for Medicaid grants. In recent years, the ratio of the sector’s medical transfer payments to Medicaid grants was 1.726, which implies that the federal government ultimately paid about 58 percent of total state and local medical care payments (including Medicaid, general medical assistance, and SCHIP payments) while states financed 42 percent of the total with their own funds. This relationship is applied to CBO’s Medicaid grant projections—which are available to 2050—for our projections of state and local medical care payments. See appendix IV equation 77 for more information on the projections of medical care transfer payments. Nonmedical transfer payments include a broad array of transfers such as temporary disability insurance, workers’ compensation, family assistance, education assistance, foster care, adoption assistance, and expenditures for food under the supplemental program for Women, Infants, and Children.
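As a numerical illustration of the grant-to-transfer relationship just described, a minimal sketch follows; the 1.726 ratio is the recent-year value reported above, while the grant amount in the example is hypothetical.

```python
# Medical care transfers are modeled as a constant multiple of CBO's
# projected Medicaid grants. The ratio is from the text; the grant
# amount used in the example is hypothetical.
MEDICAL_TRANSFER_RATIO = 1.726

def medical_care_transfers(medicaid_grants):
    """Project total state and local medical care transfer payments."""
    return MEDICAL_TRANSFER_RATIO * medicaid_grants

# Implied financing split: the federal grant covers 1/1.726 of the total.
federal_share = 1 / MEDICAL_TRANSFER_RATIO  # about 0.58
state_share = 1 - federal_share             # about 0.42
```

The financing shares fall directly out of the ratio: a dollar of Medicaid grants supports $1.726 of total medical transfers, so the federally financed portion is 1/1.726, roughly 58 percent.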
In our base-case projections of these payments, real spending per capita is kept constant, reflecting our current policy scenario. Equivalently, payments grow with inflation and population growth. See appendix IV equation 78 for more information on the projections of nonmedical transfer payments to persons. State and local governments pay interest on their outstanding debt. Interest payment projections require estimates of future effective interest rates on debt and the amount of debt outstanding. A more detailed discussion is provided in appendix II. Subsidies are a very small remaining category of current expenditures consisting mainly of payments to railroads. California’s payments to electricity suppliers from 2001 through 2003 were also classified as subsidies. In the simulations, subsidies are assumed to grow with inflation and population. See appendix IV equation 84 for more information on the projections of subsidies. Because they are capital outlays, gross investment and purchases of nonproduced assets are considered to be investment expenditures. As such, they are not counted as current expenditures. Investment expenditures cover the acquisition of all longer-lived assets, including structures, equipment, and software, while nonproduced assets consist of the net acquisition of land less oil bonuses. We grow both of these factors at the combined rate of inflation and population growth. See appendix IV equations 70, 71, and 92 for more information on the projections of gross investment and the purchase of nonproduced assets. We use two measures of fiscal balance in our report: net lending or borrowing and the operating balance. In addition, a third measure—net saving—is not directly discussed but is conceptually important. The first projected balance measure is NIPA’s net lending or borrowing, which is the difference between total receipts and total expenditures and is analogous to the federal unified surplus or deficit.
Thus, this balance is measured as the sum of all receipts discussed in this appendix minus all expenditures, with one exception. While the measure of total expenditures used to calculate net lending or borrowing includes both current expenditures and capital expenditures, it excludes the consumption of fixed capital (depreciation) because the latter is not a cash outlay. The value of net lending or borrowing must be financed by some combination of changes in financial assets and liabilities. See appendix IV equation 93 for more information on measurement of net lending or borrowing. The second balance measure we use is a GAO-developed measure that we call the operating balance excluding funds for capital expenditures. This measure is designed to be roughly akin to the operating budgets of subnational governments—budgets which these governments are generally required to balance or nearly balance. We develop a measure of receipts not available to finance current spending as the difference between investment spending and the change in medium- and long-term debt. Subtracting this amount from total receipts leaves the estimated receipts that are available to finance current expenditures. The expenditure component of the balance measure excludes both investment spending and depreciation. Our operating balance measure includes two further adjustments to NIPA-based totals. First, we exclude the current surplus/deficit of government enterprises from receipts because state and local government operating budgets exclude government enterprises. This adjustment has no effect on our base-case simulations because we assume the balance is equal to zero, but its incorporation accommodates potential alternative assumptions about the current balance of government enterprises. We also exclude a category of funds that we call the net balance of social insurance funds.
As noted earlier, state and local employees as well as employers make contributions to social insurance funds to pay for such items as temporary disability and workers’ compensation insurance. Although not explicitly mentioned earlier, payments from these funds are embedded in transfer payments that governments pay to workers when they are disabled or otherwise entitled to payments from these insurance funds. In our simulations, the balance is assumed to grow with total wage and salary disbursements. While governments hold balances in these funds, the funds are not available for operating expenses. See appendix IV equation 94 for more information on the measurement of the operating balance net of funds for capital expenditures. The model also solves for net state and local government saving, which is a key balance measure in the NIPA that has important macroeconomic implications. Net state and local government saving is simply the sum of all current receipts (that is, all receipts discussed earlier except investment grants and estate taxes) minus the sum of all current expenditures, where current expenditures include the consumption of fixed capital (depreciation) but exclude investment spending. See appendix IV equation 85 for more information on the measurement of net saving. This appendix describes how we developed estimates of financial earnings and interest paid on outstanding debt for the state and local government sector. This analysis starts with a method for translating state and local government budget surpluses or deficits—as measured by the difference between its total receipts and expenditures and labeled net lending or borrowing—into changes in the sector’s financial assets and financial liabilities. 
We also describe how we estimated the effective rate earned on the sector’s financial assets and the effective rate paid on its credit market liabilities in each year and applied these rates to the prior year’s assets and liabilities, respectively, to provide estimates of the sector’s asset income and interest payments. For any entity, there is a direct relationship between budget outcomes and changes in financial position. In particular, if expenditures exceed receipts, the gap needs to be financed by some combination of changes in financial assets and changes in financial liabilities—that is, if governments spend more than they take in, they must pay for it by issuing debt, cashing in assets, or some combination of the two. Conversely, if receipts exceed expenditures and the sector is a net lender, its net financial investment (the net change in financial assets minus the net change in financial liabilities) must equal the budget surplus. The relationship between budget outcomes and the sector’s financial position is shown in the following accounting identity:

total receipts - total expenditures = change in financial assets - change in financial liabilities

For a given difference between total receipts and total expenditures, that is, the value of net lending or borrowing, various combinations of changes in financial assets and changes in financial liabilities can satisfy this identity. To leverage this relationship for our projections we use two methods. The first applies when net lending or borrowing is in its historical range. The second is necessary for a good portion of our simulations because the ever-growing deficits that we find are inconsistent with historical experience, and relying on the first method would produce unrealistic results. Historically, total expenditures have usually exceeded total receipts, and the sector has been a net borrower. But the gap has rarely been large, so the borrowing requirements have usually been modest.
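To make the accounting identity concrete, a small numerical check (all dollar values here are hypothetical):

```python
# The gap between receipts and expenditures must equal net financial
# investment: the change in financial assets minus the change in
# financial liabilities. All values below are hypothetical.
def net_lending_or_borrowing(total_receipts, total_expenditures):
    return total_receipts - total_expenditures

def net_financial_investment(change_in_assets, change_in_liabilities):
    return change_in_assets - change_in_liabilities

# A sector spending 20 more than it collects might draw down 5 of assets
# and issue 15 of new debt; either side of the identity then equals -20.
balance = net_lending_or_borrowing(980.0, 1000.0)
financing = net_financial_investment(-5.0, 15.0)
assert balance == financing == -20.0
```

Many asset/liability combinations satisfy the identity for a given balance, which is why the projection method must pin down the liability components separately.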
If our estimate of net lending or borrowing falls into a range similar to historical experience, we invoke the accounting identity above by estimating the growth in four components of financial liabilities of the sector as provided in the Federal Reserve’s Flow of Funds Accounts. These components include three types of credit market instruments: short-term municipal securities, medium- and long-term municipal securities, and U.S. government loans, as well as “trade payables,” which are related to the acquisition of goods and services for conducting operations, and are not credit market liabilities. The growth in the values of these four types of liabilities, along with the estimate of net lending or borrowing, then determines the change in assets necessary to satisfy the identity. When the size of the balances is consistent with historical experience, the model projects each of these financial liabilities as follows:

Short-term debt. The model includes an econometric equation linking short-term debt to net saving. The equation also includes several dummy variables controlling for periods of unusual changes in short-term debt, and autoregressive and moving average error terms to control for serial correlation of the residuals and improve the equation’s fit. The equation indicates that short-term debt issuance is inversely related to the sector’s net saving, which implies that past deficits were financed in part by short-term borrowing.

Medium- and long-term debt. Changes in medium- and long-term municipal debt are mostly linked to capital expenditures (including land) and their financing. Some combination of tax receipts, federal investment grants, and debt can be used to finance state and local government investment. Accordingly, a relationship was estimated in which the change in the municipal bond rate explains how much debt is used to finance the gap between investment spending and federal investment grants. The equation also includes dummy variables covering periods when tax considerations and other unusual factors had an important role in the amount of debt issued. These dummy variables control for unusually large long-term debt issuances in 1978 and 1985 and unusually large decreases in outstanding long-term debt in 1994 and 1995. The projections assume that similar events will not occur over the simulation period. The relationship also includes an autoregressive term to control for the serial correlation of the error term.

Borrowing from the U.S. Treasury. The state and local government sector also borrows modest amounts from the U.S. Treasury. Our estimates imply that real growth in borrowing from the Treasury is negatively affected by real GDP growth. During periods of relatively strong growth, the sector borrowed less from the Treasury, and during periods of slow or negative growth, the sector borrowed more. The estimated equation also includes dummy variables to control for unusual borrowing increases in 1984 and 1985 and an unusual decrease in borrowing in 1988, along with autoregressive and moving average error terms.

Trade payables. Trade payables help finance the goods and services the sector acquires in the conduct of its operations. Accordingly, our base-case simulations let trade payables grow at the same rate as other consumption expenditures, which, in turn, grow with inflation plus population growth.

As noted above, the historical tendency has been for the state and local government sector to run small deficits and an occasional surplus as measured by net lending or borrowing. The method described above uses the accounting identity to grow financial assets and liabilities under such circumstances. If the sector runs large deficits, however, as we find within a few years of our simulation, this methodology generates unrealistic financial outcomes.
In particular, if the method were used throughout our simulation analysis, ever-increasing deficits would lead to declining values of financial assets—because under this method, assets are the residual variable that balances the accounting identity. In later years assets would decline so substantially that they would become negative. Since this makes no economic sense because governments require funds to meet current expenses, we developed an alternative method that is triggered when key relationships in our simulation fall outside historical experience. Our methodology “switch” is triggered when receipts fall so substantially short of expenditures—i.e., the sector is a substantial net borrower—that assets would grow more slowly than gross domestic product (GDP). If this occurs, in the next period the model changes how short-term debt is projected. Rather than being independently projected, short-term debt then becomes the residual variable that satisfies the accounting identity. In this alternative case, assets grow with GDP. Income receipts on assets are part of the sector’s receipts while its interest payments are part of its expenditures. We have described how the model determines the change in assets and liabilities in each year. These earnings or payments are calculated by setting an appropriate rate and applying that rate to these asset or liability values, respectively. In this section we describe how we determine the effective rates earned and paid and how we use those rates and the values of assets and liabilities to project asset income and interest payments of the state and local sector. Income receipts on assets are reported as a category of receipts in the National Income and Product Accounts (NIPA). We divide the income receipts on assets by the value of financial assets at the end of the previous year to calculate historical values for the effective rate earned on assets in each past year.
The evolution of these past effective rates reflects the turnover of old assets and the acquisition of new financial assets by state and local governments. This process can be captured by setting the effective rate earned on assets in a given year equal to a weighted average of the prior period’s effective rate and the given year’s prevailing market rate on the types of assets that the sector purchases. Using a simple regression model we developed weights of 0.81 for the prior year’s effective rate earned and 0.19 for the given year’s yield on 3-month Treasury securities, projections of which are available from the Congressional Budget Office (CBO). As stated, these weights reflect the gradual turnover and replacement of assets with newer issues. The product of the effective rate earned and the prior period’s financial assets equals the income earned on assets. A similar method is used to derive interest paid on outstanding debt of the sector. First, we divide the sector’s interest paid by the value of credit market liabilities outstanding at the end of the previous year to calculate historical values for the effective rate paid on liabilities in each past year. To develop weights for the simulations, we then model the effective rate of interest paid as a weighted average of the effective return in the previous period and the Aaa municipal bond rate for the given year. Based on our analysis, we set the effective rate paid equal to 0.88 times the prior year’s effective rate paid plus 0.12 times the given year’s projected Aaa municipal bond yield. These weights reflect the gradual turnover and replacement of municipal securities with newer issues. We generated our own projections of the municipal bond yield based on a relationship we estimated between the Moody’s Aaa municipal bond yield and the 10-year Treasury yield. We then use the estimated relationship and CBO’s projections of the 10-year Treasury yield to calculate future values of the municipal bond yield. 
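The effective-rate updating rules described above can be sketched directly; the weights (0.81/0.19 for the rate earned, 0.88/0.12 for the rate paid) are from the text, while the rates and balances in the example are hypothetical.

```python
# Each year's effective rate is a weighted average of last year's
# effective rate and the current market rate, reflecting gradual
# turnover of the portfolio. Weights are from the text; inputs are
# hypothetical.
def effective_rate_earned(prior_rate, treasury_3m_yield):
    return 0.81 * prior_rate + 0.19 * treasury_3m_yield

def effective_rate_paid(prior_rate, aaa_muni_yield):
    return 0.88 * prior_rate + 0.12 * aaa_muni_yield

def asset_income(rate_earned, prior_year_assets):
    """Income earned on assets: rate times the prior period's assets."""
    return rate_earned * prior_year_assets

# Example: a 5 percent prior effective rate drifts toward a 4 percent
# Treasury yield only gradually, reflecting slow asset turnover.
r = effective_rate_earned(0.05, 0.04)  # 0.81*0.05 + 0.19*0.04 = 0.0481
income = asset_income(r, 1000.0)       # income on 1,000 of prior assets
```

Because the prior-year rate carries most of the weight, a change in market yields feeds into the sector's effective rates only over a number of years.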
The sector’s interest payments are equal to the product of the effective interest rate paid and the sector’s prior year liabilities excluding trade payables. In the model, therefore, explicit interest payments only apply to the sector’s credit market liabilities. This appendix provides information on the development of simulations of future pension and health care expenditures for retirees of state and local governments. In particular, we provide information on (1) the development of several key demographic and economic factors such as future employment, retirement, and wages for the state and local workforce that are necessary for the simulations of future pension and retiree health care costs; (2) how we project the necessary contribution rate to pension funds of state and local governments; and (3) how we project the future yearly pay-as-you-go expenditures of employee and retiree health insurance. Key underlying information for the pension and health care expenditure simulations relate to future levels of employment, retirees, and wages. In particular, to estimate the expenditures for the post-retirement promises the sector has and will continue to make as well as expenditures for health care for active employees, we need to project the number of employees and retirees in each future year, as well as the dollar value of pension benefits that will be earned and the extent to which those benefits will be funded through employee contributions to pension funds. The cost of health care and the estimate of employees and retirees receiving health care benefits are discussed later in this appendix. We project the following key factors for each year during the simulation time frame: (1) the number of state and local government employees, (2) state and local government real wages, (3) the number of pension beneficiaries, (4) average real benefits per beneficiary, and (5) yearly employee contributions to state and local government pension plans. 
To project the level of employment in each future year, we assume that state and local employment grows at the same rate as total population under the intermediate assumptions of the Board of Trustees of the Old-Age, Survivors, and Disability Insurance (OASDI) program, commonly referred to as Social Security—that is, we assume that the ratio of state and local employment to total population remains constant. The Trustees assume that population growth gradually declines from 0.8 percent during the next decade to a steady rate of 0.3 percent per year beginning in 2044. Accordingly, state and local government employment growth displays the same pattern in our projections. The pension benefits that employees become entitled to are a function of the wages they earned during their working years. We assume that the real employment cost index for the state and local sector will grow at a rate equal to the difference between the Congressional Budget Office (CBO) assumptions for the growth in the employment cost index (ECI) for private sector wages and salaries and inflation as measured by the Consumer Price Index for All Urban Consumers (CPI-U). CBO’s assumptions for growth in the ECI and the CPI-U are 3.3 percent and 2.2 percent per year, respectively, implying real wage growth of 1.1 percent per year during the simulation time frame. Aggregate real wages are assumed to grow at the combined rate of growth of the real employment cost index just described and of employment. As noted previously, the Trustees project that population growth slows from 0.8 percent in the upcoming decade to a steady rate of 0.3 percent after 2044. Because population growth drives employment in our projections, this slowdown implies that aggregate real wage growth slows from 1.9 percent per year to a steady long-run rate of 1.4 percent.
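The growth-rate arithmetic above can be checked directly; following the text, the rates are combined additively as an approximation.

```python
# CBO assumptions from the text: 3.3 percent ECI growth and 2.2 percent
# CPI-U inflation. Growth rates combine additively as an approximation.
ECI_GROWTH = 0.033
CPI_U_INFLATION = 0.022

real_wage_growth = ECI_GROWTH - CPI_U_INFLATION  # 1.1 percent per year

def aggregate_real_wage_growth(population_growth):
    """Employment tracks population, so aggregate real wages grow with
    real wages per worker plus population growth."""
    return real_wage_growth + population_growth

near_term = aggregate_real_wage_growth(0.008)  # about 1.9 percent
long_run = aggregate_real_wage_growth(0.003)   # about 1.4 percent
```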
Future growth in the number of state and local government retirees—many of whom will be entitled to pension and health care benefits—is largely driven by the size of the workforce in earlier years. While actuaries use detailed information and assumptions regarding the age, earnings, service records, and mortality rates applicable to the entities they evaluate, information in such detail is not available for the state and local government sector as a whole. This lack of detailed data necessitated the development of a method of projecting aggregate state and local beneficiary growth that is much simpler than the methods that actuaries employ. The method we developed reflects the logic that each year’s growth in the number of beneficiaries is linked to past growth in the number of employees. Total state and local government employment from 1929 through 2005 was obtained from the national income and product accounts (NIPA) tables 6.4a, b, c, and d. The Census Bureau provided data on the number of state and local pension beneficiaries from 1992 through 2005, the period for which continuous observations were available. Cyclical swings in the employment series were removed using a Hodrick-Prescott filter. Then, both the employment and beneficiary series were logged and first-differenced, transforming the data from levels to proportionate changes. We developed a routine that searched across 45 years of lagged employment growth to select a set of weights for the years in which past employment growth best explained a given year’s growth in beneficiaries. The routine included the restrictions that the weights must be nonnegative and sum to one. The method produced a relationship that reflected the contribution of a particular past year’s employment change in explaining a given year’s change in retirees.
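A stylized sketch of the relationship just described follows. The series and lag weights here are hypothetical; the actual estimation searched across 45 lags using a Hodrick-Prescott-filtered employment series.

```python
import math

# Beneficiary growth in a given year is modeled as a weighted sum of
# past years' employment growth; the weights must be nonnegative and
# sum to one. The series and weights below are hypothetical.
def log_diff(levels):
    """Convert a level series to proportionate (log) changes."""
    return [math.log(b) - math.log(a) for a, b in zip(levels, levels[1:])]

def predicted_beneficiary_growth(employment_growth, lag_weights):
    """employment_growth[-1] is last year's growth; lag_weights maps a
    lag in years to its weight."""
    assert all(w >= 0 for w in lag_weights.values())
    assert abs(sum(lag_weights.values()) - 1.0) < 1e-9
    return sum(w * employment_growth[-lag] for lag, w in lag_weights.items())

# With steady 1 percent employment growth, any valid weights imply
# 1 percent beneficiary growth.
growth = [0.01] * 40
weights = {21: 0.5, 22: 0.3, 23: 0.2}  # hypothetical weights
pred = predicted_beneficiary_growth(growth, weights)
```

The nonnegativity and sum-to-one restrictions make the weights interpretable as the shares of a given year's retiree inflow attributable to hiring in particular past years.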
In particular, the estimated relationship suggests that beneficiary growth in a given year is largely determined by employment growth 21, 22, 23, and 34 years prior to the given period. This pattern appears consistent with the categories of workers that the sector employs. Many fire and police positions, for example, offer faster pension accrual or early retirement due to the physical demands and risks of the work, while many other state and local workers have longer careers. While, in the long run, the average real benefit level should grow at the same rate as real wages—that is, at 1.1 percent per year—in the first decades of the projection the average real benefit will be affected by real wage changes that occurred before the projection period. Accordingly, we developed a relationship that reflects how the average real benefit level will change over time according to changes in the number and average real benefit level of three subsets of the retiree population: (1) new retirees entering the beneficiary pool, (2) new decedents leaving the pool, and (3) the majority of the previous year’s retirees who continue to receive benefits during the given period. Each group’s real benefit is linked to the real wage level in the average year of retirement for that group. Thus, to determine the average real benefit overall in any future year, we need weights and real wage indexes for the three groups that can be used to develop a rolling average real wage of the recipient pool in each future year. Earlier we described how we project the percentage change in the total number of beneficiaries between two successive years, but this difference actually comprises two elements: the percentage change in new retirees minus the percentage change in decedents. Therefore, to determine the weight for new retirees, we also need an estimate of the number of new decedents in each year.
In order to estimate a “death rate,” we utilize Social Security Administration data on terminated benefits and total Social Security recipients, which excludes disability recipients. Our death rate for the forecast period is set equal to the number of terminated Social Security recipients divided by the total number of Social Security recipients in 2003—3.67 percent. This analysis then enables a derivation of weights for each of the three groups as follows: the weight for new retirees is the number of beneficiaries this year, less the number of beneficiaries last year who are still alive, divided by the number of beneficiaries this year; the weight for continuing recipients is last year’s beneficiaries divided by this year’s beneficiaries; and the weight for the deceased is the death rate multiplied by last year’s beneficiaries divided by this year’s beneficiaries. Next, we identified the real employment cost index that determines the real benefit level for each of these three groups. We do so by estimating the average retirement year applicable to each of the three groups. First, we assume the average retirement age is 60. We developed this estimate based on an analysis of the March Supplement to the Current Population Survey (CPS) for 2005-2006, which indicated that the average state and local government retiree had retired at 60 years of age. We also analyzed detailed data on the age distribution of Social Security recipients provided by the Office of the Actuary of the Social Security Administration. These data showed that the average age for new decedents is about 81 during the initial years of OASDI’s simulations, and we thus used a 21-year lag—81 minus 60—to estimate the real wage applicable to this group. For the newly retired group, we use the given year’s employment cost index.
For the remaining retirees—those already retired and remaining in the group—we use information from CPS for 2005, which indicated that the average age of a retired state or local government employee was 68. Therefore, we apply an 8-year lag to the real employment cost index to determine real benefits of this group. We then use this information to create a weighted average employment cost index for the retiree pool in any given year. The ratio of the given year’s weighted average real wage index to the previous year’s weighted average real wage index should equal the ratio of the current to the previous year’s average real benefit levels. Thus, a given year’s average real benefit level grows at the same rate as the rolling index of real wages. The relationship has the desired property of capturing the effect of historical real wage growth in the initial decades of the projection before converging to a long-run average annual growth rate of 1.1 percent, which is consistent with our assumption for real wage growth. To calculate aggregate real pension benefit payments, the average real benefit is multiplied by the number of beneficiaries projected. Employee contributions represent an important funding source for state and local government pension plans. In 2006, for example, NIPA data indicate that employees contributed 4.5 percent of their wages and salaries to their retirement funds. To estimate future employee contributions, we simply assume that the 2006 contribution rate is held constant as a share of aggregate wages. The purpose of the pension simulations is to estimate the steady contribution rate that state and local governments would need to make each year going forward to ensure that their pension systems are fully funded on an ongoing basis. Our goal is to estimate the financial commitments to employees that have been and are likely to continue to be made by the state and local sector to better understand the full fiscal outlook for the sector.
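One plausible reading of the rolling index described above can be sketched as follows. The 3.67 percent death rate and the 8- and 21-year lags are from the text; the sign structure (new retirees added at the current index, the prior pool carried at an 8-year lag, decedents removed at a 21-year lag) and all input values are illustrative assumptions.

```python
# Rolling real-benefit index sketch: last year's pool is carried at an
# 8-year wage lag, decedents are removed at a 21-year lag, and new
# retirees enter at the current year's index. Inputs are hypothetical.
DEATH_RATE = 0.0367  # terminated / total Social Security recipients, 2003

def rolling_benefit_index(wage_index, prior_beneficiaries, beneficiaries, year):
    """wage_index maps a year to the real employment cost index level."""
    survivors = prior_beneficiaries * (1 - DEATH_RATE)
    w_new = (beneficiaries - survivors) / beneficiaries
    w_continuing = prior_beneficiaries / beneficiaries
    w_deceased = DEATH_RATE * prior_beneficiaries / beneficiaries
    return (w_new * wage_index[year]
            + w_continuing * wage_index[year - 8]
            - w_deceased * wage_index[year - 21])

# Sanity check: the three weights net to one, so with a flat wage index
# the benefit index is flat as well.
flat = {y: 1.0 for y in range(1980, 2051)}
index_value = rolling_benefit_index(flat, 100.0, 102.0, 2020)
```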
As such, our analysis projects the liabilities that the sector is likely to continue to incur in the future. In the previous section we discussed how we calculate a variety of critical demographic and economic factors that are necessary for this analysis. The necessary contribution rate can now be derived according to straightforward logic: the benefits that are promised to employees (including liabilities already made and promises that will be made in the future) must be paid from three sources: (1) existing pension fund assets at our starting point in 2006, (2) contributions that employees will continue to make to those funds in the future, and (3) contributions that employers will make to those funds in the future. Mathematically we start with the present value of future pension benefits. We then subtract two things: the value of pension fund financial assets in 2006—which was approximately $2.979 trillion—and the present value of employee contributions. The present value of the remaining liability is the value that the governments must fund. We then divide that present value by the present value of future wages. This yields the steady level of employer contribution, relative to wages, that would need to be made in every year between 2006 and 2050 to fully fund promised pension benefits. Although we are only interested in developing necessary contribution rates over the simulation time frame—that is, until 2050—we actually have to derive the contribution rate for a longer time frame in order to find the steady level of necessary contributions. This longer time frame is required because the estimated contribution rate increases as the projection horizon increases and eventually converges to a steady state. If the projection period is of insufficient length, the steady level of contribution is not attained and the necessary contribution rate is understated. As such, all of the flows in the calculation extend 400 years into the future. 
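The funding logic just described reduces to a present-value calculation. A minimal sketch follows, using hypothetical cash-flow streams; the 5 percent real discount rate matches the return assumption used in the text.

```python
# Steady employer contribution rate: the present value of promised
# benefits, net of current assets and future employee contributions,
# divided by the present value of future wages. Streams are hypothetical.
REAL_RETURN = 0.05  # real rate of return used for discounting

def present_value(flows, rate=REAL_RETURN):
    """flows[t] is the real cash flow t years from now."""
    return sum(f / (1 + rate) ** t for t, f in enumerate(flows))

def required_contribution_rate(benefits, employee_contributions, wages, assets):
    unfunded = (present_value(benefits) - assets
                - present_value(employee_contributions))
    return unfunded / present_value(wages)

# Hypothetical flat streams over a long horizon; a long horizon lets the
# rate converge toward its steady level, as discussed in the text.
years = 400
rate = required_contribution_rate(
    benefits=[15.0] * years,
    employee_contributions=[4.5] * years,
    wages=[100.0] * years,
    assets=100.0,
)
```

Because distant flows are heavily discounted at 5 percent, extending the horizon beyond a few hundred years changes the rate very little, which is why the text's 400-year window suffices to find the steady level.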
We use a real rate of return on pension assets of 5.0 percent to discount future flows when deriving present values. Applying this analysis, we found that, in aggregate, state and local government contributions to pension funds would need to increase by less than half a percentage point of wages to fund, on an ongoing basis, the pension liabilities they have incurred and will continue to incur. In particular, the 2006 pension contributions for the sector amounted to 9 percent of wages, and our base-case estimate is that the level would need to be 9.3 percent each year to fully fund pensions.

To examine the sensitivity of our model results, we altered our assumptions regarding the expected real yield and found that the results are highly sensitive to this rate. For our primary simulations, we based the expected real yield on actual returns on various investment instruments over the last 40 years, as well as the disposition of the portfolio of assets held by the sector over the last 10 years. This generated a real yield of 5 percent. But some pension experts have expressed concern that returns on equities in the future may not be as high as those in the past. In fact, some analysts believe that an analysis of this type should consider only "riskless returns." Under such an approach we would assume that all pension funds are invested in very safe financial instruments such as government bonds. We estimated the necessary steady level of employer contributions holding all elements in the model constant except the expected real yield. In particular, we analyzed a 4 percent real yield and a 3 percent real yield—the latter of which is a reasonable proxy for a riskless rate of return. We found that if returns were only 4 percent, the necessary contribution rate would rise to 13.4 percent, and if we used a risk-free return of roughly 3 percent, the necessary contribution rate would need to be much higher—nearly 18.1 percent of wages.
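A stylized sensitivity check makes the direction of this result easy to see. The asset level, flow shares, and growth rate below are hypothetical, so the output will not match the 18.1, 13.4, 9.3, or 4.4 percent figures from the full model; the sketch shows only the qualitative pattern, that the required rate falls as the assumed real yield rises.

```python
# Stylized sensitivity of the required steady contribution rate to the
# assumed real yield r. All parameters are illustrative placeholders,
# not GAO's inputs: wages start at 1.0 and grow at g, benefits and
# employee contributions are fixed shares of wages, assets0 is the
# starting fund level relative to first-year wages.
def required_rate(r, assets0=3.0, years=400, g=0.011,
                  benefit_share=0.20, ee_share=0.045):
    pv_wages = sum((1 + g) ** t / (1 + r) ** (t + 1) for t in range(years))
    pv_benefits = benefit_share * pv_wages
    pv_ee = ee_share * pv_wages
    return (pv_benefits - assets0 - pv_ee) / pv_wages
```

A higher discount rate shrinks the present value of distant benefit promises relative to existing assets, so the employer share needed each year declines—mirroring the 3, 4, 5, and 6 percent comparisons discussed in the text.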
On the other hand, if real returns were higher than our base-case level—perhaps 6 percent—the necessary contribution rate would be only 4.4 percent, much lower than the current contribution rate.

Most state and local governments pay for employee and retiree health insurance on a pay-as-you-go basis—that is, these benefits are generally not prefunded. We made projections of the pay-as-you-go expenditures for health care for the sector, as a percentage of wages, in each year until 2050. To estimate expenditures for employee and retiree health insurance in future years, we made many of the same assumptions as for the pension analysis. In particular, we use the same method to develop projections of employment in the sector, the number of retirees, and the level of wages. An additional assumption for the health care analysis is that in future years the same percentage of state and local government employees and retirees will be enrolled in health insurance through their employer as were enrolled in 2004—the most recent year for which data were available.

For retirees, we developed this measure from two data sources. The Census Bureau's State and Local Government Employee-Retirement System survey provided data on the total number of state and local retirees, and the Department of Health and Human Services' Medical Expenditure Panel Survey (MEPS) provided data on state and local government retirees who are covered by employer-provided health insurance. Based on these data sources, we found that the share of retirees with health insurance is 44 percent, and we hold this share constant through the simulations. From MEPS we also obtained the most recent year's state and local government spending on health care for retirees. For active employees, we also used MEPS data on employees covered by health insurance and compared that to BEA data on total employment in the sector.
This provided us with a finding that 71 percent of active employees receive health benefits. Again, we hold this value constant over the simulation time frame.

One of the central assumptions we must make to estimate the pay-as-you-go health care expenditures for employees and retirees in future years concerns the growth in the cost of health care itself. The cost of health care has been increasing faster than gross domestic product (GDP) for many years. As such, we developed assumptions about how much faster health care costs would grow, relative to the economy, in future years. The extent to which the per-person cost of health care is expected to grow beyond GDP per capita is called the "excess cost factor." We developed these estimates based on our own research and discussions with experts. In particular, we assume that the excess cost factor averages 1.4 percentage points per year through 2035 and then begins to decline, reaching 0.6 percentage points by 2050.

Using these assumptions, we developed projections of the expenditures on health care for employees and retirees in each year through 2050. We found that the projected expenditures for retiree health insurance, while not a large component of state budgets, will more than double as a percentage of wages over the next several decades. In 2006, these expenditures amounted to approximately 2.1 percent of wages, and by 2050 we project that they will grow to nearly 5.1 percent of wages—roughly a 150 percent increase. As with the projections of necessary pension contributions, our estimates of these expenditures are highly sensitive to certain of our assumptions. In particular, the assumptions regarding health care cost growth are critical. For example, if health costs were to rise only at the rate of GDP per capita, expenditures for retiree health care would grow, as a percentage of wages, only from 2.1 percent today to 3.0 percent by 2050.
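The assumed excess cost factor path can be written down directly. In this sketch the 1.4- and 0.6-percentage-point endpoints come from the text; the linear shape of the decline after 2035 and the 2 percent GDP-per-capita growth rate are our own illustrative assumptions.

```python
# Illustrative sketch of the "excess cost factor" path: 1.4 percentage
# points per year through 2035, then declining (linearly, by assumption)
# to 0.6 points by 2050.
def excess_factor(year):
    if year <= 2035:
        return 0.014
    return 0.014 - (0.014 - 0.006) * (year - 2035) / (2050 - 2035)

# Per-enrollee cost grows at GDP-per-capita growth plus the excess
# factor; the 2 percent growth rate here is a placeholder, not a
# projection from the report.
def project_cost(cost0, gdp_pc_growth=0.02, start=2006, end=2050):
    cost = cost0
    for year in range(start + 1, end + 1):
        cost *= (1 + gdp_pc_growth + excess_factor(year))
    return cost
```

Because the excess factor stays positive throughout, per-enrollee costs outpace GDP per capita in every year, which is what drives health expenditures upward as a share of wages in the projections.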
Conversely, if health costs were to grow at twice the rate we assume in the base case, these costs would amount to 8.7 percent of wages by 2050. Expenditures on health care for active employees amounted to 12.8 percent of wages in 2006 and, by the end of the simulations in 2050, are expected to reach 22.2 percent of wages. In the optimistic scenario—with lower escalation in the cost of health care—we found that expenditures on employee health care would rise only slightly, to 13 percent of wages, by 2050. However, under the pessimistic scenario, characterized by more rapidly growing health costs, expenditures on health care for active employees rise to 37.7 percent of wages in 2050.

This appendix lists the 105 equations that are used to simulate the base case for the State and Local Model. The following notation is used:

Variable(-X): The variable lagged X periods.
(Expression>=0): An indicator term that equals one when the expression evaluates to greater than or equal to zero and is zero otherwise.
AR(X): An auto-regressive term of order X is included in the econometric specification.
MA(X): A moving average term of order X is included in the specification.
YEAR: The current year being forecasted.

1. EGSLALL = (NP / NP(-1)) * EGSLALL(-1) * (LYFCST - YEAR>=0) + EGSLALL(-1) * (EGSLALL(-1) / EGSLALL(-2)) * (LYFCST - YEAR<0)
2. EGSLALL_HP = (EGSLALL / EGSLALL(-1)) * EGSLALL_HP(-1)
3. EGSL = (EGSLALL / EGSLALL(-1)) * EGSL(-1)
4. DLOG(BENEFICIARIES) = 0.5594068 * DLOG(EGSLALL_HP(-34)) + 0.0003020 * DLOG(EGSLALL_HP(-25)) + 0.0002169 * DLOG(EGSLALL_HP(-24)) + 0.0225695 * DLOG(EGSLALL_HP(-23)) + 0.1913009 * DLOG(EGSLALL_HP(-22)) + 0.2262039 * DLOG(EGSLALL_HP(-21))
5. JECISTLC = JECISTLC(-1) * (JECIWSP / JECIWSP(-1))
6.
59. GSLEXPC = GSLC + YPTRFGSL + GSLINTPAY + SUBGSL +
60. GSLC = GSLCWSS + GSLCKF + GSLCO
61. GSLCWSS = GSLCWAGE + GSLCPEN + GSLCHLTH + GSLCOTHBEN
62. GSLCHLTH = RETGSLCHLTH + EEGSLCHLTH
63.
(EEGSLCHLTH / EGSLHLTH) = (EEGSLCHLTH(-1) / EGSLHLTH(-1)) * (HLTHNHEEXCGR) * ((GDP / NP) / (GDP(-1) / NP(-1)))
64. (RETGSLCHLTH / RETHLTH) = (RETGSLCHLTH(-1) / RETHLTH(-1)) * (HLTHNHEEXCGR) * ((GDP / NP) / (GDP(-1) / NP(-1)))
65. RETHLTH = RETHLTHPERBEN * BENEFICIARIES * (EGSL / EGSLALL)
66. RETHLTHPERBEN = RETHLTHPERBEN(-1)
67. EGSLHLTH = EGSL * (EGSLHLTH(-1) / EGSL(-1))
68. GSLCOTHBEN = GSLCOTHBEN(-1) * (JECISTLC / JECISTLC(-1)) * (EGSL / EGSL(-1))
69. GSLCO = GSLCO(-1) * (NP / NP(-1)) * (JPGDP / JPGDP(-1))
70. GSLGI = GSLGI(-1) * (NP / NP(-1)) * (JPGDP / JPGDP(-1))
71. GSLGIR = GSLGI / (JPGDP / 100)
72. KGSLR = KGSLR(-1) + GSLGIR - GSLCKFALLR
73. GSLCKFALLR = 0.027508 * KGSLR(-1)
74. GSLCKFALL = GSLCKFALLR * (JPGDP / 100)
75. GSLCKF = GSLCKF(-1) * (GSLCKFALL / GSLCKFALL(-1))
76. YPTRFGSL = YPTRFGSLPAM + YPTRFGSLPAO
77. YPTRFGSLPAM = 1.726 * GFAIDSLSSMED
78. YPTRFGSLPAO = YPTRFGSLPAO(-1) * (NP / NP(-1)) * (JPGDP / JPGDP(-1))
79. RMMUNIAAA_RESID = RMMUNIAAA_RESID(-1)
80. RMMUNIAAA = 0.707151184659468 + 0.761815685970831 *
81. GSLINTPAY = (RATEOWED / 100) * SLG_LCRED(-1)
82. RATEOWED = 0.8765652676 * RATEOWED(-1) + (1 - 0.8765652676) *
83. NETASSETPAY = GSLINTPAY - YGSLA
84. SUBGSL = SUBGSL(-1) * (NP / NP(-1)) * (JPGDP / JPGDP(-1))
95. D(DBTGSLLT) / (GSLGI + GSLNETPCHNA - IGRANT) = 0.478671765326665 - 0.0678320738849175 * D(RMMUNIAAA) + 0.469267348080956 * D78 + 1.3549185597115 * D85 - 0.571578002864546 * D94 - 0.549417581324232 * D95 + [AR(1) = 0.720101110280723]
96.
D(DBTGSLST) / GDP = (0.000435862040461702 - 0.237982866875603 * D(NETSAVGSL) / GDP - 0.00116135948551944 * D75 - 0.00305076556061719 * D76 - 0.00187119472727474 * D77 - 0.00199332729108819 * D87 + [AR(1) = 0.419998514150027, AR(3) = 0.377010382422796, MA(1) = -0.378513568750189, MA(2) = 0.320241719235162, MA(3) = -0.93530313308922, BACKCAST = 1964]) * (1 - SLG_AFINLSWITCH(-1)) + ((D(SLG_AFINL) - D(DBTGSLLT) - D(TRADEPAYABLES) - D(DBTGSLUS) - NETLENDGSL) / (GDP)) * (SLG_AFINLSWITCH(-1))
97. DDBTGSLSTGDP_GRECON = (0.000435862040461702 - 0.237982866875603 * D(NETSAVGSL) / GDP - 0.00116135948551944 * D75 - 0.00305076556061719 * D76 - 0.00187119472727474 * D77 - 0.00199332729108819 * D87 + [AR(1) = 0.419998514150027, AR(3) = 0.377010382422796, MA(1) = -0.378513568750189, MA(2) = 0.320241719235162, MA(3) = -0.93530313308922, BACKCAST = 1964])
98. DBTGSLTE = DBTGSLLT + DBTGSLST
99. DLOG(DBTGSLUS / (JPGDP / 100)) = 0.026989088699108 - 1.46693539308972 * DLOG(GDPR) + 0.671368273891861 * D84 + 0.347532134271165 * D85 - 1.11842691662586 * D88 + [AR(2) = -0.304187166509849, MA(1) = -0.961550584145944, BACKCAST = 1970]
100. TRADEPAYABLES = TRADEPAYABLES(-1) * (GSLCO / GSLCO(-1))
101. SLG_LCRED = DBTGSLLT + DBTGSLST + DBTGSLUS
102. SLG_LFINL = SLG_LCRED + TRADEPAYABLES
103. SLG_AFINLSWITCH = ((SLG_AFINL / SLG_AFINL(-1)) - (GDP / GDP(-1))<=0) * (YEAR>(LYACTUAL + 1)) * (DSLG_AFINL_GRALT / SLG_AFINL(-1) - (GDP / GDP(-1) - 1)<=0)
104. SLG_AFINL_GRALT = GDP * DDBTGSLSTGDP_GRECON + D(DBTGSLLT) + D(TRADEPAYABLES) + D(DBTGSLUS) + NETLENDGSL
105. D(SLG_AFINL) = (NETLENDGSL + D(SLG_LFINL)) * (1 - SLG_AFINLSWITCH(-1)) + (SLG_AFINL(-1) * ((GDP / GDP(-1)) - 1)) * (SLG_AFINLSWITCH(-1))

This appendix describes the variables in the state and local model as well as their sources.

BENEFICIARIES = Total retired state and local government beneficiaries receiving periodic benefit payments, thousands; Census Bureau Government Retirement System.
Values prior to 1981 are imputed using a constant growth rate between available data. CBASE = Personal consumption less food, services, electronic and mail- order sales, billions of dollars; U.S. Commerce Department, Bureau of Economic Analysis NIPA table 2.3.5 lines 1 - 7 - 13 less Census Current Business Reports, Annual Revision of Monthly Retail and Food Services: Sales and Inventories—January 1992 Through February 2006 Table 2 NAICS code 4541 (for 1992 and later years) http://www.census.gov/prod/www/abs/br_month.html or Census Historical Retail Trade Data (SIC-Based) http://www.census.gov/mrts/www/mrtshist.html SIC code 5961 (through 1991). CBASER = Real personal consumption less food, services, electronic and mail-order sales, billions of 2000 dollars; calculated by GAO. CBASER_RESID = Residual from the sales tax base equation, calculated by GAO. CELECMAIL = Electronic and mail order sales, billions of dollars; Census Bureau NAICS 4541 for 1992 – 2006, SIC Code 5961 for 1978–1991 and estimated with an exponential function for 1960–1977. CLAWBACK = Payments from states to the federal government related to the savings incurred as part of Medicare Part D, billions of dollars; CBO, The Budget and Economic Outlook (Washington, D.C.: January 2007) Box 3-2. CLAWBACKPER = Payments from states to the federal government related to the savings incurred as part of Medicare Part D, as a percentage of Medicaid Grants from the Federal Government; calculated by GAO. CPIU=Consumer price index all urban, index 1982 - 1984 = 100; CBO, The Budget and Economic Outlook: An Update (Washington, D.C.: August 2007), Table C-1. Di = Dummy variable; 1 in year i, 0 in other years. DBTGSLLT = Medium and long-term municipal securities outstanding, billions of dollars; Board of Governors of the Federal Reserve System, Flow of Funds Table L.105 line 22. 
DBTGSLST = Short-term municipal securities outstanding, billions of dollars; Board of Governors of the Federal Reserve System, Flow of Funds Table L.105 line 21. DBTGSLTE = Municipal securities outstanding, billions of dollars; Board of Governors of the Federal Reserve System, Flow of Funds Table L.105 line 20. DBTGSLUS = U.S. Government loans to state and local governments outstanding, billions of dollars; Board of Governors of the Federal Reserve System, Flow of Funds Table L.105 line 23. DDBTGSLSTGDP_GRECON = Projected growth in short-term municipal securities outstanding based on econometric specification, percentage of GDP; calculated by GAO. DEATHRATE = Percentage of OASDI beneficiaries that have been terminated; Calculated by the GAO as terminated OASDI beneficiaries from SSA table 6.F1 and total OASDI beneficiaries from table 5.A4. Missing values imputed with a constant growth rate. DEPRATE = Consumption of fixed capital, as a percentage of the prior period’s net capital stock of state and local governments; calculated by the GAO as GSLCKFALLR/KGSLR(-1). DSLG_AFINL_GRALT = Alternate projection of the change in total financial assets of state and local governments based on econometric projection of short-term municipal securities outstanding, billions of dollars; calculated by GAO. EECONPEN = Aggregate pension contributions by state and local employees, billions of dollars; U.S. Commerce Department, Bureau of Economic Analysis NIPA Table 6.11A, 6.11B and 6.11C Line 50 and 6.11D Line 52. EECONPENR = Aggregate pension contributions by state and local employees, billions of dollars deflated by the consumer price index; calculated by GAO. EEGSLCHLTH = State and local government health care contributions for active employees, billions of dollars; U.S. Department of Health and Human Services, Agency for Healthcare Research and Quality, Medical Expenditure Panel Survey. 
ESTATETAX = State and local government estate and gift taxes paid by persons, billions of dollars; U.S. Department of Commerce, Bureau of Economic Analysis, NIPA table 5.10 line 9. EGSL = State and local general government employees, thousands; U.S. Department of Commerce, Bureau of Economic Analysis Table 6.4A, 6.4B, and 6.4C. Full-Time and Part-Time Employees by Industry line 83 and 6.4D Full-Time and Part-Time Employees by Industry line 93. EGSLALL = State and local government employees, thousands; U.S. Department of Commerce, Bureau of Economic Analysis Table 6.4A, 6.4B and 6.4C Full-Time and Part-Time Employees by Industry line 82 and 6.4D Full-Time and Part-Time Employees by Industry line 92. EGSLALL_HP = Hodrick-Prescott filtered series of EGSLALL, thousands; calculated by GAO. EGSLHLTH = State and local government employees receiving health care benefits, thousands; U.S. Department of Health and Human Services, Agency for Healthcare Research and Quality, Medical Expenditure Panel Survey. EXOGEXPSHIFT = Exogenous change in expenditures for state and local governments. Variable is zero in baseline scenario and non-zero in sensitivities involving alternative expenditures, billions of dollars; calculated by GAO. EXOGTAXSHIFT = Exogenous change in tax revenue for state and local governments. Variable is zero in baseline scenario and non-zero in sensitivities involving alternative tax revenues, billions of dollars; calculated by GAO. GDP = Gross domestic product, billions of dollars; U.S. Department of Commerce, Bureau of Economic Analysis, NIPA Table 1.1.5 line 1; exogenous, projections from CBO, The Budget and Economic Outlook: An Update (Washington, D.C.: August 2007), Table C-1. GDPR = Gross domestic product, billions of chained 2000 dollars; U.S. Department of Commerce, Bureau of Economic Analysis, NIPA Table 1.1.6 line 1; exogenous, projections from CBO, The Budget and Economic Outlook: An Update (Washington, D.C.: August 2007), Table C-1. 
GFAIDSL = Federal grants-in-aid to state and local governments, billions of dollars; U.S. Department of Commerce, Bureau of Economic Analysis, NIPA Table 3.3 line 17; exogenous, projections from CBO, The Treatment of Federal Receipts and Expenditures in the National Income and Product Accounts (Washington, D.C.: August 2007) Table 2; interpolated from fiscal to calendar year by GAO. GFAIDSLO = Federal non-Medicaid grants to state and local governments, billions of dollars; calculated by GAO using U.S. Department of Commerce data as the difference between GFAIDSL and GFAIDSLSSMED; exogenous projection values calculated in the same way. GFAIDSLSSMED = Federal Medicaid grants, billions of dollars; U.S. Department of Commerce, Bureau of Economic Analysis, NIPA Table 3.21U line 12; exogenous, projections from CBO, The Budget and Economic Outlook: An Update (Washington, D.C.: August 2007) Table 1-4 and CBO, The Long-Term Budget Outlook (Washington, D.C.: December 2005), p. 31 and http://www.cbo.gov/ftpdocs/69xx/doc6982/Data.xls. GSLC = Total consumption expenditures of state and local governments, billions of dollars; U.S. Department of Commerce, Bureau of Economic Analysis, NIPA Table 3.3 line 22. GSLCHLTH = State and local government health benefit contributions, billions of dollars; U.S. Department of Health and Human Services, Agency for Healthcare Research and Quality, Medical Expenditure Panel Survey. GSLCKF = Consumption of general government fixed capital, billions of dollars; U.S. Department of Commerce, Bureau of Economic Analysis, NIPA table 3.10.5 line 51. GSLCKFALL = Consumption of fixed capital, billions of dollars; U.S. Department of Commerce, Bureau of Economic Analysis, NIPA table 3.3 line 38. GSLCKFALLR= Consumption of fixed capital, billions of 2000 dollars; GAO calculation for years through 2006 = 2000 nominal value from U.S. 
Department of Commerce, Bureau of Economic Analysis, NIPA Fixed Asset Table 7.3B line 46 times the relevant year’s quantity index/100 from NIPA Fixed Asset Table 7.4B line 46. GSLCKFALL = Government consumption of fixed capital, billions of dollars; U.S. Department of Commerce, Bureau of Economic Analysis, NIPA table 3.3 line 38. GSLCO = State and local consumption excluding employee compensation and capital consumption, billions of dollars; U.S. Department of Commerce, Bureau of Economic Analysis, calculated by GAO as GSLCO = GSLC-GSLCWSS-GSLCKF. GSLCOTHBEN = Other general state and local government employee compensation, billions of dollars; calculated by GAO from U.S. Department of Commerce, Bureau of Economic Analysis, total compensation less the sum of wages and salary accruals, pension contributions and health benefits (GSLCOTHBEN= GSLCWSS-GSLCWAGE-GSLCPEN-GSLCHLTH). GSLCPEN = State and local government contribution for general government employees, billions of dollars; U.S. Department of Commerce, Bureau of Economic Analysis, calculated by GAO based on NIPA tables 7.8 and 6.3A, 6.3B, 6.3C and 6.3D. GSLCPENALL = State and local government contribution for general and enterprise employees, billions of dollars; U.S. Department of Commerce, Bureau of Economic Analysis, NIPA table 7.8 Line 10. GSLCWSS = Total compensation for state and local government employees, billions of dollars; U.S. Department of Commerce, Bureau of Economic Analysis, NIPA table 3.10.5 line 50. GSLCWAGE = State and local wages for general government employees, billions of dollars; U.S. Department of Commerce, Bureau of Economics Analysis, NIPA table 6.3A, 6.3B, 6.3C Line 83 and 6.3D Line 93. GSLCWAGEALL = Total state and local wages for general government and government enterprise employees, billions of dollars; U.S. Department of Commerce, Bureau of Economics Analysis, NIPA table 6.3A, 6.3B, 6.3C Line 82 and 6.3D Line 92. 
GSLCWAGEALLR = Total state and local wages for general and enterprise government employees deflated by the consumer price index, billions of 2006 dollars; calculated by GAO. GSLEXP = Total expenditures of state and local governments, billions of dollars; U.S. Department of Commerce, Bureau of Economic Analysis, NIPA Table 3.3 line 33. GSLEXPC = Total current expenditures of state and local governments, billions of dollars; U.S. Department of Commerce, Bureau of Economic Analysis, NIPA Table 3.3 line 21. GSLGI = Gross investment of state and local governments, billions of dollars; U.S. Department of Commerce, Bureau of Economic Analysis, NIPA table 3.3 line 35. GSLGIR = Gross investment of state and local governments, billions of chained 2000 dollars; U.S. Department of Commerce, Bureau of Economic Analysis, NIPA table 3.9.6 line 23. GSLINTPAY = Interest paid by state and local governments, billions of dollars; U.S. Department of Commerce, Bureau of Economic Analysis, NIPA Table 3.3 line 24 GSLNETPCHNA = Net purchases of non-produced assets by state and local governments, billions of dollars; U.S. Department of Commerce, Bureau of Economic Analysis, NIPA Table 3.3 line 37. GSLRCPT = Total receipts of state and local governments, billions of dollars; U.S. Department of Commerce, Bureau of Economic Analysis, NIPA Table 3.3 line 30 GSLRCPTC = Total current receipts of state and local governments, billions of dollars; U.S. Department of Commerce, Bureau of Economic Analysis, NIPA Table 3.3 line 1. GSLRCPTKTRF= Capital transfers received (net), state and local governments, billions of dollars; U.S. Department of Commerce, Bureau of Economic Analysis, NIPA Table 3.3 line 32. HIMEDRATIO = Federal Medicaid spending as a percentage of GDP, high cost scenario, percentage; CBO, The Long-Term Budget Outlook (Washington, D.C.: December 2005) http://www.cbo.gov/ftpdocs/69xx/doc6982/Data.xls scenario 1. 
HLTHNHEEXCGR = Multiplier reflecting the difference between growth in National Health Expenditures Spending per capita and growth in GDP per capita; GAO Analysis. IGRANT = Federal investment grants to state and local governments, billions of dollars; U.S. Department of Commerce, Bureau of Economic Analysis, NIPA Table 5.10U line 9. IGRANTCBO = Federal capital transfers; exogenous projections, billions of dollars; CBO, The Treatment of Federal Receipts and Expenditures in the National Income and Product Accounts (Washington, D.C.: August 2007) Table 1; interpolated from fiscal to calendar year by GAO. JECIWSP = Employment cost index – private wages and salaries, 2005Q4=100.0; BLS Employment Cost Index Historical Index Table 6; exogenous, projections from CBO, The Budget and Economic Outlook: An Update (Washington, D.C.: August 2007), Table C-1. JECISTLC=Employment cost index for state and local workers; 2005Q4=100.0, index; BLS Employment Cost Index Historical Index Table 6 (ftp://ftp.bls.gov/pub/suppl/eci.echistry.txt). JECISTLCR=Employment cost index for state and local workers, deflated by the CPIU, index; calculated by GAO. JPGDP = Chained price index - gross domestic product, index 2000=100; U.S. Department of Commerce, Bureau of Economic Analysis, NIPA table 1.1.4 line 1; exogenous, projections from The Budget and Economic Outlook: An Update (Washington, D.C.: August 2007), Table C-1. KGSLR = Net capital stock of state and local governments, billions of 2000 dollars; U.S. Department of Commerce, Bureau of Economic Analysis, Fixed Asset Summary Table 9.1 line 21. L1TOTALFA = Total state and local government employee retirement fund assets, billions of dollars; Board of Governors of the Federal Reserve System, Flow of Funds Table L.119 line 1. 
L1TOTALFALYACT = Total state and local government employee retirement fund assets for the last year that actuals are available, billions of dollars; Board of Governors of the Federal Reserve System, Flow of Funds Table L.119 line 1. LOWMEDRATIO = Federal Medicaid spending as a percentage of GDP low cost scenario, percentage; CBO, The Long-Term Budget Outlook (Washington, D.C.: December 2005) http://www.cbo.gov/ftpdocs/69xx/doc6982/Data.xls scenario 3. LYACTUAL = last year actual data are available, 2006 LYFCST = last year of the forecast period, 2080. NETASSETPAY = Interest payments less receipts on assets, billions of dollars; calculated by GAO as GSLINTPAY-YGSLA. NETLENDGSL = Net lending or net borrowing (-) of state and local governments, billions of dollars; U.S. Department of Commerce, Bureau of Economic Analysis, NIPA Table 3.3 line 39. NETSAVGSL = State and local government net saving, billions of dollars; U.S. Department of Commerce, Bureau of Economic Analysis, NIPA Table 3.3 line 27. NETSIGSL = Net social insurance fund balance, billions of dollars; U.S. Department of Commerce, Bureau of Economic Analysis, NIPA Table 3.3 Line 28. NP = Total population, thousands; exogenous projections 2007 OASDI Trustees Report, Table V.A2.-Social Security Area Population. OPBALNETCAP= GAO’s measure of the operating balance, excludes receipts used to acquire capital as well as capital-related expenditures; the balance also excludes the surplus/deficit of government enterprises and the net balance of social insurance funds, billions of dollars. PENBEN= Aggregate pension payments made to state and local pension beneficiaries, billions of dollars; U.S. Department of Commerce, Bureau of Economic Analysis, NIPA table 6.11A, 6.11B, and 6.11C line 41 and 6.11D line 43. PENBENR= Aggregate pension payments made to state and local pension beneficiaries deflated by the consumer price index, billions of 2000 dollars, calculated by GAO. 
PVEECONPENR = Present value of EECONPENR using RPENREAL for each year; calculated by GAO. PVGSLCWAGEALLR = Present value of GSLCWAGEALLR using RPENREAL for each year; calculated by GAO. PVPENBENR = Present value of PENBENR using RPENREAL for each year; calculated by GAO. RATEASSETS = Effective rate received on state and local government financial assets, interest rate; historical values calculated by GAO as RATEASSETS = 100*YGSLA / SLG_AFINL(-1). RATEOWED = Effective rate paid on state and local government credit market instruments outstanding, interest rate; historical values calculated by GAO as RATEOWED = 100*GSLINTPAY / SLG_LCRED(-1). REST_ALT = Market value of real estate and other property outstanding excluding business equipment at the end of the period, billions of dollars; Board of Governors of the Federal Reserve System, Flow of Funds Table B.100 line 4 plus B.102 line 3 plus B.103 line 3. RESTR_ALT = Real market value of real estate and other property outstanding excluding business equipment at the end of the period, billions of chained 2000 dollars, calculated by GAO. RESTR_ALT_RESID = Residual from the real estate tax base equation, calculated by GAO. RETGSLCHLTH = State and local government health care contributions for retired employees, billions of dollars; U.S. Department of Health and Human Services, Agency for Healthcare Research and Quality, Medical Expenditure Panel Survey. RETHLTH = State and local government retirees receiving healthcare benefits, thousands; U.S. Department of Health and Human Services, Agency for Healthcare Research and Quality, Medical Expenditure Panel Survey. RETHLTHPERBEN = State and local government health care retired enrollees as a percentage of beneficiaries; calculated by GAO. RMMUNIAAA = Rate on Aaa-rated municipal bonds, percent per annum; Board of Governors of the Federal Reserve System, Statistical Release H.15: Selected Interest Rates http://www.federalreserve.gov/releases/h15/data.htm. 
RMMUNIAAA_RESID = Residual from the municipal rate equation, calculated by GAO. RMTBM3= Yield on 3 month treasury bill, percent per annum; Board of Governors of the Federal Reserve System, Statistical Release H.15: Selected Interest Rates http://www.federalreserve.gov/releases/h15/data.htm. Exogenous, projections from CBO, The Budget and Economic Outlook: An Update (Washington, D.C.: August 2007), Table C-1. RMTCM10Y= Yield on 10-year Treasury notes, percent per annum; Board of Governors of the Federal Reserve System, Statistical Release H.15: Selected Interest Rates http://www.federalreserve.gov/releases/h15/data.htm; exogenous, projections from CBO, The Budget and Economic Outlook: An Update (Washington, D.C.: August 2007), Table C-1. RPENREAL = Real return on pension assets, interest rate; calculated by GAO as the sum of the product of the average real return from 1965 through 2005 (from Federal Reserve Board H.15 and other sources) on each retirement fund asset category (Flow of Funds table L.119) and each asset category’s average share of assets over the last ten years = 5.0%. SLG_AFINL = Total financial assets of state and local governments, billions of dollars; Board of Governors of the Federal Reserve System, Flow of Funds Table L.105 line 1. SLG_AFINLSWITCH = One if assets grow slower than GDP when debt grows with econometric estimations, zero otherwise, dummy variable; calculated by GAO. SLG_LCRED = Credit market instrument liabilities of state and local governments, billions of dollars; Board of Governors of the Federal Reserve System, Flow of Funds Table L.105 line 19. SLG_LFINL = Total liabilities of state and local governments, billions of dollars; Board of Governors of the Federal Reserve System, Flow of Funds Table L.105 line 18. SUBGSL = State and local government subsidy payments, billions of dollars; U.S. Department of Commerce, Bureau of Economic Analysis, NIPA Table 3.3 line 25. 
SURGSLE = Current surplus of state and local government enterprises, billions of dollars; U.S. Department of Commerce, Bureau of Economic Analysis, NIPA Table 3.3 line 20. TPVEECONPENR = Running sum of PVEECONPENR, billions of discounted dollars; calculated by GAO. TPVGSLCPEN = Unfunded pension liability through a given year, billions of discounted dollars; calculated by GAO. TPVGSLCWAGEALLR = Running sum of PVGSLCWAGEALLR, billions of discounted dollars; calculated by GAO. TPVPENBENR = Running sum of PVPENBENR, billions of discounted dollars; calculated by GAO. TRADEPAYABLES = Trade payables of state and local governments outstanding, billions of dollars; Board of Governors of the Federal Reserve System, Flow of Funds Table L.105 line 24. TXCORPGSL = Taxes on corporate income, state and local governments, billions of dollars; U.S. Department of Commerce, Bureau of Economic Analysis, NIPA Table 3.3 line 10. TXGSL = Current tax receipts, state and local governments, billions of dollars; U.S. Department of Commerce, Bureau of Economic Analysis, NIPA Table 3.3 line 2. TXIMGSL = Taxes on production and imports, state and local governments, billions of dollars; U.S. Department of Commerce, Bureau of Economic Analysis, NIPA Table 3.3, line 6. TXIMGSLO = Other taxes on production and imports, state and local governments, billions of dollars; U.S. Department of Commerce, Bureau of Economic Analysis, NIPA Table 3.3, line 9. TXIMGSLPROP = Property tax receipts, state and local governments, billions of dollars; U.S. Department of Commerce, Bureau of Economic Analysis, NIPA Table 3.3, line 8. TXIMGSLS = Sales tax receipts, state and local governments, billions of dollars; U.S. Department of Commerce, Bureau of Economic Analysis, NIPA Table 3.3, line 7. TXIMGSLSGEN = General sales tax receipts, state and local governments, billions of dollars; U.S. Department of Commerce, Bureau of Economic Analysis, NIPA Table 3.5, lines 16 and 24. 
TXIMGSLSOTH = Other sales tax receipts, state and local governments, billions of dollars; U.S. Department of Commerce, Bureau of Economic Analysis, NIPA Table 3.5, line 14 less lines 16 and 24.
TXIMGSLSOTH_RESID = Residual from the other sales tax receipts equation; calculated by GAO.
TXPGLOCAL = Local personal income tax receipts, billions of dollars; U.S. Department of Commerce, Bureau of Economic Analysis, NIPA Table 3.21 line 4.
TXPGSL = Personal tax receipts, state and local governments, billions of dollars; U.S. Department of Commerce, Bureau of Economic Analysis, NIPA Table 3.3 line 3.
TXPGSLINC = Personal income tax receipts, state and local governments, billions of dollars; U.S. Department of Commerce, Bureau of Economic Analysis, NIPA Table 3.3 line 4.
TXPGSLO = Other personal tax receipts, state and local governments, billions of dollars; U.S. Department of Commerce, Bureau of Economic Analysis, NIPA Table 3.3 line 5.
TXPGSTATE = State personal income tax receipts, billions of dollars; U.S. Department of Commerce, Bureau of Economic Analysis, NIPA Table 3.20 line 4.
TXPGSTATE_RESID = Residual from the state personal income tax receipts equation; calculated by GAO.
TXSIGSL = Contributions for government social insurance, state and local governments, billions of dollars; U.S. Department of Commerce, Bureau of Economic Analysis, NIPA Table 3.3 line 11.
WALDGSL = Wage accruals less disbursements, state and local government, billions of dollars; U.S. Department of Commerce, Bureau of Economic Analysis, NIPA Table 3.3 line 26.
WC = Weight of current beneficiaries: last year’s beneficiaries as a share of current year beneficiaries, percentage; calculated by GAO.
WD = Weight of deceased beneficiaries: number of deceased beneficiaries as a share of current year beneficiaries, percentage; calculated by GAO.
WJECISTLCR = Weighted real state and local employment cost index; serves as a proxy for the growth in the average pension benefit, index; calculated by GAO.
WN = Weight of new beneficiaries: new beneficiaries as a share of total beneficiaries, percentage; calculated by GAO.
YGSLA = Income receipts on assets, state and local governments, billions of dollars; U.S. Department of Commerce, Bureau of Economic Analysis, NIPA Table 3.3 line 12.
YGSLTRF = Current transfer receipts, state and local governments, billions of dollars; U.S. Department of Commerce, Bureau of Economic Analysis, NIPA Table 3.3 line 16.
YGSLTRFBUS = Current transfer receipts from businesses, state and local governments, billions of dollars; U.S. Department of Commerce, Bureau of Economic Analysis, NIPA Table 3.3 line 18.
YGSLTRFP = Current transfer receipts from persons, state and local governments, billions of dollars; U.S. Department of Commerce, Bureau of Economic Analysis, NIPA Table 3.3 line 19.
YPCOMPWSD = Wage and salary disbursements, billions of dollars; U.S. Department of Commerce, Bureau of Economic Analysis, NIPA Table 2.1 line 3; exogenous, projections from CBO, The Budget and Economic Outlook: An Update (Washington, D.C.: August 2007), Table C-1.
YPCOMPWSDR = Real wage and salary disbursements, billions of chained 2000 dollars; calculated by GAO.
YPTAXABLE = Taxable personal income, billions of dollars; calculated by GAO as wage and salary disbursements + dividends + interest + proprietors’ income + rental income, from U.S. Department of Commerce, Bureau of Economic Analysis, NIPA Table 2.1 lines 9, 12, 14, and 15; exogenous, projections from CBO, The Budget and Economic Outlook.
YPTRFGSL = State and local social benefit payments to individuals, billions of dollars; U.S. Department of Commerce, Bureau of Economic Analysis, NIPA Table 3.3 line 23.
YPTRFGSLPAM = State and local medical spending on behalf of individuals, billions of dollars; U.S. Department of Commerce, Bureau of Economic Analysis, NIPA Table 3.12 line 32.
YPTRFGSLPAO = State and local non-medical social benefit payments to individuals, billions of dollars; calculated by GAO as YPTRFGSL – YPTRFGSLPAM.
ZB = Before-tax corporate profits excluding IVA, billions of dollars; U.S. Department of Commerce, Bureau of Economic Analysis, NIPA Table 1.12 line 44; exogenous, projections from CBO, The Budget and Economic Outlook: An Update (Washington, D.C.: August 2007), Table C-1.
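Several of the glossary entries above describe simple arithmetic rather than econometric estimation. The RPENREAL entry, for example, is a weighted average of long-run real returns across retirement-fund asset categories. The sketch below illustrates that calculation; the asset categories, returns, and shares are hypothetical placeholders (the report states only that the resulting rate is 5.0 percent).

```python
# Sketch of the RPENREAL-style calculation: a weighted average of
# long-run real returns across retirement-fund asset categories.
# The category names, returns, and shares below are ILLUSTRATIVE ONLY;
# the report reports only the final result (5.0 percent).

def weighted_real_return(categories):
    """Sum over categories of (average real return x share of assets)."""
    return sum(real_return * share for real_return, share in categories)

# (average real return, average share of assets) pairs -- hypothetical values
example = [
    (0.070, 0.60),  # e.g., corporate equities
    (0.030, 0.30),  # e.g., credit market instruments
    (0.010, 0.10),  # e.g., cash and deposits
]

rate = weighted_real_return(example)
print(f"{rate:.3f}")  # weighted real return on pension assets
```

The same pattern generalizes to any number of asset categories, provided the shares sum to one.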
State and local governments provide an array of services to their residents, such as primary and secondary education, libraries, police and fire services, social programs, roads and other infrastructure, and public colleges and universities. These subnational governments may face fiscal stress similar to that facing the federal government. Given the nature of the partnership among levels of government in providing services to Americans and the economic interrelationships among levels of government, understanding potential future fiscal conditions of the state and local government sector is important for federal policymaking. To provide Congress and the public with this broader context, we developed a fiscal model of the state and local sector. This report describes this model and provides (1) simulations of the state and local government sector's long-term fiscal outlook, (2) an analysis of the underlying causes of potential fiscal difficulties for the sector, (3) a discussion of the extent to which the long-term simulations are sensitive to alternative assumptions, and (4) an examination of how the state and local government sector could add to future federal fiscal challenges. The potential fiscal outcomes of the state and local government sector are projected through two fiscal balance measures: net lending or borrowing and what we call the operating balance. Net lending or borrowing--which is roughly analogous to the federal unified surplus or deficit--is a measure of the balance of all receipts and expenditures during a given time frame. Historically, total expenditures have usually exceeded total receipts, and the sector issues debt to cover part of the costs of its capital projects. As such, net lending or borrowing typically measures the need for the sector to borrow funds or draw down assets to cover its expenditures. 
The operating balance net of funds for capital expenditures--referred to in this report as the "operating balance"--is a measure of the ability of the sector to cover its current expenditures out of current receipts, that is, the balance of expenditures and receipts related to activities taking place in a given year. Most states have some sort of requirement to balance operating budgets. Projects with longer time frames are typically budgeted separately from the operating budgets and financed by a combination of current receipts, federal grants, and the issuance of debt. Because some current receipts may be used to fund part of longer-term investments, we developed a measure of the operating balance that makes adjustments for the extent to which current receipts are unavailable to fund current expenditures because they have been spent on longer-term projects, such as investments in buildings and roads. Our model shows that in less than a decade the state and local government sector will begin to face growing fiscal challenges. Both fiscal balance measures--(1) net lending or borrowing and (2) the operating balance--are likely to remain within their historical ranges in the next few years, but both begin to decline thereafter and fall below their historical ranges within a decade. That is, absent policy changes, state and local governments will face an increasing gap between receipts and expenditures in the coming years. Since most state and local governments actually face requirements that their operating budgets be balanced or nearly balanced in most years, the declining fiscal conditions our simulations suggest really foreshadow the extent to which these governments will need to make substantial policy changes to avoid these potential growing fiscal imbalances. As is true for the federal sector, the growth in health-related expenditures is the primary driver of the fiscal challenges facing the state and local government sector. 
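The two balance measures described above can be expressed as simple accounting identities. The sketch below is a simplified illustration; the function names, the form of the capital-funding adjustment, and the dollar figures are assumptions for exposition, not the model's actual accounting.

```python
# Simplified sketch of the report's two fiscal balance measures.
# The variable names, the capital-funding adjustment, and the figures
# are assumptions for illustration, not the model's actual identities.

def net_lending_or_borrowing(total_receipts, total_expenditures):
    # Balance of ALL receipts and expenditures in a period
    # (roughly analogous to the federal unified surplus or deficit).
    return total_receipts - total_expenditures

def operating_balance(current_receipts, current_expenditures,
                      receipts_used_for_capital):
    # Ability to cover current expenditures out of current receipts,
    # net of receipts diverted to fund longer-term capital projects.
    return (current_receipts - receipts_used_for_capital) - current_expenditures

# Hypothetical sector aggregates, billions of dollars
print(net_lending_or_borrowing(1900.0, 1950.0))  # negative: sector must borrow
print(operating_balance(1800.0, 1700.0, 60.0))   # positive operating balance
```

In this toy example the sector runs a positive operating balance while still being a net borrower, which mirrors the report's observation that the sector routinely issues debt to finance capital projects even when current budgets are balanced.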
In particular, two types of state and local expenditures will likely rise quickly. The first is Medicaid expenditures, and the second is expenditures by these governments for health insurance for state and local employees and retirees. Conversely, other types of expenditures of state and local governments in the aggregate--such as wages and salaries of state and local workers, nonhealth transfer payments (e.g., family assistance), and investments in capital goods--are assumed to grow more slowly than gross domestic product (GDP). Moreover, under the current policy scenario of the base case, most revenue categories grow at approximately the same rate as GDP. Therefore, the projected rise in health-related expenditures is the root of the fiscal difficulties these simulations suggest will occur. Although health care expenditures clearly appear to be a looming problem for the state and local government sector, the extent of fiscal difficulties faced by any given state or local government will vary with its individual expenditure and tax profile. We also used the model to examine how the fiscal balance measures would be affected over the long term under assumptions that differed from those of our base case. In particular, we analyzed scenarios that differ across three factors: (1) the rate of growth in tax receipts, (2) the rate of growth in expenditures, and (3) the rate of growth in medical care expenditures. Some of the alternative scenarios were designed to examine the extent to which a change in base-case assumptions for any of these factors would enable the state and local government sector to maintain fiscal balances in their historical ranges. We found that it would be difficult to address the expected future fiscal deficits solely through tax increases or solely through expenditure cuts. Since 1992, we have produced long-term simulations of what might happen to federal deficits and debt under various policy scenarios. 
Our most recent long-term federal simulations show ever larger deficits resulting in a very large and growing federal debt burden over time.4 In that work, we found that federal fiscal difficulties stem primarily from an expected explosion of health-related expenditures. Our findings thus show that the state and local sector will provide an additional drag on an already declining federal government fiscal outlook and that the critical problem of escalating costs of health care is an economywide problem that will need to be addressed by all levels of government.
The FCS concept is designed to be part of the Army’s Future Force, which is intended to transform the Army into a more rapidly deployable and responsive force that differs substantially from the large division-centric structure of the past. The FCS family of weapons is now expected to include 14 manned and unmanned ground vehicles, air vehicles, sensors, and munitions that will be linked by an advanced information network. Fundamentally, the FCS concept is to replace mass with superior information—allowing soldiers to see and hit the enemy first rather than to rely on heavy armor to withstand a hit. The Army envisions a new way of fighting that depends on networking the force, which involves linking people, platforms, weapons, and sensors seamlessly together in a system of systems. Within the FCS program, eight types of manned ground vehicles are being developed, each having a common engine, chassis, and other components. One of the other common components is a hit avoidance system that features a set of capabilities to detect, avoid, and/or defeat threats against the manned ground vehicles. One of its subsystems is the APS, which is intended to protect a vehicle from attack by detecting an incoming round or rocket-propelled grenade and launching an interceptor round from the vehicle to destroy the incoming weapon. An APS consists of a radar to detect the incoming weapon, a launcher, an interceptor or missile, and a computing system. The Army has employed a management approach for FCS that centers on a lead systems integrator to provide significant management services to help the Army define and develop FCS and reach across traditional Army mission areas. Boeing, along with its subcontractor, the Science Applications International Corporation (SAIC), serves as the lead systems integrator for the FCS system development and demonstration phase of acquisition, which is expected to extend until 2014. 
The lead systems integrator has a close partner-like relationship with the Army, and its responsibilities include requirements development, design, and source selection of major system and subsystem subcontractors. In the case of APS, the first-tier subcontractors are the manned ground vehicle integrators, BAE and General Dynamics Land Systems, who are responsible for developing individual systems. BAE was designated the hit avoidance integrator, a role that covers more than active protection, and was responsible for awarding the subcontract to the APS developer. This subcontract has three elements: a base contract, option A to support the current force (the short-range solution), and option B to support the FCS manned ground vehicles (short- and long-range solution). Figure 1 illustrates these relationships. A separate initiative involving active protection resulted from a Joint Urgent Operational Needs Statement, issued by Central Command and the Multi-National Corps in Iraq in April 2005, which requested 14 specially equipped vehicles with a host of distinctive capabilities, one of which was an APS. The need statement called for a capability to field a combination of near-term technologies that would be useful in conducting force protection missions, reconnaissance, and crowd control in Iraq, and an evaluation of an active protection capability against rocket-propelled grenades as part of this suite of capabilities. To respond to this need statement, the Joint Rapid Acquisition Cell, a group within the Office of the Secretary of Defense (OSD) that seeks solutions to urgent needs and focuses on near-term or off-the-shelf equipment to meet these needs, provided funding to the Army, which worked with the Office of Force Transformation (OFT) to evaluate various technologies, including an APS, for inclusion on the vehicles. The OFT was also an office within the OSD, and its role was to examine unanticipated needs and experiment with innovative technologies that could be used to meet warfighter needs. 
The process for evaluating APS sources and concepts to meet FCS needs and the process for meeting the urgent needs of the Central Command occurred nearly simultaneously, as shown in figure 3. As the figure shows, many events took place at the same time. The lead systems integrator for FCS completed its subcontractor selection for APS shortly before decisions were made on the near-term system being considered to meet the Central Command need. The Trophy system was evaluated as a candidate system in both processes. In choosing the developer for the APS system, the FCS lead systems integrator, with Army support and concurrence, conducted a source selection and followed the FCS lead systems integrator subcontract provisions for avoiding organizational conflicts of interest. The purpose was to select the subcontractor for the APS that would be best able to develop the overall APS architecture to address the FCS requirements to defeat the short- and long-range antiarmor threats as well as meet the current force needs for defeating short-range rocket-propelled grenade attacks. The subcontractor selected would support the hit avoidance integrator in integrating APS technology into the FCS manned ground vehicles and also apply this architecture to the Army’s current force. The contract included two options that were to supply the specific design for the APS system: Option A, for the short-range APS for the current force, and Option B, for the short- and long-range solution for the FCS. These options would be awarded later, based on the results of trade studies subsequently performed. To protect against organizational conflicts of interest, contracts between the FCS lead systems integrator and its subcontractors preclude a subcontractor from conducting or participating in a source selection for other FCS subcontracts if any part of its organization submits a proposal. 
Under normal circumstances, since the APS would be part of the hit avoidance system of the FCS manned ground vehicles, the hit avoidance integrator, BAE, would have had the primary responsibility to issue the requests for proposals, conduct the source selection evaluation, and award the contract. In this capacity, BAE issued a draft request for proposals for the APS in April 2005. When the firm subsequently decided to submit a proposal on the APS subcontract, it was required, under the FCS lead systems integrator subcontract organizational conflict of interest provisions, to notify the lead systems integrator, Boeing, of its intention. BAE did so, and the lead systems integrator reissued the request for proposals for APS in September 2005 and assumed the source selection responsibilities. BAE submitted its proposal but then had no further role in the evaluation of proposals or the actual source selection. After the source selection was complete, the lead systems integrator transferred contract responsibility to BAE, and BAE assumed the responsibility for awarding and administering the APS contract. Our review of the documentation from the APS source selection process shows that (1) no officials from the offering companies participated in the source selection process, and (2) all offerors were evaluated based on the same criteria contained in the request for proposals. In response to this request for proposals, four proposals were received. Three proposals were considered competitive, while the fourth was eliminated from consideration because it was rated “unsatisfactory” in technical merit and its architectural approach did not meet the requirements. Proposals from the remaining three companies—BAE, Raytheon, and General Dynamics Land Systems—were evaluated in the source selection process, and no officials from these companies were on the evaluating or selecting teams. 
The source selection evaluation team consisted of 53 members, with 27 lead systems integrator representatives and 26 government representatives, including personnel from the FCS program manager’s office, Army research centers, and the Defense Contract Management Agency. After evaluating each of the proposals against the criteria spelled out in the request for proposals, the source selection evaluation team made its recommendation to the lead systems integrator source selection executive, who accepted its recommendation. Our review of the documentation shows that the criteria were ranked in order of importance, with technical merit considered most important, then cost, management/schedule, and finally past performance. The technical merit criteria were divided into six sub-factors: systems engineering and architecture; expertise in APS technologies; simulation, modeling, and test; fratricide and collateral damage; specialty engineering; and integration capability. Cost criteria were based on the realism, reasonableness, completeness, and affordability of the proposal. Management/schedule criteria included such areas as expertise and experience in key positions. The past performance risk rating category was based on whether the respondents’ past performance raised doubts about their being able to perform the contract. Since all three proposals were deemed comparable in the areas of cost, management/schedule, and past performance, the primary discriminating factor became technical merit. According to the evaluation documentation, the technical merit scores were assessed based on whether the proposal demonstrated that the contractor understood the requirements and on its approach to meeting these requirements in each of the six technical merit sub-factors. Also, part of the technical score was a proposal risk evaluation, defined as the degree to which any proposal weaknesses could cause disruption of schedule, increase in cost, or degradation in performance. 
While the source selection’s stated purpose was to choose the company best able to develop the APS and not a specific design, each proposal used a specific APS system as an “artifact” to illustrate how the offeror intended to meet the requirements. Even though, in theory, one company could have been chosen as the APS developer while another company’s preferred design could have been selected for development, much of the source selection assessment of technical merit was based on the “artifact” used for illustration. For example, in the technical merit category of APS expertise, the source selection evaluation of Raytheon states that “the vertical launch concept solves several design and integration problems.” Similarly, the BAE evaluation in the criteria of APS expertise states that “the proposed long-range countermeasure…design has effectiveness against the full spectrum of threats.” The General Dynamics Land Systems evaluation discusses the relatively high technology readiness level (TRL) of the “proposed Trophy system.” Therefore, while each company’s proposed solution was not the only aspect of the proposals to be evaluated, the evaluation documentation shows that the technical merit category was a key factor in the evaluation. The source selection evaluation team decided that the BAE and Raytheon proposals had the highest technical merit. BAE had a lower-risk approach and its solution had been tested in a relevant environment; however, the source selection evaluation team stated that this low-risk approach could prevent BAE from considering higher-risk options that would enable it to meet the full range of the performance requirements, such as protection from top-attack weapons. In addition, the source selection evaluation team determined that, while both Raytheon and BAE could develop the design presented in the BAE proposal, Raytheon would have the advantage if the vertical launch design was chosen. 
The evaluation team concluded that the Raytheon approach would have the best chance of meeting all the requirements. Based on the team’s recommendation, the lead systems integrator selected Raytheon. The integrator accepted the higher risk because it concluded that the Raytheon proposal had excellent technical merit and the firm would be better able to develop the vertical launch technology, if that were the design decided upon in the trade study. The APS development contract required the winner of the source selection to perform a trade study identifying and assessing competing APS alternatives. The trade study used a methodology consistent with Army guidance to evaluate all alternatives, ultimately selecting Raytheon’s vertical launch as the best design. According to the Army and the lead systems integrator, conducting the trade study after choosing the APS subcontractor could have resulted in selecting a different concept than Raytheon’s vertical launch design. However, in our view, this possibility appears remote given the selection of Raytheon as APS developer was based largely on the technical merits of its vertical launch design and the fact that it would be best able to develop that design. The development contract’s terms required the source selection winner to perform a trade study that would identify and assess APS alternatives and select an APS design from among competing alternatives. Therefore, once Raytheon won the development contract in March 2006, it was required to conduct the trade study rather than simply develop its own design. Since the trade study was not a source selection, FAR contract provisions regarding organizational conflicts of interest did not apply and Raytheon was free to participate in the study as the responsible contractor. 
The trade study’s specific objective was to choose a single short-range APS architecture (launcher and interceptor) that best met active protection requirements for FCS manned ground vehicles, with consideration for application to the current force. The study was conducted in May 2006, and Raytheon’s vertical launch concept was selected as the design. Based on the trade study documentation, the study was conducted using a methodology prescribed by Army guidance, and this methodology was applied consistently to all APS alternatives. Seven alternatives survived a screening process and were then evaluated against a set of weighted criteria. The study concluded that Raytheon’s vertical launch was the best design approach. According to general Army guidance for trade studies, steps in the trade study process should include such elements as incorporating stakeholders, identifying assumptions, determining criteria, identifying alternatives, and conducting comparative analyses. The APS trade study process consistently applied such methodology to all APS alternatives by using separate, independent roles for a technical team and stakeholders; operating under a set of assumptions; using validated, protected technical data on each alternative; having a screening process to filter out nonviable alternatives; and using a set of weighted criteria to assess alternatives that survived the screening process. The trade study was performed by a technical team and stakeholders—each having separate roles and operating independently from one another. The technical team provided technical input and expertise to the stakeholders, who were the voting members of the study and made the final selection. The technical team, 21 members from industry and government as shown in table 1, included individuals who were subject matter experts as well as those from organizations participating in development of the short-range APS. 
Raytheon had 11 members on the technical team—the most from any single organization. The Army stated that this representation included administrators and observers and occurred because Raytheon had been designated APS developer, was thus required to conduct the trade study, and could gain knowledge from attending subject matter experts. The stakeholders made the final selection. The composition and number of stakeholders are shown in table 2. The stakeholders were program leads from the Army, lead systems integrator, and subcontractors responsible for integrating the FCS manned ground vehicles. According to the Army, Raytheon’s APS program manager was included as a stakeholder because Raytheon as developer had responsibility for developing the design chosen by the trade study process. The technical team and stakeholders operated the trade study under assumptions that set parameters for screening and evaluating each alternative. These assumptions were tied to such areas as performance and threat. Additionally, they conducted the study using data that was previously validated and remained protected throughout the study’s course. The primary source of the data was the Army Research, Development, and Engineering Command’s APS database, which contained data gathered and validated by the Command’s subordinate labs. This data was protected by third parties, including the Department of Energy’s Idaho National Lab, to ensure it was not changed during the study. The technical team used initial screening processes to eliminate four alternatives and identify seven viable alternatives for further assessment. The screening process filtered out the four alternatives that could not meet one or both of two criteria: (1) ability to grow to meet 360-degree hemispherical requirements, and (2) ability to be procured within a program schedule that would meet the need for prototype delivery of a short-range solution to the current force in fiscal year 2009. 
The seven alternatives that survived the screening process are shown in table 3, along with the respective government organizations and industry associated with each. The technical team assessed the seven alternatives against a set of five weighted criteria. According to the Army, these were the same top-level criteria mandated in all FCS trade studies, and their weights were assigned by FCS chief engineers. Table 4 defines each of the criteria and provides information on respective weights. The vertical launch concept scored highest in every category of criteria except risk. The Army indicated that the concept had about one-third better overall weighted performance than the other alternatives. Army officials described the vertical launch design as having technical advantages over the other alternatives—including the need for less space, weight, and power—as well as cost benefits. The Army and lead systems integrator officials told us that the trade study could have resulted in the selection of a design other than Raytheon’s. They also stated that, had this occurred, Raytheon as APS developer would have been required to develop this design rather than the vertical launch. While in theory the APS source selection chose a developer and the trade study chose the design to develop, in reality it is difficult to separate the trade study results and the source selection decision. In our view, in both the source selection and trade study, criteria related to technical aspects of the designs were deciding factors. Considering that the source selection evaluation relied on artifacts representing specific systems—and Raytheon won the source selection based in large part on the technical merit of its artifact—it seems unlikely that the APS trade study would have resulted in the selection of any system other than Raytheon’s vertical launch. 
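The trade study's two-stage structure, an initial screening followed by a weighted-criteria ranking of the survivors, can be sketched generically. In the sketch below, the criteria names, weights, raw scores, and alternative names are all hypothetical; the report does not disclose the actual FCS weights or scores, only that vertical launch scored highest in every category except risk.

```python
# Generic sketch of a screen-then-rank trade study like the one
# described above. All criteria names, weights, scores, and
# alternative names are HYPOTHETICAL illustrations.

def screen(alternatives, passes_screen):
    """First stage: drop alternatives that fail a screening criterion."""
    return [a for a in alternatives if passes_screen(a)]

def weighted_score(scores, weights):
    """Second stage: weighted sum of per-criterion scores (weights sum to 1)."""
    return sum(scores[c] * w for c, w in weights.items())

# Hypothetical weights assigned by the study's engineers
weights = {"performance": 0.40, "cost": 0.25, "schedule": 0.20, "risk": 0.15}

# Hypothetical raw scores on a 0-10 scale (higher is better, including
# for risk, where a higher score means lower risk)
candidates = {
    "Alternative A": {"performance": 9, "cost": 8, "schedule": 8, "risk": 5},
    "Alternative B": {"performance": 6, "cost": 7, "schedule": 7, "risk": 8},
}

# Screening criterion: must meet a minimum schedule score
survivors = screen(list(candidates), lambda name: candidates[name]["schedule"] >= 7)

ranked = sorted(survivors,
                key=lambda name: weighted_score(candidates[name], weights),
                reverse=True)
print(ranked[0])  # highest weighted score among survivors
```

Note how a low risk score need not be disqualifying once an alternative survives screening: as with the vertical launch concept, a candidate can win the overall ranking on the strength of the other weighted criteria.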
Although the trade study concluded that vertical launch was a high-payoff approach, it also noted that it was a high risk due to its low technology maturity. At the time of the trade study, as shown in table 5, the vertical launch was less technologically mature than the other alternatives except for one. The Army expects the design to reach TRL 6 (system model or prototype demonstration in a relevant environment) by August or September 2007. The Army expects the vertical launch concept to be available for prototype delivery to current force combat vehicles in fiscal year 2009 and for testing on a FCS vehicle in 2011. These estimates appear optimistic. At a TRL 5, the vertical launch will require additional technology development and demonstration before it is ready for either application. Also, the FCS vehicles have not been fully developed yet. Assuming all goes as planned, most FCS vehicle prototypes are expected to be available in 2011 for developmental testing. As we noted in our March 2007 report, the Army has in general been accepting significant risks with immature technologies for the FCS program, coupled with compressed schedules for testing and evaluating prototypes. The Army and the lead systems integrator were both extensively involved in preparing for and conducting the APS subcontractor selection and the trade study. Prior to the selection, FCS program officials assisted in APS requirements development and reviewed and approved the scope of work, schedule, and evaluation criteria for the request for proposals. After the proposals were received, FCS program officials, technical experts from various Army research centers, representatives of the Tank-Automotive and Armaments Command and the Training and Doctrine Command were active participants in the selection evaluation team and reviewed the proposals along with the lead systems integrator members. 
The Source Selection Advisory Council, which advises the Source Selection Executive, provided oversight of the evaluation team and also included representatives from the FCS program manager’s office and the Army research community. Similarly, Army FCS officials, as well as technical experts from Army research centers, were members of the trade study technical team and also concurred in the choice of the vertical launch concept. The co-lead of the trade study was an FCS official. The lead systems integrator’s office assumed responsibility for the selection process, was the selection executive, and made the final choice of an APS developer. In addition to its lead role in the APS subcontractor selection, the lead systems integrator was represented on the trade study technical team and was one of the stakeholders. As our previous body of work on the FCS program has shown, the Army’s participation in the APS subcontractor selection and trade study is consistent with the Army’s general approach to FCS. Army leadership set up the FCS program in such a way that it would create more competition and have more influence over the selection of suppliers below the lead systems integrator. In setting up FCS, Army leadership noted that traditionally, once the Army hired a prime contractor, that contractor would bring its own supplier chains. The Army was not very involved in the choice of the suppliers. In FCS, the Army called for the lead systems integrator to hold a competition for the next tier of contractors. The Army had veto power over these selections. In addition, the Army directed that the lead systems integrator employ integrators at lower levels in the program for high-cost items such as sensors and active protection systems, and the Army has been involved with these selections. These integrators were also to hold competitions to select suppliers for those systems. 
This strategy was designed to keep the first tier of contractors from bringing their own supplier chains and pushed competition and Army visibility down lower in the supplier chain. The fact that the decisions on the APS subcontractor selection and trade study lend themselves to after-the-fact examination is due in part to the Army’s focus on competition at lower supplier levels on FCS. The process followed by OFT to meet the urgent needs of the Central Command was characterized by a simpler evaluation of active protection systems with potential for near-term fielding, followed by actual physical testing of the APS candidate system that OFT considered most technically mature, the Trophy. The Army’s Program Manager’s Office for Close Combat Systems was also involved in this evaluation. While the testing of Trophy had a high success rate, the Joint Rapid Acquisition Cell decided to defer fielding the Trophy based, at least in part, on the recommendation of the Army that the testing was not realistic and that the Trophy’s integration on the platform would delay fielding of other useful capabilities. OFT officials did not agree with the Army’s position and thought the system’s success in testing indicated it should be further evaluated. To meet the Central Command’s need, OFT began an effort, the Full-Spectrum Effects Platform, to incorporate and test various improvements for potential application to existing military vehicles such as the Stryker. The platform itself is a modified Stryker vehicle. The program was divided into spirals: spiral 0 was to evaluate the synergy of the different systems, including the APS, on the vehicle and to compile lessons learned to aid in future concepts of operations, development, and integration. Spiral 1 was intended to field a limited number of such systems to current forces in-theater in 2007 for purposes of an operational assessment of the various capabilities. 
The Full Spectrum Effects Platform is not part of or associated with FCS. OFT, in association with the Naval Surface Warfare Center, evaluated six candidate APS systems. Army representatives from the Program Manager, Close Combat Systems, were also involved in this evaluation. The six candidate systems evaluated are shown in table 6. These systems were evaluated because OFT, Navy, and Army officials considered them to be the most promising APS solutions available within the required schedule. They evaluated each system based on such criteria as the feasibility of the operational concept, its cost and schedule factors, and its weight, size, and power requirements. Trophy was selected as the most promising system because it was the most technically mature and was being developed by the Israeli defense forces, which had done initial work to integrate it on a light armored vehicle. OFT subsequently sponsored tests of the Trophy APS as part of the Full-Spectrum Effects Platform at the Naval Surface Warfare Center in Dahlgren, Virginia. A representative from the Army’s Program Manager, Close Combat Systems, was part of the oversight team for these tests. In these test firings, the Trophy APS did well, destroying 35 of 38 incoming rocket-propelled grenades. However, the process for deciding how to proceed based on the test results was not agreed to in advance. A disagreement subsequently arose between OFT and the Army Close Combat Systems officials on how best to proceed from the testing. Although the tests were not designed to represent the Trophy’s capabilities in a realistic operational environment, OFT officials concluded that Trophy showed enough promise that they recommended continued testing to demonstrate its capabilities under various conditions. These officials estimated that an additional $13 million would cover the cost of this testing. 
They believed that Trophy could be integrated in the near term on existing light-armored vehicles and meet the urgent need for an immediate APS capability. The Army officials disagreed with OFT’s assessment that further testing of Trophy for inclusion on the Full Spectrum Effects Platform was justified. According to the Army officials, Trophy was not tested in a realistic environment for collateral damage or effectiveness. They believed that it would not be sufficiently tested for operational and safety issues within the time period required for the first spiral of the Full Spectrum Effects Platform. A delay in its integration on the platform would delay, by at least 6 to 14 months, demonstration of other potentially useful capabilities that could be immediately incorporated. Further, the Army estimated that it would take 5 years to integrate and field Trophy on other current force manned ground vehicles. The Army recommended to the Joint Rapid Acquisition Cell that the Trophy APS be excluded from spiral 1 of the Full-Spectrum Effects Platform. In lieu of putting this technology in the field, the Army recommended that slat armor be incorporated on spiral 1, since it has been effective in defeating the current rocket-propelled grenade threat. OFT officials disagreed, reasoning that although the use of slat armor on the current force has seemed to mitigate the effects of the rocket-propelled grenades currently in use, improved munitions will soon be available, and the slat armor will no longer be effective against these threats. They believed that the Trophy should be tested further in order to answer the questions raised by the Army and to provide insight into its capabilities. OFT officials based their position on the Trophy’s success in these tests, its high level of technical maturity when compared to other active protection systems, and the criticality of the need. 
The Joint Rapid Acquisition Cell presented this information to the Central Command and recommended slipping the active protection capability to a later platform spiral, once it was more mature. Currently, there are no plans for further evaluation of active protection for future platform spirals. Upon the removal of the Trophy APS from the Full-Spectrum Effects Platform vehicle, the Joint Rapid Acquisition Cell discontinued funding for further testing and evaluation of the Trophy. The disagreement between Army and OFT officials notwithstanding, we did not find information that would challenge the decision to defer the introduction of the Trophy on light-armored vehicles. On the other hand, the 5 years the Army estimated would be needed to integrate the comparatively mature Trophy system on the existing Stryker vehicle does not appear consistent with its estimates that the less mature vertical launch system could be ready for prototype delivery on Strykers in 2 years and on the yet-to-be-developed FCS prototypes in 3 years. The FCS lead systems integrator, with support from the Army, followed a consistent and disciplined process both in selecting Raytheon to develop the APS for FCS and in conducting the trade study, and followed the lead systems integrator subcontract and FAR provisions for avoiding organizational conflicts of interest. While the role played by Raytheon in the trade study was in accordance with its contract and thus not improper, the rationale for having the trade study follow the source selection is not entirely clear. The purpose of the trade study was to select the best concept; yet the source selection process that preceded it had, in fact, chosen Raytheon primarily on the technical merits of its vertical launch design concept. It was thus improbable that the trade study would reach a different conclusion. 
Both the Army and the lead systems integrator were closely involved throughout the source selection and trade study processes and concurred in the selection of Raytheon’s APS concept. The process for evaluating the Trophy system to meet the urgent needs of the Central Command was different. It centered more directly on the results of physical testing, followed a less disciplined decision-making process, and was characterized by considerable disagreement between OFT and the Army. While the decision to defer the use of the Trophy on fielded vehicles appears prudent in light of the limited realism of the testing, the promising results of that testing likewise appeared to warrant additional testing of the Trophy system to either confirm or dispel potential risks in the use of APS capabilities. Discontinuing all testing of the Trophy system may thus have been premature, particularly in light of the need to better understand tactics, techniques, and procedures and concepts of operations for both near-term and long-term applications. Because of the likelihood that the Army will introduce APS into its forces, we recommend that the Secretary of Defense support additional testing and demonstration of near-term APS systems on the Full Spectrum Effects Platform or similar vehicles to, at a minimum, help develop tactics, techniques, and procedures and concepts of operations for both near-term and long-term active protection systems. DOD provided us with written comments on a draft of this report. The comments are reprinted in appendix II. DOD did not concur with our recommendation. DOD also provided technical comments, which we incorporated where appropriate. DOD did not concur with our recommendation that the Secretary of Defense support additional testing and demonstration of near-term active protection systems on the Full Spectrum Effects Platform that could respond to the Central Command’s need. 
It stated that the original decision in May 2006 that delayed delivering Full Spectrum Effects Platform capabilities due to technical development and performance risks remains true today. DOD added that there are no active protection systems mature enough at this time to integrate on a Full Spectrum Effects Platform, regardless of any additional testing and demonstration efforts. This represents a much more definitive position than was rendered at the time of the OFT tests. At that time, Army officials believed that the Trophy would not be sufficiently tested for operational and safety issues in time for the first spiral of the Full Spectrum Effects Platform. OFT officials believed that the Trophy should be tested further to answer the questions raised by the Army and to provide insight into its capabilities. Ultimately, the Joint Rapid Acquisition Cell recommended slipping the active protection capability to a later spiral of the Full Spectrum Effects Platform. This was the basis for our recommendation for additional testing of near-term active protection systems on the Full Spectrum Effects Platform. DOD stated that it continues to pursue active protection, citing the Army’s vertical launch system for FCS. As stated in our report, this system is technically immature, and the Army’s estimates for testing it appear optimistic. According to the Institute for Defense Analyses, the vertical launch system is ambitious, with much enabling technology not yet demonstrated. Given the criticality of active protection for the FCS manned ground vehicles, additional testing of near-term active protection systems could provide valuable insights into operations and tactics that would benefit future applications, such as FCS. DOD noted that the Trophy system is being tested on the Wolf Pack Platoon Project, an OSD Rapid Reaction Technology Office (formerly OFT) effort. 
However, this project is not directed toward development of APS tactics, techniques, procedures, or concepts of operations. In addition, it will not include testing against live targets. Testing near-term active protection systems on the Full Spectrum Effects Platform or similar vehicles is valuable for answering remaining questions about such systems and for providing insights for the employment of future systems. This is particularly important given the likelihood that the Army will field some form of APS to its forces. We have broadened our recommendation to capture the value of continued testing of near-term APS for tactics, techniques, and procedures and concepts of operations. Please contact me at (202) 512-4841 if you or your staff have any questions concerning this report. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. To develop the information on the U.S. Army’s decision to pursue a new APS system under the FCS program, we interviewed officials of the Office of the Assistant Secretary of the Army (Acquisition, Logistics and Technology); the Tank-Automotive and Armaments Command; the Joint Rapid Acquisition Cell; the Office of Force Transformation; the Naval Surface Warfare Center (Dahlgren Division); the Program Manager for the Future Combat System (Brigade Combat Team); and the Future Combat System Lead Systems Integrator. We reviewed the APS subcontractor selection documentation, including the APS request for proposals, current force and FCS operational requirements documents, subcontract proposals, the criteria used to rate those proposals, and the APS development contract to determine whether procedures for avoiding organizational conflicts of interest were followed and how the APS subcontractor was selected. In addition, we held discussions with key Army officials and lead systems integrator representatives regarding this process and their roles in it. 
To determine why the trade study was conducted after source selection, we reviewed the trade study process and results and Army guidelines for conducting trade studies. To identify the roles played by both the Army and the lead systems integrator in the selection of an APS, we reviewed documentation concerning their roles in these processes. We also reviewed these materials to determine whether consideration was given to a separate APS solution for current forces and, in conjunction with this issue, we reviewed test reports and other documentation and discussed the testing of an alternative APS system, the Trophy, with the parties involved. In evaluating the APS subcontractor selection and trade study processes, we did not attempt to determine whether the best technical solution was chosen, but only whether these processes followed the lead systems integrator’s provisions for organizational conflicts of interest and used a consistent methodology for the trade study. We conducted our work between October 2006 and June 2007 in accordance with generally accepted government auditing standards. Other contributors to this report were Assistant Director William R. Graveline, Marie P. Ahearn, Beverly Breen, Tana Davis, Letisha Jenkins, Kenneth E. Patton, and Robert Swierczek.
|
Active protection systems (APS) protect vehicles from attack by detecting and intercepting missiles or munitions. In 2005, the lead systems integrator for the Army's Future Combat Systems (FCS) program sought proposals for an APS developer and design, with APS prototypes to be delivered on vehicles by fiscal year 2009. Raytheon was chosen as the APS developer. At the same time, the Department of Defense's Office of Force Transformation (OFT) evaluated near-term APS for potential use in Iraq. GAO was asked to review the Army's actions on APS for FCS: (1) the process for selecting the subcontractor to develop an APS for FCS and whether potential conflicts of interest were avoided; (2) the timing of the trade study, whether it followed a consistent methodology to evaluate alternatives, and the results; (3) the roles the Army and Boeing played in selecting the developer; and (4) the process followed to provide a near-term APS solution for current forces. In selecting the APS developer, the Army and Boeing--the FCS lead systems integrator--followed the provisions of the FCS lead systems integrator contract, as well as the Federal Acquisition Regulation, in addressing organizational conflicts of interest. No officials from the offering companies participated in the evaluation, and all offerors were evaluated based on the same criteria. Four proposals were evaluated, and three were determined to be comparable in terms of cost and schedule. The winner--Raytheon--was chosen on technical merit as being more likely to meet APS requirements, although its design had less mature technology. The APS development contract required the source selection winner to perform a trade study to assess alternatives and select the best design for development, and the Raytheon design was chosen. The trade study applied a consistent methodology to all alternatives before selecting Raytheon's vertical launch design. 
While the role played by Raytheon in the trade study was in accordance with its contract, the rationale for having the trade study follow the source selection is not entirely clear. The purpose of the trade study was to select the best concept; yet the source selection process that preceded it had, in fact, chosen Raytheon primarily on the technical merits of its vertical launch design concept. Although the vertical launch technology is not mature, the Army estimated that it could be available for prototype delivery to current force vehicles in fiscal year 2009 and tested on an FCS vehicle in 2011. This may be an optimistic estimate, as the FCS vehicle has yet to be fully developed. The Army and Boeing were extensively involved in the APS source selection and the trade study. FCS officials actively participated and concurred in the final selection of the APS developer. FCS officials and technical experts from Army research centers took part in the trade study and helped choose the vertical launch design. Boeing officials took part in various ways and, with the Army's concurrence, selected Raytheon as the APS developer, participated in the trade study, and recommended the vertical launch approach. In its pursuit of a different APS concept, OFT was responding to an urgent need statement issued by the Central Command, with potential for near-term fielding. This evaluation centered on the results of physical testing of the most technically mature candidate system, the Trophy. Decisions on how to proceed with Trophy involved disagreement between OFT and the Army. While the Trophy tests were successful, the Joint Rapid Acquisition Cell decided to defer fielding the APS system, based in part on the recommendation of Army officials, who believed that testing had not been realistic and that integrating it on the platform would delay fielding other useful capabilities. 
OFT officials proposed additional testing of Trophy to answer these questions, but funding for further OFT testing of this system was discontinued after the Joint Rapid Acquisition Cell's decision.
|
Congress enacted the Nuclear Waste Policy Act of 1982 to establish a comprehensive policy and program for the safe, permanent disposal of commercial spent fuel and other highly radioactive wastes in one or more mined geologic repositories. The act charged DOE with (1) establishing criteria for recommending sites for repositories; (2) “characterizing” (investigating) three sites to determine each site’s suitability for a repository (1987 amendments to the act directed DOE to investigate only the Yucca Mountain site); (3) recommending one suitable site to the President, who, if he considered the site qualified for a license application, would submit a recommendation to Congress; and (4) seeking a license from NRC to construct and operate a repository at the approved site. The act created the Office of Civilian Radioactive Waste Management within DOE to manage its nuclear waste program. Since the 1980s, DOE has spent years conducting site characterization studies at the Yucca Mountain site to determine whether it is suitable for a high-level radioactive waste and spent nuclear fuel repository. DOE, for example, has completed numerous scientific studies of the mountain and its surrounding region for water flow and the potential for rock movement, including volcanoes and earthquakes that might adversely affect the performance of the repository. To allow scientists and engineers greater access to the rock being studied, DOE excavated two tunnels for studying the deep underground environment: (1) a five-mile main tunnel that loops through the mountain, with several research areas or alcoves connected to it; and (2) a 1.7-mile tunnel that crosses the mountain (see fig. 2). This second tunnel allows scientists to study properties of the rock and the behavior of water near the potential repository area. In July 2002, Congress approved the President’s recommendation of the Yucca Mountain site for the development of a repository. 
The Yucca Mountain project is currently focused on preparing an application to obtain a license from NRC to construct a repository. The required application information includes both repository design work and scientific analyses. DOE is engaged in necessary tasks such as compiling information and writing sections of the license application, and is conducting technical exchanges with NRC staff and addressing key technical issues identified by NRC to ensure that sufficient supporting information is provided. It also plans to further develop the design of the repository, including revised designs for the repository’s surface facilities and canisters to hold the waste. DOE is also identifying and preparing potentially relevant documentary material that it is required to make available on NRC’s Web-based information system, known as the Licensing Support Network. This is a critical step because DOE is required to certify that the documentary material has been identified and made electronically available no later than 6 months in advance of submitting the license application. In February 2005, DOE announced that it does not expect the repository to open until 2012 at the earliest, which is more than 14 years later than the 1998 goal specified by the Nuclear Waste Policy Act of 1982. More recently, the conference report for DOE’s fiscal year 2006 appropriations observed that further significant schedule slippages for submitting a license application are likely. Further delays could arise from factors such as the time needed for EPA to establish revised radiation standards for Yucca Mountain and for DOE to revise its technical documents in response. Such delays could be costly because nuclear utilities, which pay for most of the disposal program through a fee on nuclear power, have sued DOE, seeking damages for not starting the removal of spent nuclear fuel from storage at commercial reactors by the 1998 deadline. 
Estimates of the potential damages vary widely, from DOE’s estimate of about $5 billion to the nuclear industry’s estimate of about $50 billion, but the cost of the damages will likely rise if there are further delays in opening the repository. Given these schedule slippages, Congress has considered other options for managing existing and future nuclear wastes, such as centralized interim storage at one or more DOE sites. The conference report for DOE’s fiscal year 2006 appropriations directed DOE to develop a spent nuclear fuel recycling plan to reuse the fuel. However, according to the policy organization of the nuclear energy industry, no technological option contemplated will eliminate the need to ultimately dispose of nuclear waste in a geologic repository. In October 2005, the project’s Acting Director issued a memorandum calling for the development of wide-ranging plans for the “new path forward,” DOE’s effort to address quality assurance and other challenges prior to applying for a license. To restore confidence in the scientific documents that will support the license application, some of the plans will address the need to review and replace USGS work products, a requirement for USGS to certify its scientific work products, and the establishment of a lead national laboratory to assist the project. Other plans are focused on a new simplified design for the waste canisters and repository facilities, a design that is expected to improve the safety and operation of the repository by eliminating the need to directly handle and process the spent fuel at the repository. Further, this aggressive effort called for management changes, including a transition plan; more rigorous project management, including a new baseline schedule; rescoping existing contracts and developing new contracts; tracking project hiring actions; a financial plan; and new reporting indicators. 
After DOE submits the license application, NRC plans to take 90 days to examine the application for completeness to determine whether DOE has addressed all NRC requirements. One of the reviews for completeness will include an examination of DOE’s documentation of the quality assurance program to assess whether it addresses all NRC criteria. These criteria include, among other things, organization, design control, document control, corrective actions, quality assurance records, and quality audits. If it deems any part of the application incomplete, NRC may either reject the application or require that DOE furnish the necessary documentation before proceeding with the detailed technical review of the application. If it deems the application complete, NRC will docket the application, indicating its readiness for a detailed technical review. Once the application is accepted and placed on the docket, NRC will conduct its 18-month technical review of the application to determine whether the application meets all NRC requirements, including the soundness of scientific analyses and preliminary facility design, and NRC quality assurance criteria. If NRC discovers problems with the technical information used to support the application, it may conduct specific reviews, including inspections, to determine the extent and effect of the problem. Because the data, models, and software used in modeling repository performance are integral parts of this technical review, quality assurance plays a key role, since it is the mechanism used to verify the accuracy of the information DOE presents in the application. NRC may conduct reviews, including inspections, of the quality assurance program if technical problems are identified that are attributable to quality problems. NRC will hold public hearings chaired by its Atomic Safety and Licensing Board to examine specific topics. 
After completing the proceedings, the board will forward its initial decision to the NRC commissioners for their review. Finally, within 3 to 4 years from the date that NRC dockets the application, NRC will make a decision to grant the construction authorization, reject the application, or grant the construction authorization with conditions. NRC will grant a construction authorization only if it concludes from its reviews that the repository would meet its reasonable expectation that the safety and health of workers and the public would be protected. DOE has repeatedly experienced quality assurance problems with its work on the Yucca Mountain project. In the late 1980s, DOE struggled to develop and implement adequate quality assurance plans and procedures. By the late 1990s, audits by GAO, DOE, and others identified recurring quality assurance problems with several aspects of key scientific data, models, and software. Currently, in preparing to submit the license application to NRC, DOE is relying on costly and time-consuming rework to resolve lingering quality assurance problems, uncovered during audits and after-the-fact evaluations, with the transparency and traceability of data and with project design and engineering documents. DOE has a long-standing history of attempting to address NRC concerns about its quality assurance program. Although NRC will have responsibility for regulating the construction, operation, and decommissioning (closure) phases of the project, its regulatory and oversight role does not begin until DOE submits a license application. As a result, NRC’s role in the project has been limited to providing guidance to DOE to ensure an understanding of NRC regulations and to ensure that the years of scientific and technical work will not later be found inadequate for licensing purposes. Specifically, since 1984, NRC has agreed to point out problems it identifies with the quality assurance program so that DOE can take timely corrective action. 
Initially, this NRC guidance was mainly focused on ensuring that DOE had the necessary quality assurance organization, plans, and procedures. As we reported in 1988, NRC had reviewed DOE’s quality assurance plans and procedures comprising the principal framework of its quality assurance program and concluded that they were inadequate and did not meet NRC requirements. NRC also concluded that DOE’s efforts to independently identify and resolve weaknesses in the plans and procedures were ineffective. After observing DOE quality assurance audits, NRC determined that the audits were ineffective for measuring whether quality assurance procedures were being effectively implemented. Further, during the 1980s, NRC identified additional concerns related to DOE management and organizational deficiencies in the quality assurance program. Specifically, among other things, NRC found the following: DOE had a small staff and relied heavily on contractors to provide quality assurance oversight. Based on its experience in regulating nuclear power plants, NRC found that these types of organizations frequently developed major quality-related problems. DOE had indirect project control, with administrative and functional control over the project split between different offices. NRC found that such project control arrangements tend to have serious quality assurance-related problems because conflicts can arise between quality and other organizational goals, such as cost and schedule. During a 1984 NRC visit to Nevada, DOE project participants had expressed the opinion that quality assurance is “unnecessary, burdensome, and an imposition.” Further, in 1986, DOE issued a stop-work order to the USGS based on a determination that USGS staff did not appreciate the importance of quality assurance and that USGS work would not meet NRC expectations. 
NRC believed that organizational attitudes can indicate whether a project is likely to experience problems relating to quality assurance and found such examples troublesome. Finally, based in part on the information obtained from its oversight activities, NRC concluded in 1989 that DOE and its key contractors had yet to develop and implement an acceptable quality assurance program. However, by March 1992, NRC came to the conclusion that DOE had made significant progress in improving its quality assurance program. NRC noted that DOE had addressed many of its concerns, specifically that, among other things, (1) all of the contractor organizations had developed and were in the process of implementing quality assurance programs that met NRC requirements, (2) quality assurance management positions had been filled with full-time DOE personnel with appropriate knowledge and experience, and (3) DOE had demonstrated that it was capable of evaluating and correcting deficiencies in the overall quality assurance program. Nevertheless, in October 1994, NRC found problems with quality assurance, particularly with the site contractor’s ability to effectively implement corrective actions and DOE’s ability to oversee the site contractor’s quality assurance program. As DOE’s quality assurance program matured, it resolved NRC concerns about its organization, plans, and procedures, and in the late 1990s DOE began successfully detecting new quality assurance problems in three areas critical to the repository’s successful performance: the adequacy of the data sources, the validity of scientific models, and the reliability of computer software developed at the site. These problems surfaced in 1998 when DOE began to run the initial version of its performance assessment model. Specifically, DOE was unable to ensure that critical project data had been properly collected and could be tracked back to the original sources. 
In addition, DOE did not have a standardized process for developing scientific models used to simulate a variety of geologic events or an effective process for ensuring that computer software used to support the scientific models would work properly. As required by DOE’s quality assurance procedures, the department conducted a root cause analysis and issued a corrective action plan in 1999. After corrective actions were taken, DOE considered the issues resolved. However, in 2001, similar deficiencies associated with models and software resurfaced. DOE attributed the recurrence to ineffective procedures and corrective actions, improper implementation of quality procedures by line managers, and personnel who feared reprisal for expressing quality concerns. Recognizing the need to correct these recurring problems, DOE conducted a comprehensive root cause analysis that included reviews of numerous past self-assessments and independent program assessments, and identified weaknesses in management systems, quality processes, and organization roles and responsibilities. Following the analysis, in July 2002, DOE issued its Management Improvement Initiatives (Initiatives) that addressed quality problems with software and models. In addition, DOE added other corrective actions to address management weaknesses that it found in areas such as roles and responsibilities, quality assurance processes, written procedures, corrective action plans, and work environment. However, DOE continued to face difficulties in resolving quality assurance problems concerning the data, software, and modeling to be used in support of the licensing application:

Data management. As part of NRC’s quality assurance requirements, data used to support conclusions about the safety and design of the repository must be either collected under a quality assurance program or subjected to prescribed testing procedures to ensure the data are accurate for their intended use.
In addition, the data supporting these conclusions must also be traceable back to their original sources. In 1998, DOE identified quality assurance problems with the quality and traceability of data—specifically that some data had not been properly collected or tested to ensure their accuracy and that data used to support scientific analysis could not be properly traced back to their source. DOE again found similar problems in April and September 2003, when a DOE audit revealed that some data sets did not have the documentation necessary to trace them back to their sources; the processes for data control and management were unsatisfactory; and faulty definitions were developed, which allowed unqualified data to be used.

Software management. DOE quality assurance procedures require that software used to support analysis and conclusions about the performance and safety of the repository be tested or created in such a way as to ensure that it is reliable. From 1998 to 2003, multiple DOE audits found recurring quality assurance problems that could affect confidence in the adequacy of software codes. For example, in 2003, DOE auditors found problems related to software similar to those found previously in areas such as technical reviews, software classification, planning, design, and testing. Further, a team of industry professionals hired by DOE to assess quality assurance problems with software reported in February 2004 that these problems kept recurring because DOE did not assess the effectiveness of its corrective actions and did not adequately identify the root causes of the problems.

Model validation. Models are used to simulate natural and environmental conditions at Yucca Mountain, and to demonstrate the performance of the future repository over time. However, before models can be used to support the license application, DOE must demonstrate through a process called validation that the models are able to accurately predict geologic events.
In 1998, a team of project personnel evaluated the models and determined that 87 percent did not comply with the validation requirements. In 2001, and again in 2003, DOE audits found that project personnel were not properly following procedures—specifically in the areas of model documentation, model validation, and checking and review. Further, the 2003 audit concluded that previous corrective actions designed to improve validation and reduce errors in model reports were not fully implemented. After many years of working to address these quality assurance problems with data, software, and models, DOE had mostly resolved them and closed the last of the associated condition reports by February 2005. As DOE prepares to submit the Yucca Mountain project license application to NRC, it has relied on costly and time-consuming rework to ensure that the documents supporting the application are accurate and complete. Specifically, DOE has relied on inspections and rework by DOE personnel to resolve quality assurance problems with the traceability and transparency of technical work products. These efforts to deal with quality problems at the end, rather than effectively ensuring that work organizations are producing quality products from the beginning, add to the project’s cost and could potentially delay DOE’s submission of the license application to NRC. In addition, DOE’s efforts indicate that some corrective actions have been ineffective in resolving problems with the quality assurance process. Further, DOE is now detecting quality assurance problems in design and engineering work that are similar to the quality assurance problems it experienced with its scientific work in the late 1990s. Although DOE did not initiate its major effort to address these problems until 2004, the department and NRC for years had known of quality assurance problems with the traceability and transparency of technical work products called Analysis and Model Reports (AMR).
AMRs are a key component of the license application, and contain the scientific analysis and modeling data demonstrating the safety and performance of the planned repository. Among other quality requirements, AMRs must be traceable back to their original source material and data, and must also be transparent in justifying and explaining their underlying assumptions, calculations, and conclusions. In 2003, based in part on these problems as well as DOE’s long-standing problems with data, software, and modeling, NRC conducted an independent evaluation of three AMRs. The scope of the review was to determine if the AMRs met NRC requirements for being traceable, transparent, and technically appropriate for their use in the license application. NRC found significant problems. First, in some cases DOE was not transparent in explaining the basis on which it was reaching conclusions. For example, in some circumstances, DOE selected a single value from a range of data without sufficient justification. At other times, DOE did not explain how a range of experimental conditions was representative of repository conditions. Second, where DOE did sufficiently explain the basis for a conclusion, it did not always provide the necessary technical information, such as experimental data, analysis, or expert judgment, to trace the support for that explanation back to source materials. For example, DOE did not explain how information on one type of material provided an appropriate comparison for another material. Moreover, while DOE had identified similar problems in the past, the actions taken to correct them did not identify and resolve other deficiencies. NRC concluded that these findings suggested that other AMRs might have similar problems and that, if not resolved, such problems could delay NRC’s review of the license application because NRC would need to conduct special inspections to resolve any issues it found with the quality of technical information.
To address problems of traceability and transparency, DOE in the spring of 2004 initiated an effort called the Regulatory Integration Team (RIT) to perform a comprehensive inspection and rework of the AMRs to ensure they met NRC requirements and expectations. According to DOE officials, the RIT involved roughly 150 full-time personnel from DOE, USGS, and multiple national laboratories such as Sandia, Los Alamos, and Lawrence Livermore. First, the RIT screened all of the approximately 110 AMRs and prioritized its efforts on 89 that needed additional rework. Ten AMRs were determined to be acceptable, and 11 were canceled because they were no longer needed to support the license application. According to DOE officials, approximately 8 months later, the RIT project was completed at a cost of about $20 million, with a total of over 3,700 problems and issues addressed or corrected. In February 2005, in a letter to DOE, the site contractor stated that the RIT effort was successful and that the AMRs had been revised to improve traceability and transparency. Subsequently, however, additional problems with traceability and transparency have been identified, requiring further inspections and rework. For example, after the March 2005 discovery of e-mails from USGS employees written between May 1998 and March 2000 implying that employees had falsified documentation of their work to avoid quality assurance standards, DOE initiated a review of additional AMRs that were not included in the scope of the 2004 RIT review. The additional AMRs contained scientific work performed by the USGS employees and had been assumed by the RIT to meet NRC requirements for traceability and transparency. However, according to DOE officials, DOE’s review determined that these AMRs did not meet NRC’s standards, and additional rework was required. Further, similar problems were identified as the focus of the project shifted to the design and engineering work required for the license application. 
In February 2005, the site contractor determined that in addition to problems with AMRs, similar traceability and transparency problems existed in the design and engineering documents that comprise the Safety Analysis Report—the report necessary for demonstrating to NRC how the facilities and other components of the repository site will meet the project’s health, safety, and environmental goals and objectives. In a root cause analysis of this problem, the site contractor noted that additional resources were needed to inspect and rework the documents to correct the problems. DOE cannot be certain that it has met continuous improvement goals for implementing its quality assurance requirements, a commitment DOE made at the closure of its Management Improvement Initiatives (Initiatives) in April 2004. At that time, DOE told us it expected that the progress achieved with the Initiatives would continue and that its performance indicators would enable it to assess further progress and direct management attention as needed. However, DOE’s performance indicators, as well as a second management tool—trend evaluation reports—have not been effective for this purpose. More specifically, the indicators panel did not highlight the areas of concern covered by the Initiatives and had weaknesses in assessing progress because the indicators kept changing. The trend evaluation reports also did not focus on tracking the concerns covered by the Initiatives, had technical weaknesses for identifying significant and recurring problems, had inconsistently tracked progress in addressing problems, and could not fully analyze projectwide problems. In addition, the trend reports’ tracking of problems for which corrective actions were already being taken was at times overly influenced by judgments about whether additional management action was warranted rather than by the problems’ significance.
By the time that the actions called for by the Initiatives had been completed in April 2004, project management had already developed the indicators panel, which DOE refers to as the annunciator panel, to use at monthly management meetings to monitor project performance. The panel was a single page composed of colored blocks representing selected performance indicators and their rating or level of performance. A manager viewing the panel would be able to quickly see the color rating of each block or indicator. For example, red indicated degraded or adverse performance warranting significant management attention; yellow indicated performance warranting increased management attention or acceptable performance that could change for the worse; and green indicated good performance. The panel represented a hierarchy of indicators in which the highest level, or primary, indicators were shown; secondary indicators that determined the primary indicators’ ratings were shown for some primary indicators; but lower third- or fourth-level indicators were not shown. Our review analyzed a subset of these indicators that DOE designated as the indicators that best predict performance in areas affecting quality. While we were conducting our review, DOE suspended preparation of the panel after August 2005 while it reconsidered its use of indicators to monitor project performance. DOE had also suspended preparation of the panel from late 2004 to early 2005 in order to make substantial revisions. These revisions were made, in part, to emphasize fewer, more important indicators for management attention. The Initiatives identified five key areas of management weakness that were adversely affecting the implementation of quality assurance requirements:

1. Roles and responsibilities were becoming confused as the project transitioned from scientific studies to activities supporting licensing.
The confusion over roles and responsibilities was undermining managers’ accountability for results. The Initiatives’ objective was to realign DOE’s project organization to give a single point of responsibility for project functions, such as quality assurance and the Corrective Action Program, and hold the project contractor more accountable for performing the necessary work in accordance with quality, schedule, and cost requirements.

2. Product quality was sometimes being achieved through inspections by the project’s Office of Quality Assurance rather than being routinely implemented by the project’s work organizations. As a result, the Initiatives sought to increase work organizations’ responsibility for being the principal means for achieving quality.

3. Work procedures were typically too burdensome and inefficient, which impeded work. The Initiatives sought to provide new user-friendly and effective procedures, when necessary, to allow routine compliance with safety and quality requirements.

4. Multiple corrective action programs existed, processes were burdensome and did not yield useful management reports, and corrective actions were not completed in a timely manner. The Initiatives sought to implement a single program to ensure that problems were identified, prioritized, and documented and that timely and effective corrective actions were taken to preclude recurrence of problems.

5. The importance of a safety-conscious work environment that fosters open communication about concerns was not understood by all managers and staff, and they had not been held accountable for inappropriately overemphasizing the work schedule, inadequately attending to work quality, and inconsistently practicing the desired openness about concerns.
Through issuing a work environment policy, providing training on the policy, and improving the Employee Concerns Program, the Initiatives sought to create an environment in which employees felt free to raise concerns without fear of reprisal and with confidence that issues would be addressed promptly and appropriately. As shown in table 1, the Initiatives’ effectiveness indicators for tracking progress in addressing these management weaknesses did not have equivalent performance indicators visible in the annunciator panel when it was prepared for the last time, using August 2005 data. Two of the Initiatives’ key areas of concern—(1) roles, responsibilities, authority, and accountability; and (2) work procedures—and their associated effectiveness indicators were not represented in the panel’s visible or underlying indicators. The Initiatives’ effectiveness indicator for tracking trends in recurring problems also was not represented. In other cases, the Initiatives’ effectiveness indicators were represented in underlying lower-level indicators that had very little impact on the rating of the visible indicator. An example is the Initiatives’ indicator for timely completion of employee concerns. The panel’s related visible indicator was work environment, whose rating was based on 4 secondary and 23 tertiary indicators. Of the third-level indicators, two were for timeliness of completion of employee concerns, and combined they contributed 3 percent toward the rating of the work environment indicator. As a result of the weighting of these many underlying indicators, ratings for individual lower-level indicators could be different from the visible indicator. For example, in August 2005, the work environment indicator showed good performance. 
However, the ratings of four underlying indicators from the project’s employee survey on the work environment—collectively accounting for 25 percent of the work environment indicator’s score—indicated the need for increased management attention. Moreover, some of the Initiatives’ indicators, such as the work organizations’ self-identification of significant problems, had their impact on visible indicators diluted by the inclusion of other indicators that were not focused solely on the detection of significant problems. Another shortcoming of the annunciator panel was that frequent changes to the indicators hindered the ability to identify problems for management attention and track progress in resolving them. The indicators could change in many ways, such as changes in their definition, calculation, or data sources used in calculations, or from the deletion or addition of a subindicator. When such changes were made to the indicators, progress became less clear because changes in reported performance levels may have been the result of the indicator changes rather than actual performance changes. Some of the indicators for key project processes with quality elements changed from one to five times during the 8-month period from April 2004 through November 2004. Even after the major revision of the panel in early 2005, most of the performance indicators tracking quality issues continued to change over the next 6 months—that is, from March 2005 through August 2005. As shown in table 2, only one of the five relevant indicators did not change during this period. One indicator was changed four times during the 6-month period, leaving it different in more months than it remained the same. Moreover, the panel was not always available to identify problems and track progress. The panel was not created for December 2004, January 2005, and February 2005 because it was undergoing a major revision.
At that time, DOE told NRC that the performance indicators for the panel were revised to reflect the change in the work as the project moved into the engineering, procurement, and construction phase. DOE also reduced the total number of visible indicators from 60 to 30 to focus on fewer, more critical aspects of project management. Panels with the new indicators were then produced for 6 months, starting with March 2005 and ending after August 2005. This second interruption of the panels resulted from another major revision to the indicators; this time, indicators were being made congruent with project work as designated by DOE’s “new path forward,” again to focus on fewer, more important activities. In December 2005, a senior DOE official told us that the project would begin to measure key activities, but without use of the panel. According to DOE, some of the Initiatives’ areas of concern and their associated effectiveness indicators—for example, trends in quality problems related to roles and responsibilities—were being captured, at least partially, in the project’s quarterly trend evaluation reports rather than in the performance indicators. However, the trend reports are a management tool designed more to identify emerging and unanticipated problems than to monitor progress with already identified problems, such as those addressed by the Initiatives. In developing these reports, trend analysts seek to identify patterns and trends in condition reports (CRs), which document problematic conditions through the project’s Corrective Action Program. The trend reports analyze CRs for more significant problems (Levels A and B) and minor problems (Level C), but not at Level D (opportunities for improvement). The trend analysis typically separates the reported problems into categories such as organizational unit, type of problem, and cause. These categories are intended to provide insights into the problems.
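The kind of categorization the trend analysts perform can be sketched as follows. The sample condition reports are fabricated for illustration; the "more than half" rule reflects the project guideline, noted later in this report, under which a cause category accounting for over half of all causes was designated an adverse trend.

```python
from collections import Counter

# Minimal sketch of trend analysis over condition reports (CRs). The sample
# CRs below are fabricated for illustration and do not represent actual
# project data.

sample_crs = [
    {"org": "BSC Engineering", "level": "B", "cause": "human performance"},
    {"org": "BSC Engineering", "level": "C", "cause": "human performance"},
    {"org": "DOE",             "level": "C", "cause": "human performance"},
    {"org": "BSC Science",     "level": "B", "cause": "management"},
    {"org": "DOE",             "level": "C", "cause": "procedure"},
]

def breakdown(crs, category):
    """Share of CRs falling in each value of a category (org, level, cause)."""
    counts = Counter(cr[category] for cr in crs)
    total = sum(counts.values())
    return {value: count / total for value, count in counts.items()}

def adverse_cause_trends(crs):
    """Flag any cause category accounting for more than half of all causes."""
    return [cause for cause, share in breakdown(crs, "cause").items()
            if share > 0.5]

print(adverse_cause_trends(sample_crs))   # ['human performance']
```

A breakdown by organizational unit or problem level works the same way, which is how an analyst might discover that most occurrences of a problem type cluster in one organization.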
For example, analysis might reveal that most occurrences of a particular type of problem are associated with a certain organization. In practice, DOE missed opportunities to use trend reports to call attention to progress in the Initiatives’ areas of concern. For example, the Initiatives sought to clarify roles and responsibilities within and between DOE and BSC to ensure clear accountability for project results during the project’s transition from scientific studies to the design and engineering activities necessary to license a repository. Similar organizational transition problems were identified in the November 2004 trend report. While that report attributed increases in the number of causal factors associated with change management, supervisory methods, and work organization to recent BSC reorganizations and changes in the project from science-based to design and engineering activities, it did not specifically mention issues of roles and responsibilities or that roles and responsibilities was an Initiatives’ area of concern. However, an analysis of the cause of the problems noted in various significant condition reports, which is performed for certain condition reports and outside of the process of developing trend reports, found evidence of weaknesses in the organizational interfaces among BSC organizations, as well as between BSC and DOE. According to this cause analysis, these organizational interface weaknesses were associated with some manner of change and represented weaknesses in the definition of roles and responsibilities. Trend reports are generally based on condition reports, and problems with roles and responsibilities seem to be identified in cause analyses rather than in the condition reports themselves. Similarly, DOE missed an opportunity to use trend reports to discuss the Initiatives’ goal that the project’s line or work organizations become more accountable for self-identifying significant problems. 
The August 2005 trend report briefly cited an evaluation of a CR highlighting the low rate of self-identification of significant problems during the previous quarter and reported the evaluation’s conclusion that it was not a problem warranting management attention. However, the trend report did not mention that about 35 percent of significant problems were self-identified during the previous quarter, while the Initiatives’ goal was that 80 percent of significant problems would be self-identified. Thus, the trend report missed an opportunity to either raise a performance problem or pose the question of whether the Initiatives’ goal needed to be reassessed. Beyond whether they effectively tracked the Initiatives’ areas of concern, trend reports face important obstacles, in general, to adequately identifying recurrent and significant problems:

Recurring or similar conditions can be difficult to clearly identify for management’s attention and resolution. A trend report noted that there will be few cases where recurrent conditions are obvious because each condition slightly differs.

Trend analysis tends to focus on the number of CRs issued, but the number of CRs does not necessarily reflect the significance of a problem. For example, the number of CRs involving requirements management decreased by over half from the first quarter to the second quarter of fiscal year 2005. However, this decrease was not a clear sign of progress. Not only did the number rise again in the third quarter, but the May 2005 trend report also noted that the number of all condition reports had dropped during the second quarter. According to the report, the volume of CRs in the first quarter had been high because of reviews of various areas, including requirements management. Another example is the records management problem.
The November 2005 trend report stated that a records management problem identified in various CRs, despite accounting for about 50 percent of all business administration problems, reflected an underlying error rate of less than 1 percent and thus was not a significant problem.

The lack of an increasing trend in the number of reported problems does not necessarily mean the lack of a significant problem for management attention. Knowing the appropriate level of performance, regardless of the trend, is difficult without having clearly appropriate benchmarks from organizations engaged in activities similar to the Yucca Mountain project. Such benchmarks would clarify, for example, whether a project’s percentages of human performance errors compare favorably, regardless of whether the numbers are increasing. Similarly, the trend in the number and types of CRs during any period is not necessarily a sign of improvement or worsening conditions. Trends can be attributed to various factors, including increases in the number of audits or self-assessments, which can lead to more CRs being issued.

At the time of analysis, some trend data may not be sufficiently reliable or complete to ensure sound findings for management’s attention. For example, although some actions were taken in December 2004 to ensure that cause and other codes were properly assigned, a BSC audit in June 2005 again raised questions about the consistency of the coding. With respect to completeness, the fourth quarter report for 2005 noted that 28 percent of the Level B CRs did not have a cause code at the time of the trend analysis, and one finding was presented even though two-thirds of the data was missing.

Due, in part, to these obstacles and changes to how the analysis is done, trend reports have not consistently determined the significance of problems or performed well in tracking progress in resolving problems.
For example, trend reports have questionably identified significant human performance problems and ineffectively tracked progress in resolving the problem because of no clearly appropriate or precise benchmark for performance, inconsistent focus on the problem, and unreliable data on cause codes. The February 2004 trend report identified a human performance problem based on Yucca Mountain project data showing the project’s proportion of skill-based errors to all human performance errors was two times higher than benchmark data from the Institute of Nuclear Power Operations (INPO). The report used this comparison to suggest that the project needed to adopt successful commercial nuclear practices for addressing skill-based errors. However, the report cautioned that other comparisons with these INPO data may not be appropriate because of differences in the nature, complexity, and scope of work performed, but did not explain why the report’s comparison of INPO data for skill-based errors to the Yucca Mountain project should be an exception to this caution. The May 2004 trend report repeated this comparison to INPO, finding skill-based errors three times higher than the benchmark data. However, this INPO benchmark has not been used in subsequent reports. The November 2004 trend report redefined the problem as the predominance of human performance errors in general, rather than the skill-based component of these errors—but later reports reinterpreted this predominance as not a problem. The problem with skill-based errors was unclear in the November 2004 report because these errors were showing a decreasing trend, a finding attributed to likely unreliable assignment of cause codes. Instead, the report cited an adverse trend based on the fact that the human performance cause category accounted for over half of the total number of causes for condition reports prepared during the quarter.
Under the project’s trend analysis guidelines, this large predominance of human performance causes—in contrast to management, communication or procedure, and other cause categories—was designated an adverse trend. Nevertheless, by February 2005, trend reports began interpreting this predominance as generally appropriate, given the type of work done by the project. That is, the project’s work involves mainly human efforts and little equipment, while work at nuclear power plants involves more opportunities for errors caused by equipment. In our view, this interpretation that a predominance of human performance errors would be expected implies an imprecise benchmark for appropriate performance. Although trend reports continued to draw conclusions about human performance problems, the February 2005 report indicated that any conclusions were hard to justify because of data reliability problems with cause coding. For example, the majority of problems attributed to human performance causes are minor, or Level C, problems, such as not completing a form, that receive less rigorous cause analysis. This less rigorous analysis tends to reveal only individual human errors—that is, human performance problems—whereas more rigorous analysis tends to reveal less immediately obvious problems with management and procedures. Trend reports have also inconsistently tracked progress in resolving the problem associated with the “flow-down” of requirements into the project’s procedures—that is, with ensuring that program, regulatory, and statutory requirements are identified, allocated, and assigned to the project organizations that are responsible for applicable activities. Such requirements management problems can result in inadequate control over design inputs and, possibly, inputs to scientific models. Progress with this problem was less clear because of inconsistent methods of categorizing requirements management problems over time.
Initially, based on reviews of annual trends in condition reports, the September 2004 and November 2004 trend reports observed a systemic and continuing problem in the flow-down of requirements from BSC’s Project Requirements Document and identified this as an adverse trend. In subsequent reports, the requirements flow-down problem was variously treated as an aspect of requirements management or records management, or as a latent management weakness or weak change management. When treated as an aspect of these broader problems, the significance of the original flow-down problem and any progress in resolving it became diluted and less clear. The primary focus eventually became requirements management, which the February 2005 trend report designated as a potential trend, whereas the flow-down problem had earlier been designated an adverse trend. As a result of this change, the flow-down of requirements received less direct attention and analysis—for example, receiving only a footnote in the August 2005 trend report stating that the April 2004 condition report issued to address the adverse trend was still overseeing implementation of corrective actions. In addition, because trend reports examine only condition reports issued to BSC, they do not always assess the projectwide significance of problems such as requirements management. When analyzing one category of issues associated with requirements management, the November 2005 report stated that BSC and DOE shared the process problems, which cannot be adequately addressed by just one of the organizations. However, for a second category of these issues, the report did not analyze most of the condition reports because 6 of the 10 relevant reports were assigned to DOE. For a third category of issues, no analysis or recommendation was provided because all of the reports were assigned to DOE and therefore did not fall within the scope of the trend report.
The tracking of problems for which corrective actions are already being taken appeared at times to be overly influenced by judgments about whether additional management action is warranted, rather than by the problems’ significance. As a result, problems might be rated as less significant, or not tracked further. Lower ratings were apparently assigned because each rating simultaneously assessed a problem’s significance and the need for management action. In its current formulation, DOE’s rating categories cannot accurately represent both the assessment of a problem’s significance and a judgment that additional actions are not needed, because the designated rating category will distort one or the other. For instance, the November 2005 trend report analyzed the four categories of requirements management issues and designated one category that included problems with requirements flow-down as a “monitoring trend”—defined as a small perturbation in numbers that does not warrant action but needs to be monitored closely. Describing this trend as a small perturbation, or a disturbance in numbers, did not accurately reflect the report’s simultaneous recognition that significant process problems spanned both BSC and DOE and the fact that the same numbers and types of problems had been consistently identified over the previous three quarters. A more plausible explanation for the low rating is that designating the problem at any higher level of significance would have triggered guidelines involving the issuance of a condition report, which, according to the judgment expressed in the report, was not needed. Specifically, the report indicated that existing condition reports had already identified and were evaluating and resolving the problem, thereby eliminating the need to issue a new condition report.
By rating the problem at the lowest level of significance and not calling for additional actions, the trend report did not sufficiently draw management’s attention to the problem. Moreover, the trend report’s assessment did not convey that additional condition reports might have raised other serious problems. At about the same time that the trend report judged that no new condition reports were necessary, an Employee Concerns Program investigation of requirements management resulted in 14 new condition reports—3 at the highest level of significance and 8 at the second-highest level of significance. For example, the Employee Concerns Program’s investigation resulted in condition reports calling for an analysis of the collective significance of the numerous existing condition reports and an assessment of whether the quality assurance requirement for complete and prompt remedial action had been met. As a result of the investigation and a concurrent DOE root cause analysis, during the December 2005 Quarterly Management Meeting with NRC, DOE stated that strong actions were required to address the problems with its requirements management system and any resulting uncertainty about the adequacy of its design products. In another case, the February 2005 trend report identified significant problems, but subsequent reports did not continue to track them after a separate analysis identified ongoing improvement actions. According to the trend report, Level B condition reports collectively indicated organizational weaknesses associated with change management involving cross-departmental interfaces. The trend report recommended that management focus on these problems, and cited a condition report that would further investigate them. The cause analysis for that condition report and a related condition report found that the problems were well-known, in part through a BSC review, and related to a variety of ongoing BSC improvement actions.
Since this was a broad category of problems with many initiatives under way, the cause analysis recommended no new actions other than for management to remain aware of the problems. However, the trend reports that followed provided no further analyses to focus management’s awareness on these problems or to assess progress in resolving them. In October 2005, DOE announced an aggressive series of proposed changes to the design, organization, and management of the Yucca Mountain project, but this effort—known as the “new path forward”—will face substantial challenges. Some key challenges facing DOE are (1) determining the extent of problems and restoring confidence in the documents supporting the license application after the discovery of e-mails raising the potential of falsified records, (2) settling design issues and associated problems with requirements management, and (3) replacing key personnel and managing the transition of new managers and other organizational challenges. The current Acting Director of the Office of Civilian Radioactive Waste Management (OCRWM) stated that DOE will not announce a schedule for submitting a license application until DOE addresses these important quality assurance and other challenges. Because DOE is still formulating its plans, it is too early to determine whether the new path will resolve these challenges. Since announcing in March 2005 the discovery of USGS e-mails suggesting possible violations of quality assurance requirements, including the falsification of records, DOE has taken steps to address lingering concerns about the adequacy of the scientific work related to the flow of water into the repository and to determine whether similar quality assurance problems are evident in other e-mails relevant to the licensing application.
Specifically, DOE is (1) conducting an extensive review of approximately 14 million e-mails to determine whether these e-mails raise additional quality assurance concerns and whether they might be relevant to the licensing process, and (2) reworking the technical documents created by USGS personnel to ensure that the science underlying the conclusions on water infiltration is correct and supportable in the license application. The Acting Director of OCRWM has stated that DOE will not submit a license application until these efforts are complete. Given the early planning stage of these efforts, however, it is unknown how long they will delay the submission of a license application. As part of the licensing process, DOE is required to publicly disclose all documents relevant to the licensing application, including e-mails, by posting them on DOE’s public Web site, which is accessible through the NRC-sponsored, Internet-based Licensing Support Network (LSN). To satisfy schedule requirements, DOE must certify that relevant documents have been posted to the network and made available for public review 6 months before the submission of the license application. In preparation for submitting the license application by December 2004, in June of that year, DOE submitted almost 700,000 e-mails to the LSN that had been reviewed by their original authors and determined to be relevant to the licensing process. They were part of a group of approximately 6 million archived e-mails authored by individuals still associated with the project. However, in August 2004, NRC’s Atomic Safety and Licensing Board ruled that DOE had not met its regulatory obligation to make all relevant documentary material available. Specifically, DOE had not reviewed a group of approximately 4 million archived e-mails authored by individuals no longer affiliated with the project to determine whether the e-mails were relevant to the licensing process.
As part of its effort to address the board’s ruling, BSC began a review of e-mails authored by employees who were no longer working at the project. During this review, the contractor discovered and brought forward e-mails between USGS scientists working on water infiltration models that raised questions of the potential falsification of technical information in order to sidestep quality assurance requirements. Following the discovery of the e-mails, DOE conducted a search to determine if there were similar e-mails among the approximately 1 million e-mails previously determined relevant for licensing. However, the DOE Inspector General reported in November 2005 that there was no evidence that the project requirements for identifying and addressing conditions adverse to quality, such as those contained in the USGS e-mails, were considered during the initial review of e-mails. Further, among the approximately 10 million e-mails that had already been reviewed for the licensing process, the Inspector General found additional e-mails that identified possible conditions adverse to quality that had not been identified by project personnel as requiring further review. The DOE Inspector General recommended, among other things, that DOE (1) expand the review of archived e-mails to include both those deemed relevant and those deemed not relevant to the licensing process, and ensure that conditions adverse to quality are appropriately identified, investigated, reported, and resolved; and (2) ensure that current and future e-mails are reviewed for possible conditions adverse to quality and that such conditions are appropriately addressed under the Corrective Action Program (CAP) system. DOE accepted the Inspector General’s recommendations. Specifically, DOE agreed to develop a corrective action plan to expand the review of archived e-mails to ensure that conditions adverse to quality are appropriately identified and processed under the CAP system.
In addition to this review, the DOE Inspector General opened a criminal investigation into the USGS e-mails in March 2005. As of December 2005, the investigation was still in progress. According to NRC on-site representatives, completing these e-mail reviews will be challenging because DOE now has to screen millions of e-mails to ensure that records were not falsified. Further, many of these e-mails were written by employees who no longer work at the project or may be deceased, making it difficult to learn their true meaning and context. Moreover, if additional e-mails are found that raise quality assurance concerns, DOE may have to initiate further reviews, inspections, or rework to address the newfound problems. NRC officials stated that the agency takes the issue of potentially falsified documents by USGS employees very seriously, wants a full understanding of the situation regarding the USGS e-mails, and will conduct follow-up in this area. Because NRC wants DOE to submit a high-quality license application, it has encouraged DOE to take the time and actions necessary to fully and adequately resolve these and other quality assurance issues. Immediately following the discovery of the USGS e-mails, DOE undertook a scientific investigation into the technical documents created by USGS personnel. In October 2005, DOE began developing an action plan for reviewing, validating, augmenting, and replacing the USGS work products that had come under scrutiny. Although the plan is not yet complete, the Acting Director told us that the license application would not be submitted until the USGS work is replaced and there is confidence that all requirements have been met. In an effort to ensure that the scientific work underlying water infiltration modeling is accurate, DOE is working to corroborate the original work by engaging multiple agencies and organizations to rework the models.
For example, DOE has (1) had its lead project contractor work with the Idaho National Laboratories to extensively review the software and data used in the original science work, (2) engaged Sandia National Laboratories to rework the model and calculations using different software than was used originally, and (3) asked USGS to rework the models. When this additional rework is completed, DOE will have four sets of analyses (including the original scientific work) that it can evaluate, compare, and corroborate. DOE will then select one set of analyses for inclusion in the license application and work to explain and defend its choice. In October 2005, DOE announced significant changes to the design of the Yucca Mountain repository to simplify the project and improve its safety and operation. However, these changes will also require additional design and engineering work that adds uncertainty about the timing of the submission of a license application. DOE had been considering a design in which radioactive waste would be shipped to the Yucca Mountain site, removed from its shipping container, placed and sealed in a special disposal container, and finally moved into the underground repository. Under this design, DOE contemplated handling the waste up to four separate times. In late 2003, DOE engineers began identifying potential safety problems with this approach. First, fissures or holes accidentally created in the cladding surrounding the spent nuclear fuel during handling could allow air to mix with the fuel and oxidize it; this oxidized radioactive material could then leak and be dispersed into the air. Second, DOE engineers determined that the original facility design would not be able to adequately control the levels of radioactivity in the buildings where the waste would be repackaged before being moved into the repository.
To address these problems, DOE researched a series of options, including accepting only radioactive waste that had already decayed to the point where oxidation would not be problematic, and testing waste shipments for oxidation and treating them at another site before they arrived at the repository. DOE also considered changing the design by filling the processing buildings with inert gas to prevent oxidation and revising the electrical and ventilation systems. According to a DOE official, these options were impractical or added complexity to the design. In October 2005, however, DOE proposed a new design that relies on uniform canisters that would be filled and sealed before being shipped, eliminating the need for direct handling of the waste before it is placed in the repository. As a result, DOE will not have to construct several extremely large buildings costing millions of dollars for handling radioactive waste. DOE believes this change will improve the safety, operation, and long-term performance of the repository. However, the change will also pose a challenge to the project because of its widespread implications and the unknown time and effort required to implement it. For example, to implement the new design, DOE will need to, among other things, get approval from the Energy Systems Acquisition Advisory Board for a new project plan that includes details on the conceptual design, cost estimates, risk management efforts, and acquisition strategies; plan, design, and produce standardized canisters for transporting the waste; coordinate the new approach with commercial nuclear power plants, NRC, and government organizations that plan on shipping waste to the project; and revise procurement and contracting plans to support the new design.
Finally, DOE will need to perform the detailed design and engineering work required to implement the new design and create new technical documents to support the license application. However, DOE officials have stated that before the department can present its new plans and perform this design and engineering work, it will need to resolve long-standing quality assurance problems involving requirements management. Requirements management is the process that ensures the broad plans and regulatory requirements affecting the project are tracked and incorporated into specific engineering details. According to DOE’s root cause analyses, low-level documents were appropriately updated and revised to reflect high-level design changes through fiscal year 1995. However, from 1995 through 2002, many of these design documents were not adequately maintained and updated to reflect current designs and requirements. Further, a document that is a major component of the project’s requirements management process was revised in July 2002 but has never been finalized or approved. Instead, the project envisioned a transition to a new requirements management system after the planned submission of the license application in December 2004. However, for various reasons, the license application was not submitted at that time, and the transition to a new requirements management system was never implemented. As a result, the document refers to the out-of-date NRC regulations contained in 10 CFR part 60, and not the regulations in 10 CFR part 63 that were finalized in October 2002. The scope and cause of requirements management problems have been identified in multiple DOE and NRC reviews. Multiple condition reports issued in 2004 and 2005 have identified problems with requirements management.
Because of these condition reports and NRC concerns that repetitive deficiencies and the failure to implement timely corrective actions could have direct implications for the quality of the planned license application, NRC performed a review of Corrective Action Program documents related to the requirements management program in the late summer of 2005. NRC determined that these reports identified approximately 35 deficiencies related to requirements management. Because the requirements management documents are not current and the new requirements management system has not been implemented, NRC concluded that there did not appear to be a requirements management mechanism in place. Further, based on the number of reports and other issues identified by DOE audits, NRC concluded that the project’s Corrective Action Program was not effective in, among other things, eliminating the repeated identification of deficiencies relating to requirements management or initiating the actions needed to identify and appropriately address the root cause of these problems. In September 2005, DOE began reviewing the root causes associated with CR-6278, a condition report identifying problems with requirements management. As part of the review, DOE personnel analyzed 135 condition reports and other events and allegations. Among other things, this review found that DOE expectations for requirements management were diluted and eventually neglected, that DOE reduced funding for requirements management due to reductions in its annual budget, and that these and other events caused the requirements management process to become “completely dysfunctional” from July 2002 to the time of the review in the fall of 2005. The analysis identified the root causes of these conditions as DOE’s failure to fund, maintain, and rigidly apply a requirements management system. In November 2005, a team of DOE personnel concluded an investigation into an employee’s concerns regarding requirements management.
The team substantiated all of the concerns it investigated and found instances of failures and breakdowns in the requirements management process. For example, among other things, the team found that no procedure had been developed to describe how requirements management was to occur; that some existing requirements management procedures were not implemented; and that project management was aware of these conditions but deferred corrective actions because the planned requirements management system was expected to address the problem. As a contributing factor, the team also observed that the project’s lead contractor had not implemented a “traditional systems engineering approach,” as it did not have, among other things, typical engineering management plans or a separate systems engineering organization responsible for requirements management. As a result of the investigation, the team initiated 14 condition reports, 13 of which identified quality-related problems. To address these problems, on December 19, 2005, DOE issued a stop-work order on design and engineering for the surface facility and certain other technical work. DOE stated that the root cause analysis for CR-6278 and the investigation into employee concerns revealed that the project had not maintained or properly implemented its requirements management system, resulting in inadequacies in the design control process. The stop-work order will remain in effect until, among other things, the project’s lead contractor improves the requirements management system, validates that processes exist and are being followed, and ensures that requirements are appropriately traced to implementing mechanisms and products. Further, DOE will establish a team to take other actions necessary to prevent inadequacies in requirements management and other management systems from recurring. An example of the potential risks of a breakdown in requirements management was noted during a BSC audit of the design process in March 2005.
NRC on-site representatives observing this audit reported that the audit team noted inconsistencies between the design documents of the planned fuel-handling facility, which would receive, prepare, and package the waste before it is placed in the repository. The original set of requirements specified that no water from a fire protection system was to be used in the fuel-handling areas of the facility because, under certain scenarios, water used for fire suppression could facilitate an accidental nuclear reaction, a condition known as criticality. Later, as the project began to review the design of the fuel-handling facility, the design was changed to allow the use of water sprinklers in the fuel-handling areas of the facility to suppress possible fires. NRC noted that personnel working on the design knew of the inconsistencies between older and newer design documents, but no formal tracking mechanism had been provided to ensure that those issues were rectified. According to an NRC on-site representative in December 2005, this was an example of a concern with requirements management, and repetitive and uncorrected issues associated with the requirements management process could have direct implications for the quality of the license application. While the project may be able to resolve these inconsistencies through an informal process, the lack of a formal design control and requirements management process increases the risk that not all such problems will be addressed. These requirements management problems are potentially significant because if the high-level engineering needs of the project are not accurately or completely reflected in the detailed design, the quality of the license application may be compromised, causing delays in the license application review process.
For example, according to a 1989 speech prepared by NRC’s Office of General Counsel stressing the importance of quality assurance, a West Coast nuclear power plant experienced similar quality assurance problems with requirements management. After a license was issued by NRC, power plant personnel discovered that the wrong diagrams were used to develop design requirements. As a result of this and other quality assurance weaknesses identified by NRC, the license was suspended and the power plant was required to initiate an independent program to verify the correctness of the design. Further, NRC reopened hearings on the issue of the adequacy of the power plant’s quality assurance program related to the plant’s design. In October 2005, DOE announced a “new path forward” that would create a new project schedule and financial plan to address the completion of scientific and engineering work in support of a license application. However, DOE faces challenges to successfully implementing the new path, in terms of managing the transition, program and organizational complexities, and the continuity of management. According to DOE managers involved with planning the new path forward, the organizational transition could take several months to complete. It is too early to determine whether DOE’s new effort will resolve quality assurance issues and move the project forward to the submission of a license application. Accountability for quality and results, which was identified as a significant transition issue in the Initiatives, will likely pose a challenge for managing the transition to the new path forward. The Initiatives sought to clarify roles and responsibilities within and between DOE and contractor organizations to ensure clear accountability for results and quality during the transition from OCRWM’s organization, processes, procedures, and skills supporting scientific studies to those supporting the activities necessary to license a repository. 
As the project realigns organizations, processes, procedures, and skills to support the new path forward, it will also be faced with the challenge of ensuring that accountability is not undermined during the transition. For instance, according to one DOE manager, transitioning project work to a lead laboratory under a direct contract with DOE could pose a significant challenge for quality assurance because the laboratories are currently working under BSC quality assurance procedures and will now have to develop their own procedures. Implicitly recognizing the importance of accountability issues, elements of the new path forward seek to address issues that can negatively affect quality assurance and project management in general. For instance, the new path includes plans for developing and transmitting requirements to USGS for the certification of scientific work. In addition, a senior project official told us that the lead laboratory would provide a single point of accountability that will enhance the quality of the science work. The Acting Director indicated that OCRWM’s management structure may have to be reorganized to have a single manager clearly accountable for each of the new path’s major tasks in science, engineering, and licensing. Moreover, the project is developing new performance indicators to allow it to assess important activities under the new path forward. Outside of the new path, as the result of a September 2005 DOE Inspector General report on accountability problems with managing contract incentives, OCRWM agreed to develop a comprehensive corrective action plan to provide clearer and more objective performance standards in the BSC contract. Program complexity and other project characteristics are also likely to pose challenges to managing quality assurance.
Based on its experience with licensing and regulating nuclear power plants, NRC observed in the mid-1980s that the Yucca Mountain project’s characteristics, such as a large and complicated program, increased the likelihood of major quality-related problems. Although the new path is intended to simplify design, licensing, and construction, the project remains a complicated program that seeks to both restore confidence in its scientific studies and pursue new design and engineering activities. As a result, the project has to manage quality assurance issues simultaneously in both areas. Moreover, the project involves a complicated organizational structure. The project will continue contracting work with BSC, USGS, and the Sandia National Laboratories, which involves working with organizations in various locations. In our 1988 report, we noted that the geographic distance between the various organizations may hamper OCRWM’s quality assurance communication and oversight objectives. The project also faces challenges related to ensuring management continuity, since DOE has experienced turnover in 9 of 17 key management positions since 2001. To ensure the right managers move the project forward to licensing, the project has a recruitment effort under way to replace key departing managers. In the past year, the project has lost key managers through the departures of the director of Project Management and Engineering, the director of License Application and Strategy, the director of Quality Assurance, and the contractor’s general manager. According to NRC on-site representatives in August and October 2005, management turnover is a concern for NRC because it would like to see continuity of qualified managers rather than a series of acting managers. Recruiting replacement managers can affect project continuity, and newly appointed acting managers may not take full charge of project tasks.
However, the Acting Director told us that the recruitment process is an opportunity to improve the project’s management and staff, but that recruiting the right people is challenging for various reasons—for example, government salaries are lower than those in industry, and employment clauses restrict subsequent employment in related industries. Finally, since new directors sometimes set new directions for the project, a critical issue for sustaining the current new path forward is continuity in OCRWM’s director position. The position was occupied by three individuals between late 1999 and early 2005. The last OCRWM director assumed the position in April 2002, started the Management Improvement Initiatives in 2002, and left the position in February 2005. The current Acting Director began serving in the position in the summer of 2005 and initiated the new path forward in October 2005. DOE is currently awaiting congressional confirmation of a nominee to fill the director position. However, the Acting Director told us he expects that the new path forward will be sustained because it has been endorsed by the Secretary of Energy. DOE’s Yucca Mountain project has been wrestling with quality assurance problems for a long time. Now, after more than 20 years of project work, DOE is again faced with substantial quality assurance and other challenges in submitting a fully defensible license application to NRC. Unless these challenges are effectively addressed, further delays in the project are likely. Furthermore, even as DOE faces new quality assurance challenges, it cannot be certain that it has resolved past problems, largely because the department has not been well served by its management tools—specifically, performance indicators and trend evaluation reports—which have not effectively identified and tracked progress on significant and recurring problems.
First, the management tools have provided limited coverage of the areas of concern identified in the Management Improvement Initiatives and thus have not enabled DOE managers to effectively monitor progress in these important areas. Second, the tools have often not reflected the full extent or significance of problems because their scope has been limited and not based on projectwide analysis. Third, the trend evaluation reports have, at times, not accurately characterized problems because reliable and complete data and appropriate performance benchmarks were not available at the time of analysis. Fourth, frequent changes in performance indicators and the way analysis is done have made it difficult to accurately identify trends over time. Fifth, the tools’ rating categories have sometimes been misleading as to the significance of problems because the ratings tend to be skewed by the fact that corrective actions were already being taken, without considering their effectiveness or considering the significance of the problem on its own terms. These shortcomings with the tools limit project managers’ ability to direct and oversee such a large and complex undertaking as constructing an underground repository for nuclear wastes. Further complicating DOE’s ability to manage the project are the vacancies in key managerial positions for the quality assurance program and elsewhere on the project. The tools become even more important for new managers who need to quickly understand project management issues. 
To improve the effectiveness of DOE’s efforts to monitor performance in key areas at the Yucca Mountain project, including quality assurance, we recommend that the Secretary of Energy direct the Director, Office of Civilian Radioactive Waste Management, to take the following five actions to strengthen the project’s management tools:

- Reassess the coverage that the management tools provide for the areas of concern identified in the Management Improvement Initiatives and ensure that performance in these important areas is effectively monitored, especially in light of the more recent condition reports and associated cause analyses, trend reports, and other reviews indicating continuing problems.

- Base future management tools, such as the trend evaluation reports, on projectwide analysis of problems, unless there are compelling reasons for a lesser scope.

- Establish quality guidelines for trend evaluation reports to ensure sound analysis when reporting problems for management’s attention. Such guidelines should address, among other things, having reliable and complete data and appropriate benchmarks.

- To the extent practicable, make analyses and indicators of performance consistent over time so that trends or progress can be accurately identified and, where changes to analyses or indicators are made for compelling reasons, provide a clear history of the changes and their impact on measuring progress.

- Focus the management tools’ rating categories on the significance of the monitored condition, not on a judgment of the need for management action.

We provided DOE and NRC with draft copies of this report for their review and comment. In oral comments, DOE agreed with our recommendations and provided technical and editorial comments that we have incorporated in the report, as appropriate. We also incorporated, as appropriate, NRC’s oral editorial comments, which primarily served to clarify its role.
As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to interested congressional committees and Members of Congress, the Secretary of Energy, and the Chairman of the Nuclear Regulatory Commission. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. The objectives of this review were to determine (1) the history of the Yucca Mountain project’s quality assurance problems since the project’s start in the 1980s, (2) the Department of Energy’s (DOE) tracking of quality problems and progress implementing quality assurance requirements since our April 2004 report, and (3) challenges that DOE faces as it continues to address quality assurance issues within the project. In addition, we were asked to provide information about implementation of the project’s Employee Concerns Program and the types of concerns raised in recent years through the program. To determine the history of the project’s quality assurance problems, we reviewed our prior reports and those of DOE’s Office of the Inspector General concerning the Yucca Mountain project. We also reviewed internal DOE evaluations and audit reports written about the quality assurance program and Nuclear Regulatory Commission (NRC) reports and NRC- prepared summaries of NRC and DOE quarterly management meetings, technical exchange meetings, and quality assurance meetings dating to early 2004. 
In addition, we reviewed letters and communications between DOE and NRC regarding quality assurance from the NRC Web archives from the late 1980s. Furthermore, we reviewed plans for the Regulatory Integration Team (RIT) and subsequent correspondence between Bechtel/SAIC Company, LLC (BSC), DOE’s management contractor for the Yucca Mountain project, and DOE. Moreover, we discussed quality assurance issues with officials of DOE’s Office of Civilian Radioactive Waste Management (OCRWM), including the Acting Director and Deputy Director, at DOE headquarters in Washington, D.C., and at its field office in Las Vegas. In addition, we interviewed representatives of Navarro Quality Services, a DOE subcontractor, as well as BSC, and NRC officials in the agency’s field office in Las Vegas, Nevada, and at its headquarters in Rockville, Maryland. To determine DOE’s tracking of quality problems and progress implementing quality assurance requirements since our April 2004 report, we interviewed OCRWM, BSC, and NRC officials about the status of these efforts since the issuance of our prior report. We also reviewed DOE’s Management Improvement Initiatives (2002), DOE’s Management Improvement Initiatives Transition Approach (2003), and our 2004 report to understand the history of the improvement efforts. To understand DOE’s management tools to monitor problems and progress, we reviewed the available performance indicators panels from April 2004 through August 2005, when it was last produced; the documentation on the individual indicators applied to August 2005 data; and the quarterly trend reports from the fourth quarter of fiscal year 2003 through the fourth quarter of fiscal year 2005. We also reviewed information from condition reports and examined documentation on DOE’s Quality Assurance Requirements and Description (issued in August 2004), BSC’s Trend Evaluation and Reporting, and DOE’s Procedure: Condition Reporting and Resolution (issued in November 2005). 
To determine challenges that DOE faces as it continues to address quality assurance issues within the project, we reviewed information from condition reports, NRC on-site representative reports, DOE Inspector General reports, and an OCRWM Office of Concerns Program’s investigative report on past quality assurance problems and DOE’s efforts to address them. We obtained information on turnover in key management positions at DOE and BSC since 2000. In addition, we discussed with DOE and NRC officials DOE’s difficulties in addressing recurring quality assurance problems and the quality assurance implications of the Yucca Mountain project moving from the site characterization phase to design and licensing. Also, to better understand issues and challenges, we attended quarterly meetings held between DOE and NRC in Rockville in September 2005 and Las Vegas in December 2005. To identify recent employee concerns related to quality assurance, such as falsification of records and a safety-conscious work environment, as well as to identify the actions taken to address those concerns, we reviewed all concerns received by the OCRWM and BSC Employee Concerns Programs from January through November 2005. For the OCRWM program, we reviewed all employee concerns files to identify concerns related to quality assurance. For the BSC program, we first read summary descriptions of each concerns file, and reviewed the concerns files for only those we identified as related to quality assurance. We then conducted a content analysis of all concerns files that we reviewed. Next, our three team members reached consensus about the correct classification of a concern as a quality assurance problem, such as potential falsification of records. Finally, through a second review of concerns files, we verified our recorded information for those concerns that seemed to be important illustrations of problems. 
In addition, we spot-checked a sample of OCRWM and BSC concerns received in 2005 to verify the accuracy of their placement in various concerns categories. We found that the concerns were generally categorized accurately. We performed our work from July 2005 through January 2006 in accordance with generally accepted government auditing standards. NRC expects licensees to establish a safety-conscious work environment—that is, one in which (1) employees are encouraged to raise concerns either to their own management or to NRC without fear of retaliation and (2) employees’ concerns are resolved in a timely and appropriate manner according to their importance. NRC encourages but does not require licensees to establish employee concerns programs to help achieve such a work environment, and both DOE and BSC have established such programs. DOE’s Employee Concerns Program is currently operated under the requirements of DOE Order 442.1A, but the department, in anticipation of becoming a licensee, is in the process of establishing the program to meet NRC expectations. DOE and contractor employees at the Yucca Mountain project may raise concerns about quality, safety, or other work environment issues—such as harassment, intimidation, retaliation, and discrimination—through various means. Employees are encouraged to resolve concerns at the lowest possible level in the organization, in the following order:

- Use normal supervisory channels, such as by raising an issue to a manager for resolution.
- Initiate a condition report through the Corrective Action Program—a process in which any employee can formally identify a problem on the project, such as with policies, procedures, or the work environment, and have the issue investigated and, if necessary, fixed through corrective actions.
- Submit a concern via e-mail, telephone, or in person to one of the project’s two Employee Concerns Programs—a BSC program for BSC employees and other subcontractors and another run by DOE for either DOE or BSC employees.
- Contact NRC directly.

The DOE and BSC concerns programs are intended to supplement rather than replace the resolution of problems through managers or the Corrective Action Program. DOE and BSC Employee Concerns Programs have each established a communication network to allow employees to register concerns. These networks include brochures and regular newsletters on the programs and numerous links to the program on the project’s intranet, where employees can obtain concerns forms. Both the DOE and BSC concerns programs of the Yucca Mountain project have four main steps:

1. Employees notify the concerns program staff about issues that they feel should be corrected, such as safety or health issues; harassment, intimidation, retaliation, or discrimination; concerns raised through the Corrective Action Program; and quality assurance problems.
2. The concerns program staff document and handle the concern in accordance with the requirements of DOE Order 442.1A.
3. The concerns program notifies the employees of the results of the investigation and notifies management of any deficiencies.
4. Project management develops corrective actions for deficiencies, and the program validates that the concerns have been effectively addressed by the actions.

Under DOE Order 442.1A, concerns may be addressed through an investigation by the concerns program staff, an independent investigation, a referral, a transfer, or a dismissal of the concern. Employees can request or waive confidentiality. If a concern is submitted anonymously, interpreting the main issues and problems is left up to the concerns program staff, and action on the concern may be limited if the submitted information does not clearly or sufficiently define the concern.
The concerns program may conduct its own investigation of the concern. Alternatively, it may refer the concern to another project organization for investigation or resolution. After the results of the investigation or resolution are reported to the concerns program within a specified period, the concerns program accepts the results or requires additional actions. In other cases, concerns may be transferred to another organization with the appropriate subject matter responsibility or expertise, such as the Office of Human Relations, Office of General Counsel, or Office of the Inspector General. After investigating a concern, the concerns program determines whether the concern is substantiated, partially substantiated, unsubstantiated, or indeterminate. If a concern is substantiated or partially substantiated, the investigation results are presented to the responsible senior managers. A concern is considered indeterminate when evidence is insufficient to substantiate a concern or allow for a conclusion to be drawn. Some concerns can be resolved through a noninvestigative resolution, a method to address concerns promptly when minimal effort is required for resolution. Some resolutions involve the development of management corrective action plans that are tracked until they are closed. In addition, for deficiencies that identify systemic problems, the concerns programs may file a condition report through the Corrective Action Program. Moreover, DOE and contractor employees are required to report certain conditions or alleged conditions to DOE’s Office of the Inspector General under DOE Order 221.1, which covers waste, fraud, and abuse. The concerns program handles some employee concerns in this way. From January through November 2005, DOE’s concerns program opened 139 employee concerns for investigation, and the BSC concerns program opened 112 concerns for investigation.
DOE’s concerns program places concerns into 14 categories, while the BSC program uses 20 categories. For both DOE and BSC, the category receiving by far the most concerns for calendar year 2005 was management: “management/mismanagement” for DOE and “management practices” for BSC. According to DOE, management concerns generally involved conditions related to management behavior, policy practice, budget allocation, or use of resources. According to the manager of BSC’s program, about half of the concerns in the management practices category involve hiring and human relations issues and the other half involve organizational policies and other issues. The “quality” category accounts for a relatively small portion of total concerns—18 percent of concerns for the DOE program and 4 percent for the BSC program. Tables 3 and 4 show the concerns received by the DOE and BSC programs for January through November 2005. The Employee Concerns Programs, which are designed to provide an alternative to raising issues through the Corrective Action Program and issuing condition reports, have been playing an active and sometimes key role in identifying and addressing quality assurance problems, as can be seen in the following examples: As part of an effort to identify e-mails relevant to the licensing process and that therefore should be included in the Licensing Support Network, BSC employees in late 2004 discovered e-mails suggesting potential falsification of technical records. The e-mails were submitted to the Employee Concerns Program in March 2005 and were eventually reported to the DOE Inspector General for investigation. The quality assurance issues raised by the e-mails have resulted in a substantial effort by DOE to restore confidence in the quality of technical documents that will support its license application to construct the repository. 
In mid-2005, the DOE concerns program referred to the project’s senior management an employee’s allegation that the project’s schedule was taking priority over quality in the review of technical documents. In this instance, the Office of Concerns Program Manager negotiated with senior management to address the time and resource needs for ensuring quality assurance, rather than simply communicating to the organization that quality should take priority over the schedule. As the result of an employee’s concerns referred to DOE by NRC in mid-2005, the Employee Concerns Program initiated an extensive investigation of issues related to requirements management. That investigation substantiated the employee’s concerns and led to the issuance of 14 condition reports for problem resolution. Signifying the importance of this issue, DOE discussed problems with requirements management with NRC at their quarterly meeting in December 2005. The Employee Concerns Programs’ role in identifying and addressing quality assurance and other issues is dependent upon employees’ willingness to submit concerns, but the employees’ willingness has sometimes been in doubt. A late 2004 DOE survey of project employees indicated, for example, that less than two-thirds of employees were confident that submitted concerns would be thoroughly investigated and appropriately resolved. DOE recognizes the need to improve employee trust and willingness to use the concerns program, and both the DOE and BSC programs are engaged in outreach efforts. However, employees’ willingness to submit concerns may be affected by factors outside the programs’ control. According to a DOE manager, the project’s recent and pending workforce reductions may account for a decreasing number of concerns submitted to the DOE program in late 2005. Based on OCRWM Employee Concerns Program data, the program averaged about 13 concerns a month from January through November 2005.
However, the number of monthly concerns dropped to 5 in October and 3 in November 2005. During our review of concerns opened for investigation from January 2004 through November 2005, we did not identify any concerns alleging problems similar to the falsification of technical records suggested by the USGS e-mails. Although we found records of an early 2004 concern about an instance of inappropriate management of a technical document, this instance was resolved and did not appear to be an intentional or systematic effort to falsify records. The manager of the BSC program told us of a concern raised about another set of e-mails, but this concern was not about record falsification. The manager of the DOE program told us that she had not seen any reportable allegations of falsification of technical records since she took her position in July 2004. In addition to the contact named above, Raymond Smith (Assistant Director), Casey Brown, John Delicath, James Espinoza, and Terry Hanford made key contributions to this report.
The Department of Energy (DOE) is working to obtain a license from the Nuclear Regulatory Commission (NRC) to construct a nuclear waste repository at Yucca Mountain in Nevada. The project, which began in the 1980s, has been beset by delays. In a 2004 report, GAO raised concerns that persistent quality assurance problems could further delay the project. Then, in 2005, DOE announced the discovery of employee e-mails suggesting quality assurance problems, including possible falsification of records. Quality assurance, which establishes requirements for work to be performed under controlled conditions that ensure quality, is critical to making sure the project meets standards for protecting public health and the environment. GAO was asked to examine (1) the history of the project's quality assurance problems, (2) DOE's tracking of these problems and efforts to address them since GAO's 2004 report, and (3) challenges facing DOE as it continues to address quality assurance issues within the project. DOE has had a long history of quality assurance problems at the Yucca Mountain project. In the 1980s and 1990s, DOE had problems assuring NRC that it had developed adequate plans and procedures related to quality assurance. More recently, as it prepares to submit a license application for the repository to NRC, DOE has been relying on costly and time-consuming rework to resolve lingering quality assurance problems uncovered during audits and after-the-fact evaluations. DOE announced, in 2004, that it was making a commitment to continuous quality assurance improvement and that its efforts would be tracked by performance indicators that would enable it to assess progress and direct management attention as needed. However, GAO found that the project's performance indicators and other key management tools were not effective for this purpose. For example, the management tools did not target existing areas of concern and did not track progress in addressing them. 
The tools also had weaknesses in detecting and highlighting significant problems for management attention. DOE continues to face quality assurance and other challenges. First, DOE is engaged in extensive efforts to restore confidence in scientific documents because of the quality assurance problems suggested in the discovered e-mails between project employees, and it has about 14 million more project e-mails to review. Second, DOE faces quality assurance challenges in resolving design control problems associated with its requirements management process--the process for ensuring that high-level plans and regulatory requirements are incorporated into specific engineering details. Problems with the process led to the December 2005 suspension of certain project work. Third, DOE continues to be challenged to manage a complex program and organization. Significant personnel and project changes initiated in October 2005 create the potential for confusion over roles and responsibilities--a situation DOE found to contribute to quality assurance problems during an earlier transition.
Students’ pursuit of a college degree may include transferring from one school to another. Students typically transfer from a 2-year school to a 4-year school, a direction known as a vertical transfer. Students can also transfer from a 4-year school to a 2-year school, known as a reverse transfer, or laterally transfer between similar schools (e.g., 2-year to 2-year or 4-year to 4-year). Students can transfer for different reasons, depending on their goals and the type of transfer involved. For example, if a student is seeking to obtain a degree from a relatively expensive school, transferring credits from a less expensive school could help them save on tuition costs. Students may transfer vertically to facilitate completion of a bachelor’s degree. Alternatively, students may initiate a reverse transfer in order to complete an associate’s degree. Further, students who transfer laterally may do so to find a better institutional fit or a degree program that more closely aligns with their goals. Colleges also vary with respect to governance structure, length of degree programs, and other characteristics. Public schools are generally operated by publicly elected or appointed officials. Private schools are operated by individuals or agencies other than governmental entities. Further, private schools can be nonprofit or for-profit entities. Private nonprofit schools are traditionally operated by independent or religious organizations and earnings do not benefit any shareholder or individual, whereas private for-profit schools are owned and operated by private organizations and earnings can benefit shareholders or individuals. Schools are also classified by whether they offer degree programs that are 4 years or 2 years in duration. In this report, we refer to six school types: 2-year public, 2-year private nonprofit, 2-year private for-profit, 4-year public, 4-year private nonprofit, and 4-year private for-profit.
A student who wants to transfer credits generally must provide the destination school with a transcript of previously earned credit. Destination schools generally have discretion in determining whether to accept these credits and use various criteria to evaluate them. Criteria can include, for example, a minimum grade requirement, the quality of the student’s coursework, the level and content of the coursework compared to similar courses at the destination school, and the applicability of a course to the degree or programs at the destination school. Many schools enter into voluntary transfer agreements or partnerships with each other—broadly referred to as articulation agreements—which specify how transferred course credits meet program or degree requirements among those schools. Additionally, states can establish statewide articulation agreements as well as credit transfer policies that are generally applicable to schools within the state. Our prior work found that states had enacted a variety of legislation and implemented statewide initiatives, primarily covering public schools, that established transfer agreements and common curricula to facilitate credit transfer. For example, in 2005, we reported that some states identified a block of general education courses for which credits were fully transferable across public schools within that state. We previously reported that, at that time, 39 states had legislation pertaining to transfer of credit between colleges. The Higher Education Opportunity Act requires schools participating in any program authorized under Title IV, which contains various student financial assistance programs, to publicly disclose their credit transfer policies. 
Specifically, the Act requires schools to publicly disclose, in a readable and comprehensible manner, a statement of their credit transfer policies that includes, at a minimum: (1) any established criteria the school uses regarding the transfer of credit earned at another school and (2) a list of schools with which the school has established an articulation agreement. The cost of attending college generally includes tuition, room and board, books and school supplies, fees, travel costs, and other miscellaneous expenses. As costs increase, college may become less affordable for many students and their families. To help students cover these costs, according to Education, in fiscal year 2016, $125 billion was available to students primarily through the Federal Pell Grant program (Pell Grants) and the William D. Ford Federal Direct Loan program (Federal Direct Loans). Pell Grants, which do not have to be repaid, are awarded to undergraduate students based on financial need. Federal Direct Loans, which may be either subsidized or unsubsidized by the government and generally must be repaid, are available up to the cost of attendance as determined by a student’s school and in accordance with federal limits. Pell Grants and Federal Direct Loans also have eligibility limits based on lifetime use or program length (see table 1). An estimated 35 percent of first-time students transferred schools over a 6-year period, according to Education’s most recent BPS data on students who started in academic year 2003-04 (2004 cohort). Transfer patterns are similar for the cohort of students in the ongoing BPS study who started in academic year 2011-12 (2012 cohort). In addition, the transfer rate among students who originally attended a private for-profit school was lower than among students who attended public or private nonprofit schools (see table 2). Of the students who transferred, an estimated 62 percent of them transferred between public schools.
The most common transfer path was from a 2-year public to 4-year public school (see fig. 1). Transfer patterns for the 2012 cohort also show that students transferred between public schools at a higher rate than among other types of schools, based on mid-point data. Additionally, of the students who transferred, a majority (about 75 percent) originally attended a public school. Fewer transfer students originally attended a private nonprofit or private for-profit school, an estimated 19 and 7 percent, respectively.

One Student’s Perspective: One student wished that his school took greater care to ensure students know exactly which classes they should take to avoid losing credits and incurring additional costs as a result of transferring schools. He is worried that he might have to pay twice for classes he has already taken.

Successful transfer of course credits can be hampered when the two schools involved have not established an agreement that specifies how credit transfers will occur. As we previously reported, many schools enter into transfer agreements or partnerships, often referred to as articulation agreements, which specify which course credits meet program or degree requirements at one or more schools (see app. II for an example of a transfer guide based on an articulation agreement). Based on our review of school websites, articulation agreements were more commonly listed among public schools, but private nonprofit and for-profit schools also establish articulation agreements. This level of clarity helps students better plan their college path by helping them understand how specific earned credits will transfer. Stakeholders we interviewed from 16 of 25 higher education organizations and schools said it is more difficult to transfer credits when there is no articulation agreement between schools or state policy outlining how credits will transfer.
It is often easier for students to transfer credits when transferring to a school in the same state, especially in states that have policies outlining how credits should transfer, according to stakeholders we interviewed from 7 of 25 higher education organizations and schools. For example, according to the National Conference of State Legislatures, Florida has a statewide articulation agreement which generally guarantees that students who earn an associate’s degree from a Florida community college can transfer at least 60 credits to one of the 4-year public schools in the state. However, nearly one in five students who started at a 2-year public school and one in four who started at a 4-year public school transfer to a school in a different state, according to a recent National Student Clearinghouse report. One program designed to facilitate successful transfers for students across state lines is the Western Interstate Commission for Higher Education’s Interstate Passport program. According to its description, the program accomplishes this by focusing on common learning outcomes across schools in different states rather than determining how individual courses compare to each other. Specifically, faculty and others in the field, such as registrars and advisors, in schools across multiple states agreed upon student learning outcomes and proficiency criteria in different skill areas for general education courses intended to be completed during the first half of a bachelor’s degree program. Students who achieve these learning outcomes are then able to transfer their lower division general education credits to any of the other schools participating in the program. Currently there are 21 schools that are members of the Interstate Passport Network. The type of school is also a factor in successfully transferring credits between schools, according to stakeholders we interviewed from 18 of 25 higher education organizations and schools. 
For example, according to stakeholders, transferring credits from private for-profit schools can be more difficult than transferring credits from other types of schools. Private for-profit schools are typically nationally accredited whereas public and private nonprofit schools are historically regionally accredited, and we previously reported that regionally accredited schools usually prefer to accept credits only from other regionally accredited schools. Stakeholders from several higher education organizations and schools said national accreditation is seen as less stringent than regional accreditation, though Education recognizes and applies the same standards to both types of accreditors. Additionally, according to one stakeholder, nationally accredited schools tend to offer more technical or vocational degrees where the coursework may be difficult to transfer to other schools. According to our analysis of BPS data, a relatively small percentage of students who originally attended private for-profit schools transfer to another school (16 percent). Students transferring from a 2-year school, such as a community college, can face similar challenges transferring credits. For example, credits earned at 2-year schools may in some cases be seen by 4-year schools as less academically rigorous or more technical in nature than credits earned at the 4-year school, according to stakeholders we interviewed from 12 of 25 higher education organizations and schools. Students can also face challenges transferring credits between public and private schools. According to our prior work, statewide transfer policies generally applied only to public schools. Therefore, when transferring within the public school system, students potentially lose fewer credits compared to transferring between public and private schools. 
One Student’s Perspective: One student said that he met with an advisor prior to transferring so that he could plan his coursework appropriately; however, he did not learn about some of the degree requirements until after he transferred. As a result, he had to take an additional semester of classes to fulfill lower-level requirements.

For example, these stakeholders said that advisors should provide students with information to help them transition to the destination school and adjust to a new campus, in addition to helping them understand how their credits will transfer. Some schools may be under-resourced when it comes to providing quality advising to students, according to stakeholders we interviewed from 13 of 25 higher education organizations and schools. According to our analysis of BPS data, students most commonly transferred from public 2-year—such as community colleges—to public 4-year schools. Some of the stakeholders we spoke with specifically mentioned resource challenges faced by community colleges. One stakeholder at a community college said that it can be challenging for schools to identify which students intend to transfer in order to connect them with available advisors to help plan their path. According to stakeholders from one school, 4-year schools can also face resource challenges. Specifically, they identified multiple efforts to help transfer students, but said that their registrar and admissions staff do not have the capacity to meet with every student individually, and that students could go without advising as a result. Stakeholders from some of the higher education organizations and schools we interviewed said that the timing of advising and transfer information is also important. Specifically, they said that much of the transfer process is influenced by student decisions made early during their college career.
For example, without early information about the transfer process, students may change majors or sign up for technical courses at their origin school without being aware of the implications such choices have on the transferability of their credits. Stakeholders also said that it is important for students to meet with advisors from both schools. Ultimately, the destination school generally has the final say on how credits are evaluated and accepted, so it is important for students to confirm with the destination school that the information they receive from their origin school is accurate, according to one stakeholder. The timing of when the destination school completes an official credit evaluation can also pose challenges for students, according to stakeholders from some higher education organizations and schools. Students may not know prior to enrollment at the destination school whether their credits will transfer because some schools do not complete an official credit evaluation until after the enrollment deadline. Even if a student’s credits transfer, they may not apply toward fulfilling degree requirements for their intended major, according to stakeholders we interviewed from 12 of 25 higher education organizations and schools. Some stakeholders saw this as a more important issue than the ability to transfer credits. Destination schools may determine that the courses the student wants to transfer are not equivalent to the requirements of the major at their school or may prefer their own curriculum, according to some stakeholders. For example, according to one stakeholder, a biology course may count as a general science elective but not count toward the science requirement for a degree in biology. In these cases, a student will likely have to take additional courses at the destination school, which could potentially delay graduation. 
One study that conducted student focus groups at two Indiana higher education systems similarly found that some students experienced challenges with credits applying toward degree requirements. Students lost an estimated 43 percent of college credits when they transferred, or an estimated 13 credits, on average, according to our analysis of BPS data on students who started in academic year 2003-04 and were tracked over a 6-year period. Typically, semester courses are awarded three credits each, so the average credits lost during transfer (13) is equivalent to about four courses, which is almost one semester of full-time enrollment for students taking 15 credits per semester. Credit loss among the 2004 cohort varied greatly by the types of schools involved in a transfer (see fig. 2). For example, students who transferred between public schools—which accounted for almost two-thirds of transfer students—lost 37 percent of their credits, on average. In comparison, students who transferred from private for-profit schools to public schools—which accounted for 4 percent of students who transferred—lost an estimated 94 percent of their credits, on average. (See table 3 in app. III for more information). Our analyses provide descriptive information on credit loss and do not control for certain factors that may be related to the ability to transfer credits, including whether students informed the school of possible credits eligible for transfer based on previous attendance at another school. Credit loss also varied by transfer direction. For example, based on our analysis of the 2004 cohort, students who transferred vertically from 2-year to 4-year schools lost an estimated average of 26 percent of their credits, while those who transferred laterally between 2-year schools lost an estimated average of 74 percent of their credits. 
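The credit-to-course conversion described above can be expressed as a quick back-of-the-envelope calculation. This is a minimal illustrative sketch (the variable names are ours); the inputs are the figures stated in the text: 13 credits lost on average, 3 credits per typical semester course, and 15 credits per full-time semester.

```python
# Figures from the BPS analysis described in the text
avg_credits_lost = 13      # average credits lost during transfer
credits_per_course = 3     # typical semester course
full_time_semester = 15    # credits in one full-time semester

# 13 / 3 -> "about four courses"
courses_lost = avg_credits_lost / credits_per_course

# 13 / 15 -> "almost one semester" of full-time enrollment
semesters_lost = avg_credits_lost / full_time_semester

print(f"{courses_lost:.1f} courses, {semesters_lost:.2f} semesters")
```

Running this reproduces the report's characterization: roughly 4.3 courses, or about 0.87 of a full-time semester.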
Vertical transfers from 2-year to 4-year schools accounted for 40 percent of transfer students, whereas lateral transfers between 2-year schools accounted for 17 percent of transfer students (see table 4 in app. III for more information). While academic performance can be an important determinant in the transferability of credits, we did not assess how such factors affected the extent of credit loss. Our analysis also showed a wide range of credit loss when taking into account both the type of school and direction of transfer, or transfer path (see fig. 3). For example, students transferring from 2-year public to 4-year public schools, which was the most common transfer path and accounted for 26 percent of transfer students, lost an estimated average of 22 percent of their credits. This was a lower rate of credit loss than the overall average for all transfer students. In addition, students transferring between 2-year public schools—another common transfer path that accounted for 13 percent of transfer students—lost an estimated average of 69 percent of their credits. Students who took some of the less frequent transfer paths lost a relatively higher percentage of their credits. For example, students who transferred from 2-year private for-profit to 2-year public schools lost an estimated average of 97 percent of their credits. Similarly, students who transferred from 2-year public to 2-year private for-profit schools lost an estimated average of 95 percent of their credits. Each of these transfer paths accounted for about 1 percent of transfer students. (See table 5 in app. III for more information). Stakeholders from about half of the higher education organizations and schools we interviewed said some students may seek to save on tuition costs by starting at a less expensive school and then transferring to a more expensive school to complete a degree. 
However, stakeholders from about the same number of higher education organizations and schools told us that some students face additional tuition costs due to repeated coursework or additional time to degree as a result of lost credits. See figure 4 for examples of the transfer process and its potential outcomes. If a student is seeking to obtain a bachelor’s degree from a relatively expensive school, transferring could help the student save on tuition costs. Specifically, if a student is able to successfully transfer all credits from the less expensive school, and those credits count toward his or her degree program, then the student saves on tuition costs by having earned a portion of the credits at the lower-cost origin school. If a student loses some credits during the transfer, then the student’s overall tuition costs depend on the combined effect of the credit loss and the difference in tuition rates between the two schools. However, with any level of credit loss, the student will likely need to stay in school longer to complete degree requirements and pay tuition for repeated coursework. If a student loses all of his or her transfer credits, then the cost of completing a degree is generally higher because the student not only incurs tuition costs from the origin school but must also retake credits required for a degree at the more expensive destination school. This also extends the time to complete a degree. The direction of transfer also affects college affordability. Stakeholders we interviewed from 12 of 25 higher education organizations and schools said that students transferring vertically may achieve savings because 2-year schools are relatively low cost. 
Based on our analysis of 2013-2014 IPEDS tuition data, average net tuition per year—which is the cost of attendance minus financial aid and non-tuition costs, such as room and board—varied by school type, ranging from about $1,900 for a 2-year public school to about $13,800 for a 4-year private nonprofit school (see table 6 in app. III for more tuition data). However, net tuition may underestimate costs for transfer students because, according to Education, schools often do not offer the same amount of institutional aid to transfer students compared to first-time, non-transfer students. Transferring laterally may be more difficult. In comparison to vertical transfers, students who transferred laterally experienced higher rates of credit loss, according to our analysis of BPS data for the 2004 cohort. Stakeholders we spoke with from one school said that students who are transferring between 4-year schools may not have been planning to transfer, and it is more difficult to advise students about which credits will transfer laterally. One stakeholder from another school said that if students transferring laterally switch their degree program, few of their courses will transfer. Students who lose more credits during transfer would typically incur additional tuition costs by paying for repeated or additional coursework. To illustrate some potential financial implications for students, we created examples of several different transfer scenarios (see fig. 5 and fig. 10 in app. III). Credit loss data do not reflect why credits were not accepted, though there are a variety of reasons why credits may not transfer successfully. In some cases, the credits students attempt to transfer may not be applicable or comparable to the coursework at the destination school. For example, vocational or remedial coursework from a 2-year school may not be transferable to a 4-year degree program. 
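The tuition trade-off described above—savings from starting at a cheaper origin school, offset by the cost of retaking lost credits at the destination school—can be sketched with a simple cost model. This is an illustrative sketch, not GAO's actual scenario methodology: the function, the 120-credit degree, and the assumption that tuition is charged per credit at a rate of 30 credits per year are ours; only the $1,900 and $13,800 annual net tuition figures come from the text.

```python
def degree_tuition_cost(origin_credits, origin_rate, dest_rate,
                        loss_fraction, degree_credits=120):
    """Total tuition for a degree when transferring partway through.

    Simplifying assumption: tuition is charged per credit, and any
    credits lost in transfer must be retaken at the destination rate.
    """
    credits_kept = origin_credits * (1 - loss_fraction)
    credits_needed_at_dest = degree_credits - credits_kept
    return origin_credits * origin_rate + credits_needed_at_dest * dest_rate

# Per-credit rates derived from the annual net tuition figures in the
# text, assuming 30 credits per year (an illustrative assumption)
public_2yr_rate = 1900 / 30     # 2-year public school
private_4yr_rate = 13800 / 30   # 4-year private nonprofit school

# Student completes 60 credits at the 2-year school, then transfers
no_loss = degree_tuition_cost(60, public_2yr_rate, private_4yr_rate, 0.0)
avg_loss = degree_tuition_cost(60, public_2yr_rate, private_4yr_rate, 0.43)
all_at_dest = 120 * private_4yr_rate  # never attends the cheaper school

print(f"${no_loss:,.0f} with no credit loss")
print(f"${avg_loss:,.0f} with 43% credit loss")
print(f"${all_at_dest:,.0f} without transferring")
```

Under these assumptions, the transfer still saves money relative to taking every credit at the expensive school, but the 43 percent credit loss erodes a substantial share of the savings—consistent with the report's point that the outcome depends on the combined effect of credit loss and the tuition-rate difference.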
Further, factors that are not entirely within a school’s control, such as students’ decisions and academic performance, also affect credit transfer and the time it takes to complete a degree. For example, students may not ask to have their credits evaluated or they may decide to change majors. Our analysis does not reflect certain student decisions or characteristics, which can also factor into the extent of credit loss. Further, aside from tuition, other factors can affect a student’s costs, such as changes in cost of living or forgone earnings while attending school, according to stakeholders from several higher education organizations and schools. Almost half of transfer students received Pell Grants and almost two-thirds received Federal Direct Loans, according to our analysis of BPS data collected between 2004 and 2009. According to mid-point data from the more recent BPS cohort, many transfer students who started school in academic year 2011-12 also received Pell Grants (55 percent) and Federal Direct Loans (62 percent) in their first 3 years. Access to such aid is affected by the length of time needed to complete a degree. The Pell Grant program imposes a lifetime limit of 12 semesters (6 years) of eligibility. Direct Subsidized Loans, which are loans in which the government pays part of the interest, are limited to a maximum timeframe of 150 percent of the published length of a program at a school (e.g., 150 percent of a 120-credit, 4-year degree program would be 6 years). According to our analysis of both BPS cohorts during their first 3 years in school, an estimated 48 percent of students in the 2004 cohort and an estimated 57 percent of students in the 2012 cohort received Direct Subsidized Loans. According to stakeholders we interviewed from higher education organizations and schools, transfer students may exhaust available aid before they complete their degree. 
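The two eligibility clocks described above can be sketched as simple functions. This is an illustrative rendering of the rules as stated in the text (the function names are ours): Pell Grants carry a 12-semester lifetime limit, and Direct Subsidized Loans are capped at 150 percent of the published program length.

```python
def pell_grant_limit_semesters():
    """Pell Grant lifetime limit: 12 semesters (6 years)."""
    return 12

def subsidized_loan_limit_years(published_program_years):
    """Direct Subsidized Loan eligibility: 150 percent of the
    published length of the program."""
    return 1.5 * published_program_years

# A 4-year bachelor's program allows up to 6 years of subsidized
# loans; a 2-year associate's program allows up to 3 years.
print(subsidized_loan_limit_years(4))  # 6.0
print(subsidized_loan_limit_years(2))  # 3.0
```

A transfer student who must retake lost credits draws down these fixed limits faster, which is how credit loss can translate into exhausted aid before a degree is complete.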
Transfer students who lose credits must pay for and spend additional time retaking credits needed to graduate, which may cause them to exceed the time frames for financial aid eligibility. BPS data do not indicate whether students exhausted their financial aid eligibility before obtaining a degree. However, an estimated 40 percent of transfer students who started college in academic year 2003-04 did not obtain any type of degree within a 6-year time period. Further, while available data do not provide enough information to adequately identify a student’s intention to obtain a specific type of degree (e.g., 2-year associate’s degree versus 4-year bachelor’s degree), about a third of students who chose to transfer from a 2-year to a 4-year school did not ultimately obtain a bachelor’s or other type of degree within the 6-year time period (see table 7 in app. III for more information). In cases where students lose access to aid, they may be financially unprepared or unable to earn their degree. Additionally, stakeholders from some higher education organizations and schools told us that schools may not offer as much scholarship funding for transfer students as they do for new, first-time students. Credits lost in a transfer also can result in additional costs for the federal government in providing student aid. The government’s costs may increase if transfer students who receive financial aid take longer to complete a degree as a result of retaking lost credits. Education’s data do not identify whether particular funding sources, such as Pell Grants or other financial aid, are used to pay for credits taken or to pay for other costs. Therefore, we used an example to show how lost credits can result in potential additional costs in student aid to the federal government (see fig. 6). Actual costs to the federal government would vary. 
Some students may transfer as a result of the closure of their current school, and they face additional challenges and financial aid considerations. In recent years, multiple school closures have affected large numbers of students and resulted in costs to the federal government. School closures pose financial risks to students who wish to continue their education because of the potential difficulty in transferring credits from a closed school. When a school closes, students must decide whether to complete their degree at another school—which can include transferring credits—or stop pursuit of that degree and, according to Education policy, apply for a discharge of their federal student loans. Education policy states that students are eligible to discharge (i.e., not pay) 100 percent of their federal student loans if they (a) did not complete their program because of a closure, and (b) did not continue in a comparable program at another school. Education officials said some students who have requested discharges of their student loans after their private for-profit school closed said they were unable to transfer their credits. For students who transfer to a comparable program at another school, their existing Direct Subsidized Loans continue to count in calculating eligibility (150 percent of published program length). Students with Pell Grants who are unable to complete their program at the closed school can restore the portion of their lifetime eligibility for grants used at the closed school, according to a December 2016 Education announcement. Closures can pose a financial risk for the government and taxpayers to the extent that federal student loans are forgiven and students reset their Pell Grant eligibility. Under federal law, schools participating in any Title IV program are required to publicly disclose the transfer of credit policies established by the school, including a list of schools with which they have articulation agreements. 
According to Education officials, schools must disclose credit transfer policies on their website, but the list of schools can be disclosed through a website or other appropriate publications or mailings. Based on our review of websites for a nationally representative stratified random sample of 214 schools, an estimated 99 percent contained the school’s credit transfer policies. Of those websites with credit transfer information, an estimated 68 percent listed the names of partner schools with which they have articulation agreements. An estimated 29 percent of websites did not provide such a list, while an estimated 4 percent explicitly stated that the school did not have any articulation agreements. In addition, the prevalence of websites listing partner schools varied by school type. While most (89 percent) public school websites with credit transfer information listed partner schools with which they have articulation agreements, fewer private nonprofit and for-profit school websites had such a list (see fig. 7). In some cases, school websites included information on statewide articulation policies, such as the Illinois Articulation Initiative, where, according to its website, over 100 participating schools in the state have agreed to accept a package of general education courses in lieu of their own general education classes. Students have access to varying levels of detail about credits covered by articulation agreements. Of the websites that listed partner schools, an estimated 63 percent provided the agreement’s provisions, and an estimated 21 percent provided a link to partner school websites. For the schools that did not provide a list of partner schools on their website or explicitly note that they had no such agreements, it was difficult to determine, without further follow-up, whether the lack of information indicated that the school did not have such agreements or that the school was not providing the list of partner schools. 
Based on targeted follow-up with officials at 10 schools, we found that some did not have articulation agreements while others had articulation agreements but their partner schools were not listed on the website. Specifically, officials at 5 of the schools said that their school did not have, or they were unaware of, established articulation agreements. Officials at 3 schools said that they had articulation agreements and they were listed in publications available onsite at the school or by contacting school staff. (We did not verify the physical presence of publication copies at school locations or the information included). Officials at 2 schools said that they were currently reviewing articulation agreements and planning to update the school website. Although schools participating in any Title IV program are required to publicly disclose a list of the schools with which they have articulation agreements, they do not have to disclose this information specifically on their websites but may choose to do so in another appropriate publication or mailing. In addition, Education officials stated that it is unclear whether the department has the authority to require schools to post the list of articulation-agreement schools online because federal law does not specify the means of disclosure. The purpose of this disclosure requirement is for schools to share transfer information with students, and selecting an appropriate means of disseminating this information enhances the effectiveness of such communication. Furthermore, Education officials told us that schools are increasingly using websites to share consumer information and that the department already requires that credit transfer policies and other disclosures, such as net price calculators, be posted on school websites. Awareness of articulation agreements can benefit students because such agreements clarify how credits transfer between schools. 
Posting this information online would make it more easily accessible to prospective students and their families than restricting it to publications located on campus, particularly since almost all school websites already include information on credit transfer policies. In circumstances where the school does not have articulation agreements with other schools, Education’s guidance does not specify how or whether to document this. According to Education officials, federal law and related regulations do not require schools to disclose the fact that they do not have articulation agreements, and the officials stated that it is unclear whether the department has the authority to require such a disclosure. Online information on the presence of articulation agreements would make it easier to determine whether schools are disclosing a list of partner schools as required. Clarification on what a school should depict on its website when it does not have articulation agreements could also help provide more information to students and enhance their understanding of potential transfer options. Moreover, one of Education’s goals is to increase college completion and affordability, and adequate communication with transferring students supports this goal. More complete information on school websites about articulation agreements on transfers could help students fully weigh alternatives when making a transfer decision. Aside from the required school transfer policies noted earlier, school websites varied in the extent of any additional information they provided on transfers. An estimated 60 percent of school websites had some general information about how students could initiate the process of having their course credits evaluated for transfer (see fig. 8). In addition, about half of school websites included resources to help students understand the transfer process. These resources were more common on websites for public schools than for other school types. 
Some websites, for example, had course equivalency databases where students could input their prior coursework to see how it would translate into earned credits at the destination school. In addition, some school websites provided information on transfer-related resources, such as transfer fairs and other in-person activities at which students could meet with school representatives to learn more about the transfer process. An estimated 47 percent of school websites published transfer deadlines and 43 percent published transfer-specific financial aid information. Knowing deadlines in advance can help students ensure that they do not miss key steps of the transfer process, such as submitting admissions or credit evaluation applications. Financial aid information, including whether a school awards transfer scholarships, can help students identify transfer options that are financially feasible. Fewer schools provided listings of transfer-related frequently asked questions or transfer-specific contacts. Such information can help students more easily navigate to applicable transfer information if they have questions and identify relevant school staff for assistance. The format of transfer information also varied on school websites. Three-quarters of school websites used multiple formats to convey transfer information, including various combinations of webpages or websites (including external sites) and downloadable documents, such as copies of course catalogs. In comparison, an estimated 25 percent provided credit transfer information in a single format, either through a single webpage or document. Nevertheless, it may still be difficult to access transfer information, even if it is provided in a single format, if the transfer material is not easy to locate on the school’s website. For example, we found one school’s credit transfer policies in a course catalog that we downloaded from the website. 
However, the school website’s search function did not show the location of this material and it was not obvious that a student would need to download the course catalog to access transfer information. In other instances, schools listed credit transfer policies on their consumer information disclosure webpages, but if a student is unaware that consumer disclosures include credit transfer policies, they may not know to look on that particular page for transfer information. In addition to school websites, Education’s websites also provide college students and their families with information on transfers, but it is limited. We found that the transfer information was neither focused nor targeted toward transfer students. In particular, Education’s StudentAid.gov website highlights descriptions of school types and things to consider when choosing a school, and while these pages briefly refer to transfer information, it is presented in the context of other topics rather than having a substantive focus on transfer. For example, as part of a broad description of community colleges, it is noted that many community colleges have articulation agreements and students from this type of school often transfer. Education officials said that they do not see a need to develop consumer information on transfers because students typically would not seek transfer information from Education, and they see little demand based on the volume of transfer-related searches of the department’s website. However, even for students who seek transfer information, it would be difficult for them to access relevant information given Education’s limited offerings on the topic. Providing additional transfer-focused information could encourage more students to access the department’s website for this purpose. Education also includes some transfer information for students affected by school closures in a frequently-asked-questions page on Education’s website. 
While this information is helpful, it does not address the broader population of transfer students, which accounted for over a third of first-time students, according to our analysis of transfer data. Finally, according to student complaint data, Education officials sometimes provided general information on the transferability of credits in response to complaints about transfer issues. Although such information could broadly apply to all transfer students, it is provided infrequently and is limited to students who happen to submit a complaint. Other reasons Education officials cited for not developing consumer information on transfers are that resources for transfer students are mostly provided by schools, transfer is school specific, and the federal government does not oversee schools’ curriculums. However, we found that close to half of school websites are not providing transfer resources that can help students understand the process beyond the minimal required information. This could compound the challenges we noted earlier of students potentially not obtaining adequate advising and information. In addition, while transferring is ultimately based on a student’s unique circumstances coupled with a school’s transfer policy, there are nevertheless general considerations that apply across schools. Education already provides consumer guidance, such as on college applications, which applies across schools. Similarly, Education could provide guidance on college transfers, such as common credit evaluation criteria, tips for locating transfer resources, and the potential effect of transferring on financial aid eligibility. Furthermore, while Education does not oversee schools’ curriculums, it oversees federal financial aid programs that provide over $125 billion to students. The department also has a goal of promoting college affordability. 
Given the substantial number of students who transfer and the effect of credit loss on potential costs to the student and the federal government, general consumer information on factors to consider when transferring could be valuable. According to federal internal control standards, agencies should externally communicate the necessary quality information to help achieve their goals. Transfers can affect the time and cost of completing a degree. Knowledge of key considerations could help students and their families make better-informed transfer decisions. About one-third of students transferred schools on their path toward a college degree based on our analysis of Education’s data. While for some, transfer can provide an avenue for saving on tuition costs, many of the credits that students earn may not ultimately help them earn a degree after they transfer. As tuition rises and college becomes less affordable for many, the financial implications of losing credits during transfer are particularly salient for students, their families, and the federal government. Articulation agreements between schools can facilitate credit transfers because they detail how and which credits will transfer from one school to another. When schools make information about these agreements accessible on their websites, students can more easily understand their transfer options. In addition, since many school websites do not provide helpful transfer resources for students, consumer guidance from Education that includes key factors to consider when transferring could help students more easily navigate the process. With such guidance, students can more accurately weigh their options and make an informed decision about transferring schools that takes into account how much time and money they must invest in pursuit of a college degree. 
To help improve students’ access to information so that they can make well-informed transfer decisions, we recommend the Secretary of Education take the following two actions: Require schools to (1) disclose the list of schools with which they have articulation agreements online if the school has a website, and (2) clearly inform students, on the school’s website if it has one, when no articulation agreements on credit transfer are in place. If the department determines that it does not have the authority to require this, it should nonetheless encourage schools to take these actions (through guidance or other means). Provide students and their families with general transfer information, for example by developing a consumer guide and posting it on Education’s website or augmenting transfer information already provided on the website, to help increase awareness of key considerations when transferring schools. We provided a draft of this report to Education for its review and comment. Education’s comments are reproduced in appendix IV. In its written comments, Education disagreed with our recommendation to require schools to disclose on their websites the list of partner schools with which they have articulation agreements and inform students when there are no agreements in place. Education reiterated that it already requires schools to disclose a list of other schools with which they have established articulation agreements. Given that the purpose of required consumer disclosures on articulation agreements is to inform students, we believe that posting this information online would make it more accessible to prospective students compared to publications located physically on a school’s campus. The increased accessibility would be especially beneficial for prospective students who live far away from the school. Education also said that students should contact specific schools to obtain accurate and updated information. 
While it is important for students to contact schools, we found that not all schools listed transfer-specific contacts on their websites. Therefore, it is particularly important that the required consumer information on articulation agreements be easily accessible to students. Moreover, according to Education, online disclosure is already required with respect to a school’s credit transfer policies if the school has a website, and schools are increasingly using their websites to provide other consumer information to students. Education also cautioned that placing special emphasis on articulation agreements could mislead students because the agreements—or lack thereof—do not fully reflect the transferability of credits. Specifically, Education said that if the few schools with articulation agreements are listed on the school’s website or if a school notes that it has no articulation agreements, students may erroneously believe that their credits will transfer only to those few schools or that none of their credits will transfer. However, regardless of the number of articulation agreements a school may have, schools are already legally required to disclose the list of partner schools. We found that a majority of schools already disclose a list of partner schools on their websites, and it is unclear why posting this required information online would be more confusing than disclosing this information through publications or other means. Further, according to Education, schools are also legally required to disclose their credit transfer policies online, in effect, outlining the circumstances under which students can generally transfer credits. Therefore, using a school’s website to disclose the list of other schools with which there are articulation agreements, or the fact that there are no agreements, would enhance students’ understanding of their transfer options and help reduce confusion rather than mislead students. 
Education agreed that it would assist students to have more general transfer information when students are considering transferring to other schools, and said that it plans to include this information on its studentaid.gov website. Education also provided technical comments, which we incorporated in our report as appropriate. In its technical comments, Education proposed that we address the relevance of academic performance to the transferability of credits. We agree that this is an important factor that can affect credit loss and provided additional clarification and references to Education’s research on this topic. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Secretary of the Department of Education, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (617) 788-0534 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V. This report examines: (1) How many college students transfer and what challenges, if any, do they face in transferring credits? (2) What are possible financial implications associated with transferring credits? and (3) To what extent are students provided with information about transfer policies to help them plan their college path?
To address these questions, we interviewed stakeholders from 25 higher education organizations and schools, analyzed transfer and tuition data from Education’s Beginning Postsecondary Students Longitudinal Study (BPS) and Integrated Postsecondary Education Data System (IPEDS), and reviewed websites for a nationally-representative stratified random sample of schools. We assessed the reliability of BPS and IPEDS data by reviewing existing information about the data and the system that produced them and interviewing officials knowledgeable about the data. We determined that the data were sufficiently reliable for the purposes of describing transfer students and credit loss rates. We also reviewed Education’s guidance and regulations on credit transfer disclosure requirements and consumer information for college students, and we compared Education’s practices to federal internal control standards. Finally, we reviewed relevant literature and federal laws, and interviewed Education officials. This appendix provides a detailed description of the methodology used to (1) gather testimonial evidence through interviews, (2) analyze Education’s transfer and tuition data, and (3) conduct website reviews. We conducted this performance audit from March 2016 to August 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. To understand what challenges, if any, students face in transferring credits, we interviewed stakeholders from a non-generalizable sample of 25 higher education organizations and schools.
More specifically, we interviewed representatives from 17 higher education organizations and officials from eight schools that have expertise in transfers and represent a range of viewpoints. We selected higher education organizations that met one or more of the following criteria: (1) published relevant research or other work, (2) developed guidelines for credit transfer processes, (3) developed tools or state policies to help facilitate credit transfers, or (4) represented relevant groups involved in the transfer process such as associations for students, admissions or advising staff, school systems, accrediting agencies, and/or state or regional higher education bodies. Further, we interviewed stakeholders from eight selected schools to obtain additional perspectives from those directly involved in the credit transfer process. At each school, we interviewed stakeholders involved in the credit evaluation process, such as admissions/advising staff, registrar officials, and relevant transfer offices, as appropriate. To ensure we obtained perspectives from different school types, we selected a mix of public, private nonprofit, and private for-profit schools, and 2- and 4-year schools. While we selected a diverse range of schools, we did not select equal numbers of schools from each school type, since some school types have a low prevalence or are not typically part of a transfer path. For example, we did not interview stakeholders from a 2-year private nonprofit school because this school type represents a small segment of schools and, correspondingly, a small number of students who transfer to or from that type of school. In our selection, we included at least one pair of schools with articulation agreements. We chose schools in different geographic locations and that represent different transfer environments. For example, we identified schools in states with a variety of statewide transfer policies and student transfer rates.
We used state transfer rate data from the National Student Clearinghouse Research Center to inform our selection of schools to conduct interviews. We interviewed National Student Clearinghouse Research Center officials and reviewed the methodology to determine if there are any limitations associated with using the total state transfer rate, and determined these data to be reliable for our purposes. For these interviews with stakeholders from 25 higher education organizations and schools (stakeholders), we used semi-structured interview protocols. To summarize results, we identified commonly mentioned themes regarding challenges in the transfer process. We used the following terms to summarize themes mentioned by stakeholders: “some” or “a few” higher education organizations and schools represent 3 to 5; “several” represents 6 to 10; and “many” represents 11 to 15. For themes mentioned by stakeholders from more than 15 higher education organizations and schools, we generally specified the number of groups in the text. We corroborated testimonial evidence on transfer challenges with findings from our analysis of Education’s transfer data and documentary evidence from transfer literature or publications. In addition to the interviews with 25 higher education organizations and schools, we collected first-hand accounts from several individual transfer students identified by stakeholders from higher education organizations. The students provided non-generalizable illustrative examples of experiences with the transfer process. BPS: To estimate the extent to which students’ credits transfer the first time they change schools and to identify other characteristics of transfer students, we analyzed transfer data from Education’s Beginning Postsecondary Students Longitudinal Study (BPS). To estimate the extent of credit loss among students, we analyzed transcript data from the 2004 cohort, the most recently completed. 
Each cycle of BPS follows a cohort of students enrolling in postsecondary education for the first time. BPS tracks these students over a 6-year period and collects both survey and transcript data. The most recently completed BPS cohort first enrolled in postsecondary education in the 2003-04 academic year. The final follow-up with this cohort occurred in the 2008-09 academic year. For the purposes of our data analysis, we define transfer students as those who moved from one school to another for a period longer than 4 months, and the analyses reflect a student’s first transfer only. We define credit loss as credits earned at the origin school that were not accepted by the destination school. The sector variable was used to categorize schools according to whether they were public, private nonprofit, or private for-profit schools at the 2-year or 4-year level. The sector variable reflects the level of the highest degree offered at the school. The highest degree offered may be different from the predominant degree obtained at the school. To incorporate more recent data into the analysis and to further understand the potential implications for federal financial aid, we also used BPS mid-point data from the cohort of students who began school in academic year 2011-12—the 2012 cohort—to describe the number of transfers and the financial aid characteristics of more recent transfer students. These data represent only the midpoint of the study period, and transfer characteristics will change over the remainder of the study period as students continue to progress through their undergraduate studies and the 6-year time period. While we used transcript data to estimate transfer data for the full 6-year study period of the 2004 cohort, transcript data are not yet available for the 2012 cohort.
As a result, to calculate mid-point transfer and financial aid receipt rates in the first 3 years of the study, we used BPS data based on transfers reported in student interviews. Estimates based on transcript data may differ from estimates based on student interview data because interviews represent one point in time, whereas transcript data cover an entire study period. To better understand any potential implications for federal financial aid, we determined the extent to which transfer students received federal financial aid, including Pell Grants and Federal Direct Loans. Federal financial aid is awarded based on a student’s total costs, which can include non-tuition expenses. Available data do not identify whether the cost of credits are covered by Pell Grant, Federal Direct Loans, or other financial aid funds specifically. Because the BPS data are based on probability samples, estimates are calculated using the appropriate sample weights provided and reflecting the sample design. Each of these samples follows a probability procedure based on random selection, and they represent only one of a large number of samples that could have been drawn. Since each sample could have provided different estimates, we express our confidence in the precision of our particular sample’s results as a 95 percent confidence interval. This is the interval that would contain the actual population value for 95 percent of the samples we could have drawn. Unless otherwise noted, all percentage estimates from the BPS data analysis have 95 percent confidence intervals within +/-10 percentage points of the percent estimate, and other numerical estimates have confidence intervals within +/-10 percent of the estimate itself. We compared 95 percent confidence intervals to identify statistically significant differences between specific estimates and the comparison groups. 
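The interval logic described above can be sketched in a few lines. This is a minimal, illustrative Python sketch under our own assumptions; the estimates, standard errors, and function names below are hypothetical, not values or code from the BPS analysis:

```python
# Illustrative sketch only: a 95 percent confidence interval under an
# approximate normal distribution, plus a simple overlap check.
# The numeric inputs are hypothetical, not BPS estimates.
def ci_95(estimate, standard_error):
    """Return a 95 percent confidence interval (z = 1.96)."""
    margin = 1.96 * standard_error
    return (estimate - margin, estimate + margin)

def intervals_overlap(a, b):
    """True if two (low, high) intervals overlap; non-overlap suggests a
    statistically significant difference between the two estimates."""
    return a[0] <= b[1] and b[0] <= a[1]

# e.g., a 43 percent estimate with a 2.5-point standard error
print(ci_95(43.0, 2.5))  # roughly (38.1, 47.9)
print(intervals_overlap(ci_95(43.0, 2.5), ci_95(37.0, 2.0)))  # True
```

Comparing confidence intervals this way is conservative: non-overlapping intervals indicate a significant difference, but overlapping intervals do not by themselves rule one out.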
While the BPS data illustrate the extent to which credits transfer, the data do not track the reason why credits did not transfer or the academic quality of those credits. The data also do not distinguish whether credits accepted counted toward degree requirements for a student’s major. This means that credit acceptance cannot be equated with progress toward a degree, which would also have financial implications for students. Additionally, student decisions may also affect credit transfer and the time it takes to complete a degree. For example, students may not ask to have their credits evaluated or they may decide to change majors, which would make it difficult to attribute the costs of lost credits to the transfer process. Given these limitations, the potential financial effects associated with lost credits are not solely attributable to schools’ credit transfer policies. Similarly, differences in financial outcomes for transfer versus non-transfer students may be due to multiple reasons and not just the credit transfer process. IPEDS: To provide information on tuition by school type (i.e., public, private for-profit, and private nonprofit) and level of degree program (i.e., 2-year, 4-year), we calculated average net tuition by analyzing the tuition and non-tuition portions of the 2013-2014 academic year cost of attendance obtained from the institutional characteristics component of the Integrated Postsecondary Education Data System (IPEDS). We chose the 2013-2014 period because it contained the most recent net price data available at the time of our analysis. IPEDS gathers data from every college, university, and technical and vocational institution that participates in federal student aid programs, so the data cover the population of interest for this study. Net tuition is the cost of attendance minus financial aid and non-tuition portions of attendance.
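The net tuition definition above reduces to simple arithmetic. As a hedged sketch (the dollar amounts and the function name are our own illustrations, not IPEDS values or variables, which are more granular):

```python
# Sketch of the net tuition arithmetic described above. All figures are
# hypothetical placeholders for illustration.
def net_tuition(cost_of_attendance, financial_aid, non_tuition_cost):
    """Net price is cost of attendance minus financial aid; net tuition
    further subtracts the non-tuition portion (room, board, books, other)."""
    net_price = cost_of_attendance - financial_aid
    return net_price - non_tuition_cost

# e.g., $24,000 cost of attendance, $8,000 aid, $10,000 non-tuition expenses
print(net_tuition(24_000, 8_000, 10_000))  # 6000
```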
To estimate net tuition, we started with the net price variable, which is equal to the cost of attendance (i.e., tuition, fees, room and board, books and other expenses) minus financial aid. We then used other IPEDS variables to calculate the non-tuition portion of the cost of attendance and subtracted that value from the net price to estimate net tuition. We used net tuition data instead of published tuition because it is a more realistic portrayal of what students might actually pay. IPEDS net price data are collected at the school-level based on average charges for first-time, full-time undergraduate students, and do not account for variation in the tuition charged to individual students at the same school. In addition, many schools charge tuition by program or have different fees based on the specific major, which is not accounted for in the per-credit net tuition average estimates. Further, net tuition may underestimate costs for transfer students because, according to Education, schools often do not offer the same amount of institutional aid to transfer students compared to first-time, non-transfer students. Transfer Scenarios: To describe possible financial implications for students and the federal government associated with transferring credits, we present a variety of example transfer path scenarios. To create scenarios illustrating financial implications for students, we selected the transfer paths that were among the most common paths based on our transfer frequency analysis of each school control type (public, private nonprofit, and private for-profit). These transfer scenarios vary in the number of credits lost during transfer, type of transfer, type of school, and average net tuition. To create a scenario to illustrate the financial implications of transfer for the government, we assume the student received the average per-credit Pell Grant amount based on BPS data.
For both types of transfer scenarios (showing possible financial implications for students and the government), we base credit loss assumptions on 2004-2009 BPS transcript data. The assumption for the number of credits lost is informed by the average percentage of credits lost for the relevant transfer path. We base tuition assumptions on 2013-2014 IPEDS data on tuition by school type, and tuition values are from the 2013-2014 academic year. Because these scenarios are used for illustrative purposes and we are not estimating tuition costs of a particular cohort of students, we chose assumptions that were reflective of the higher education environment using the best available data. Accordingly, we used the most recently available data from BPS and IPEDS, though the time periods do not match because the datasets have different collection periods. We calculated the financial implications for students in the transfer scenarios by comparing the potential tuition for completing a degree at the origin and destination schools for a transfer student to the potential tuition for completing the same (120) credits for a degree at the destination school. We made various assumptions for these calculations. The scenarios assume that all students would pay tuition on a per-credit basis to retake credits lost during transfer, and that students attending public schools would pay in-state tuition. The financial implications of transfer would be impacted by whether a student attending a public school pays in-state or out-of-state tuition. We also assume that credits that are transferred will count toward graduation requirements. However, it is important to note that schools can accept transfer credits as elective credit but not allow the credit to be used toward a specific degree program.
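The scenario comparison described above can be sketched as follows. This is an illustrative Python sketch under our own assumptions; the per-credit tuition figures and the credit-loss share below are placeholders, not the actual BPS or IPEDS values used in the report:

```python
# Hedged sketch of the transfer-scenario comparison: total tuition on the
# transfer path versus completing all 120 degree credits at the destination
# school. All rates below are illustrative placeholders.
def extra_cost_of_transfer(credits_earned, share_lost,
                           origin_per_credit, dest_per_credit,
                           degree_credits=120):
    """Positive result: the transfer path costs more than attending the
    destination school alone; negative result: the transfer path saves money."""
    credits_accepted = credits_earned * (1 - share_lost)
    transfer_path = (credits_earned * origin_per_credit
                     + (degree_credits - credits_accepted) * dest_per_credit)
    destination_only = degree_credits * dest_per_credit
    return transfer_path - destination_only

# e.g., 60 credits earned at $150/credit, 37 percent lost, $350/credit destination
print(round(extra_cost_of_transfer(60, 0.37, 150, 350)))  # -4230, a net savings
```

The sign of the result captures the two outcomes discussed in the report: transferring cheap credits that mostly carry over produces savings, while repeating expensive lost credits produces additional cost.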
The calculations for the financial implications for students do not account for how students may use financial aid to offset out-of-pocket tuition costs, so the identified costs may not all be borne by the students. The calculations also do not account for the opportunity cost of staying in school longer, in the form of lost wages, or other factors that may affect a student’s decision to transfer, such as differences in room and board, expected lifetime earnings, quality of life, etc. Lastly, we apply average credit loss rates for the purposes of these scenarios. It is important to note that schools may have legitimate reasons to not accept some transfer credits, such as insufficient quality of prior instruction or lack of applicability to the chosen program of study, among other reasons. To determine the extent to which schools provide students with information on transfers to help them plan their college path, we reviewed websites from a nationally-representative sample of 214 schools participating in federal student aid programs. The sample was stratified and randomly drawn from Education’s 2014-2015 IPEDS, which contains data for all schools that participate in federal student aid programs authorized under Title IV of the Higher Education Act of 1965, as amended. Our sampling frame consisted of all public, private nonprofit, and private for-profit 2-year and 4-year degree granting schools that participated in Title IV federal student aid programs, had undergraduate programs, were not U.S. military academies, and had at least 100 students, yielding a universe of 4,309 schools. We dropped schools from our sample that were no longer operational at the time of our review. During our review period, several schools selected for our sample closed. To ensure that we had an adequate sample size for each school type, we drew an additional sample of schools from IPEDS that excluded the schools from our original sample and the closed schools. 
We created six strata to stratify the sampling frame by school type (public, private nonprofit, and private for-profit) and level of degree program (2-year and 4-year). This sample of schools allowed us to make national estimates about the availability of transfer information, as well as estimates by school type. The percentage estimates for website review results for the overall population reported from this review have 95 percent confidence intervals of +/- 7 percentage points unless otherwise noted. In order to review comparable information across the sampled schools, we developed a standardized web-based data collection instrument that we used to examine on each website the availability of credit transfer policies, articulation agreement lists, and other transfer information, such as contacts and tools to help transfer students, frequently asked questions, and deadlines for submitting information. We used a combination of information from our interviews, transfer literature, and relevant federal laws, regulations, and website usability guidelines to develop the questions included in the data collection instrument. We reviewed websites from September 2016 through February 2017. One analyst recorded information in the data collection instrument. The information was then checked and verified by another analyst. We collected complete information for all 214 schools in our sample. We then analyzed the information across schools. We did not, as a part of our review of school websites, assess the schools for compliance with legal disclosure requirements. Instead, this review was intended to understand what information is made accessible to students. Based on the results of our website review, we conducted targeted follow-up with school officials to obtain additional information related to course credit transfer disclosures. We followed up with two groups of schools.
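The stratified draw described above can be sketched in a few lines. This is illustrative only; the frame contents, stratum labels, and per-stratum sample sizes below are invented, not the actual IPEDS frame or our sample allocation:

```python
import random

# Illustrative sketch of stratified random sampling: a simple random sample
# is drawn independently within each stratum. Frame and sizes are hypothetical.
def stratified_sample(frame, stratum_of, sizes, seed=0):
    rng = random.Random(seed)
    sample = []
    for stratum, n in sizes.items():
        members = [unit for unit in frame if stratum_of(unit) == stratum]
        sample.extend(rng.sample(members, n))
    return sample

# A made-up frame of 100 schools split across two of the six strata
frame = [{"id": i, "stratum": "public-4yr" if i % 2 else "public-2yr"}
         for i in range(100)]
picked = stratified_sample(frame, lambda s: s["stratum"],
                           {"public-2yr": 5, "public-4yr": 5})
print(len(picked))  # 10
```

Sampling within each stratum, rather than from the pooled frame, is what guarantees an adequate sample size for every school type.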
For the first group, we prioritized schools whose websites we initially determined did not include credit transfer policies or criteria for evaluating transfer credits. We contacted all schools in this group and asked school officials about how they provided students with information on credit transfer policies. For the second group, we contacted a select group of schools that did not list partner schools online. We selected schools in the second group based on whether they were already selected for follow-up in our first group as well as on their representation of different school types. The following figure depicts information that a school provided on its website and illustrates how the school implemented its articulation agreement. In this example, a destination school created transfer guides specific to origin schools with which it established an articulation agreement. These guides show how courses from the origin school would transfer into one of the destination school’s degree programs. The format of information provided about articulation agreements and the specifics of their provisions vary from school to school. This appendix provides estimates of the percentage of credits lost by school type, transfer direction, and transfer path; average net tuition by school type; the percentage of transfer students who did not obtain a degree; and two additional transfer scenario examples that illustrate transfer paths that include private for-profit schools. The transfer scenarios in the following figure illustrate possible financial implications of additional transfer paths for students. More specifically, these transfer scenarios illustrate potential financial implications for two transfer paths that are among the most common and involve private for-profit schools, based on our BPS analysis. Nonetheless, each of these transfer paths accounted for 2 percent or less of transfer students.
In the transfer from 2-year public to 4-year private for-profit school example, the student accrues savings compared to attending the destination 4-year private for-profit school for the entire degree program because the student is able to transfer almost half the 60 credits earned at the less expensive 2-year public school. Net tuition at the 4-year private for-profit school is approximately seven times more expensive than at the 2-year public school. In the transfer from 2-year private for-profit to 2-year public school example, the student incurs additional costs compared to attending the destination 2-year public school for the entire degree program because the student has to repeat almost all of the 30 credits earned at the 2-year private for-profit school. This transfer path has a high credit loss rate. In addition, the credits the student lost from the 2-year private for-profit school were relatively expensive compared to the cost of credits at the destination 2-year public school. In addition to the contact named above, Meeta Engle (Assistant Director); Amrita Sen (Analyst-in-Charge); Elizabeth Hartjes, Connor Kincaid, Jean McSween, John Mingus, and Dae Park made key contributions to this report. Also contributing to this report were Susan Aschoff, James Bennett, Deborah Bland, Alicia Cackley, Evelyn Calderon, Yue Pui Chin, Michelle Duren, Gustavo Fernandez, Alexander Galuten, Timothy Guinane, Sharon Hermes, Laura Hoffrey, Bill Keller, Sheila McCoy, Jennifer McDonald, Jeffrey Miller, Amy Moran Lowe, Alexandra Rouse, Katherine Siggerud, Alexandra Squitieri, and Christopher Zbrozek.
College students sometimes opt to transfer schools in response to changing interests or for financial reasons. The extent to which students can transfer previously earned course credits can affect the time and cost for completing a degree. Given the federal government's sizeable investment in student aid—$125 billion in fiscal year 2016—and potential difficulties students may face in transferring credits, GAO was asked to examine the college transfer process. GAO examined (1) transfer rates and challenges students face in transferring credits, (2) the possible financial implications of transfer, and (3) the extent to which students are provided with transfer information to help them plan their college path. GAO analyzed Education's data, including its most recent available transfer data from the 2004-2009 student cohort, interviewed a non-generalizable sample of stakeholders from 25 schools and higher education organizations, and reviewed a nationally-representative sample of 214 school websites. Based on GAO's analysis of the Department of Education's (Education) most recently available data, an estimated 35 percent of college students transferred to a new school at least once from 2004 to 2009, and GAO found that students may face challenges getting information or advice about transferring course credits. An estimated 62 percent of these transfers were between public schools. According to stakeholders GAO spoke with, students can face challenges transferring credits between schools that do not have statewide policies or articulation agreements, which are transfer agreements or partnerships between schools designating how credits earned at one school will transfer to another. Stakeholders also said that advising and information may not be adequate to help students navigate the transfer process. The possible financial implications of transferring depend in part on the extent of credits lost in the transfer.
Using Education's transfer data, GAO estimated that students who transferred from 2004 to 2009 lost, on average, an estimated 43 percent of their credits, and credit loss varied depending on the transfer path. For example, students who transferred between public schools—the majority of transfer students—lost an estimated 37 percent of their credits. In comparison, students who took some of the less frequent transfer paths lost a relatively higher percentage of their credits. For example, students who transferred from private for-profit schools to public schools accounted for 4 percent of all transfer students but lost an estimated 94 percent of their credits. Transferring can have different effects on college affordability. Students seeking to obtain a bachelor's degree at a more expensive school may save on tuition costs by transferring from a less expensive school. On the other hand, transfer students may incur additional costs to repeat credits that do not transfer or count toward their degree. Transfer students can receive federal financial aid. GAO's analysis showed that almost half of the students who transferred from 2004 to 2009 received Pell Grants and close to two-thirds received Federal Direct Loans. Students who lose credits may use more financial aid to pay for repeated courses at additional cost to the federal government, or they may exhaust their financial aid eligibility, which can result in additional out-of-pocket costs. While GAO estimated that the websites for almost all schools nationwide provided credit transfer policies, as required by Education, about 29 percent did not include a list of other schools with which the school had articulation agreements. Among those schools, GAO found that some did not have any articulation agreements, while others did but did not list partner schools on their websites. Schools must provide such listings, but they are not required to do so specifically on their website. 
As a result, students may not have ready access to this information to fully understand their transfer options. Moreover, Education provides limited transfer information to students and their families, contrary to federal internal control standards that call for agencies to provide adequate information to external parties. General information on key transfer considerations that are applicable across schools and more complete information on schools' articulation agreements can help students avoid making uninformed transfer decisions that could add to the time and expense of earning a degree. GAO recommends that Education (1) require schools to disclose on their websites (a) the list of other schools with which they have articulation agreements and (b) when no such agreements are in place; and (2) provide general transfer information to students and families. Education disagreed with the first and agreed with the second recommendation. GAO maintains that students can more easily understand transfer options if information is accessible on a school's website, as discussed in the report.
Tactical air forces are critical to achieving and maintaining air dominance during combat operations. These forces include Air Force, Navy, and Marine Corps fixed-wing fighters and attack aircraft with air-to-air combat, air-to-ground attack, and defense suppression missions, and related equipment and support activities. These forces operate in the first days of a conflict to penetrate enemy air space, defeat air defenses, and achieve air dominance. This allows follow-on ground, air, and naval forces freedom to maneuver and attack in the battle space. Once air dominance is established, tactical aircraft continue to vigorously and persistently strike ground targets for the remainder of the conflict. Some tactical aircraft are also essential to protect the homeland by defending against incoming missiles or enemy aircraft. Current operational tactical aircraft (referred to as legacy systems) are the Air Force’s F-15, F-16, F-117A, and A-10 systems and the Navy and Marine Corps F/A-18, EA-6B, and AV-8B. Most of these aircraft were purchased in the 1970s and 1980s and are considerably aged as measured by the number of flying hours accumulated by an aircraft compared to its estimated life expectancy. Weapon systems also tend to cost more to operate and maintain as they age. To meet national defense security requirements, DOD sustains its legacy fleets and also modernizes some with new capabilities and enhanced structures to keep aircraft operationally viable until new systems can be delivered in sufficient quantities and the legacies can be retired. DOD is continuing efforts to recapitalize its tactical air forces (replace legacy with new) by acquiring and fielding the Air Force’s F-22A, the Navy’s F/A-18E/F and EA-18G, and the joint service F-35 Joint Strike Fighter (JSF) weapon systems. 
Recapitalization plans began 20 years ago with the start-up of the F-22A program and are now expected to take another 20 years or more to fulfill, culminating in the final JSF procurements. The JSF is being developed in three variants for the U.S. and allied forces. The Air Force’s version, a conventional take-off and landing aircraft, is intended to replace the F-16 and A-10 and complement the F-22A. The Navy’s carrier-capable version is intended to replace F/A-18C/D aircraft and complement the F/A-18E/F. The Marine Corps is acquiring a short take-off and vertical landing (STOVL) variant to replace its AV-8B and F/A-18D fleets. Table 1 shows the new aircraft with the legacy systems they are expected to replace. Tactical air forces account for a significant share of the defense budget. DOD spends billions of dollars every year to develop, procure, and modernize its tactical air forces. Figure 1 shows the trend in actual and projected investment over the 36-year period from fiscal year 1976 to 2011. To reflect the trend in relative buying power, we normalized the data to express costs in fiscal year 2007 dollars. The total investment for research, development, test and evaluation (RDT&E) and procurement during this time period approaches $1 trillion in constant dollars. The figure illustrates the large investments throughout the 1980s when most of the legacy fleets were acquired and the subsequent decrease in investment during the 1990s as DOD focused on other procurement priorities. The rise in investment starting in the mid-1990s reflects the buildup and acquisition of the new systems. These data do not include another $3.3 billion requested by DOD for tactical aircraft in the fiscal year 2007 supplemental and fiscal year 2008 budget request for the Global War on Terror. In addition to the large expenditures for development and procurement, the services spend billions more annually to operate, support, maintain, and man the tactical air forces.
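The constant-dollar normalization mentioned above is a simple deflator adjustment. As a hedged sketch (the deflator values below are invented for illustration; they are not the actual price indexes used for the analysis):

```python
# Illustrative sketch: convert then-year dollars to fiscal year 2007 dollars
# by scaling with a price deflator. Deflator values below are made up.
def to_fy2007_dollars(amount, year_deflator, fy2007_deflator=1.00):
    return amount * (fy2007_deflator / year_deflator)

# e.g., $100 million spent in a year whose price level was 80 percent of FY2007's
print(to_fy2007_dollars(100.0, 0.80))  # 125.0
```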
Over the past decade, the tactical air forces’ share of the total defense budget has stayed remarkably consistent, annually receiving about 11 to 12 percent of the total DOD budget and about 15 to 16 percent of the investment appropriations. DOD programmed a total of $331.6 billion for personnel, operations and maintenance, military construction, and acquisition costs for the tactical air forces for fiscal years 2006 to 2011, an annual average of $55.3 billion. Appendix III shows the breakdown by military service and by appropriation. Midway through a 40-year effort to recapitalize and modernize its tactical air forces, DOD has seen its efforts blunted by relatively poor outcomes in its cornerstone new acquisition programs. Increased costs, extended development times, requirement changes, and budget pressures have reduced DOD’s buying power, and DOD now expects to replace legacy aircraft with about 1,500 fewer new tactical aircraft than it had originally planned—a reduction of one-third. Additionally, delivery of these new systems has lagged far behind original plans, not only delaying the fielding of capabilities to the warfighter, but also increasing operating and modernization costs to keep legacy aircraft relevant and in the inventory longer than expected. DOD’s recapitalization plans center on the acquisitions of the JSF, F-22A, F/A-18E/F, and its electronic attack variant, the EA-18G. Collectively, these programs are expected to cost about $400 billion—with almost three-fourths still to be invested—to acquire about 3,200 aircraft (see table 2). Through the end of fiscal year 2006, Congress has appropriated about $111 billion, and the services have taken delivery of 480 new aircraft. Table 2 also shows that about 72 percent of the expected investment and 85 percent of the planned procurement quantity is in the future.
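The “investment to go” shares cited from table 2 follow directly from the rounded totals given in the text ($400 billion expected with $111 billion appropriated; about 3,200 aircraft planned with 480 delivered). A quick arithmetic check:

```python
# Check of the remaining-investment and remaining-quantity shares cited
# from table 2, using the rounded totals given in the text.
total_cost_billions = 400.0      # expected total acquisition cost
appropriated_billions = 111.0    # appropriated through FY2006
total_aircraft = 3200            # planned procurement quantity (approx.)
delivered_aircraft = 480         # delivered through FY2006

cost_to_go_share = (total_cost_billions - appropriated_billions) / total_cost_billions
quantity_to_go_share = (total_aircraft - delivered_aircraft) / total_aircraft

print(f"{cost_to_go_share:.0%} of expected investment remains")   # ~72%
print(f"{quantity_to_go_share:.0%} of planned quantity remains")  # 85%
```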
The F-22A and F/A-18 series acquisition programs are expected to be mostly completed over the next five years, but the JSF program is only halfway through development, with procurement starting in 2007 and continuing until 2034. With most of its program still ahead, its sheer size, and its tri-service impact, the JSF is, in many ways, the linchpin of DOD’s tactical aircraft future. Increased costs, schedule delays, and budget pressures have combined to decrease procurement quantities of new tactical aircraft. Total quantities have been reduced by one-third compared to original plans at each program’s inception (see table 3). The cumulative impacts of delayed deliveries and reduced quantities on the total force (see fig. 2) have slowed the recapitalization of the legacy force and made it more expensive to modernize, operate, and maintain. Collectively, this means that the warfighters will have fewer of the newest and most capable aircraft throughout the recapitalization period. With fewer buys of new systems, legacy aircraft will make up a larger proportion of the future force, and for a longer period of time, than originally envisioned. Although legacy aircraft are still very capable—and will be expected to remain so through upgrades and life extension efforts—they are becoming increasingly expensive to operate and maintain. Service officials are confident that the new systems will provide improved capabilities compared to the legacy systems they replace, but worry about whether the number of aircraft acquired is sufficient to meet national security requirements at an acceptable level of risk. They are also concerned with managing risks while using legacy systems in the near and mid term. Over the years, our extensive reviews of DOD’s major weapon system acquisitions have usually found positive outcomes when programs follow the evolutionary, knowledge-based strategy espoused by the best practices of leading commercial firms and now established in DOD policy.
This includes establishing a solid business case that accurately and realistically matches available resources (technologies, money, expertise, and time) to warfighter needs. The Defense Acquisition Performance Assessment report in January 2006 also found that a disciplined business approach was needed to improve DOD’s weapon system acquisition process. One particular and key practice recommended was time-certain development programs—delivery of the first unit to operational forces within about six years from the Milestone A decision point. We have usually found poorer outcomes—significant cost increases, reduced procurement quantities, and schedule delays—in programs not following these practices. For example, immature technologies, design problems, and changes in the threats and requirements underpinning the original business case contributed to major cost increases for the F-22A program, a doubling of its years spent in development, and a sharp reduction in quantities deemed affordable. We are concerned that the JSF is on a similarly risky path with highly concurrent plans to begin production while still early in development and with little testing completed. On the other hand, the F/A-18E/F program is employing a more evolutionary approach and is experiencing better cost and schedule outcomes. We have some concerns that its new electronic attack variant, the EA-18G, is pursuing an overly aggressive and more concurrent strategy, increasing its risks of poor program outcomes in the future. Table 4 summarizes outcomes to date on these four tactical aircraft programs. An overview of key observations on each new system follows. More details on each system’s mission, program status, major work activities, and funding are provided in appendix IV.
The F-22A “Raptor” needs a new business case that more accurately and realistically supports the changed conditions and the program of record, including justification for additional investments of $6.3 billion to incorporate more robust ground attack and intelligence-gathering capabilities. There is a 198-aircraft difference between the Air Force’s stated need for 381 aircraft and the 183 aircraft the Office of the Secretary of Defense (OSD) says is affordable. We have previously recommended that DOD develop a new business case for the F-22A program before further investments in new aircraft or modernization are made. DOD has not concurred with this recommendation, stating that an internal study of tactical aircraft has justified the current quantities planned for the F-22A. Because of the frequently changing OSD-approved requirements for the F-22A, repeated cost overruns, significant remaining investments, and delays in the program, we continue to believe that a new business case is required and that the assumptions used in the internal OSD study should be validated by an independent source. The JSF “Lightning II” acquisition strategy’s high degree of concurrent development and production weakens its business case and poses substantial risks of cost overruns, schedule slips, and late delivery of promised capabilities to the warfighter. The program has contracted to deliver full capabilities for the three different variants in a single-step, 12-year development program and plans to begin production in 2007 with immature technologies, incomplete designs, undemonstrated system integration, and little knowledge about performance and producibility. Costs have increased another $31.6 billion from the fiscal year 2004 rebaselined amount. Due to affordability pressures, DOD is beginning to reduce annual procurement quantities; recent plans indicate a 28 percent decrease in maximum annual buy quantities compared to last year’s program of record.
The F/A-18E/F “Super Hornet” program adopted a more evolutionary and less risky approach, having substantial commonality with its predecessor C/D models and leveraging previous technology. Planned upgrades incrementally add new capabilities, some of which are experiencing performance problems and delays, according to OSD testers. Over half of the planned fleet has been delivered, and some aircraft have been used in combat. The mature and stable production program is on its second multiyear contract and is delivering aircraft ahead of the contract schedule and within cost targets. The EA-18G “Growler” is the newest program and shares the same F/A-18F platform, but incorporates airborne electronic attack capabilities. Its acquisition schedule is very aggressive and concurrent. Only two of its five critical technologies are fully mature by best practice standards even though the program is well into development and plans to start producing electronic attack-capable aircraft this year. OSD’s Director of Operational Test and Evaluation also cites its aggressive schedule for achieving an initial operational capability and the special risks of integrating the electronic attack capabilities onto the F/A-18F platform. The problems and delays encountered by the new tactical aircraft acquisition programs have direct and significant impacts on legacy systems’ plans and costs. Funding needs and plans for new and legacy aircraft are by nature interdependent, and decisions to sustain, modernize, or retire legacy systems are largely reactive to the outcomes of new systems. The military services accord new systems higher funding priority, and the legacy systems tend to get whatever funding remains after the new systems’ budget needs are met. If new aircraft consume more of the investment dollars than planned, the buying power and budgets for legacy systems are further reduced to remain within DOD budget limits.
However, as quantities of new systems have been cut and deliveries to the warfighter delayed, more legacy aircraft are required to stay in the inventory, and for longer periods of time than planned, requiring more dollars to modernize and maintain aging aircraft. Table 5 summarizes budgeted investments (development and procurement funding) for new and legacy systems. Over the next 7 years, DOD plans to invest about $109.3 billion in tactical aircraft to acquire about 570 new systems and modernize hundreds of legacy systems. Uncertainty about new systems’ costs and deliveries makes it difficult to effectively plan and efficiently implement modernization efforts and legacy retirement schedules. With unpredictable quantities and delivery schedules for the new systems, program managers for legacy aircraft are challenged to balance reduced funds for modifications with requirements to keep legacy systems operational and relevant longer than they had planned. Stable retirement plans are critical to effective management and efficient resource use, but in this environment retirement plans keep changing. Program managers are hard-pressed to allocate funds or set sunset schedules for legacy fleets until the outcomes of new acquisitions are known. Furthermore, the longer the services retain legacy systems in their inventories, the more money they will need for operation and maintenance in order to keep legacy aircraft operational and relevant. DOD has become increasingly concerned that the high cost of keeping aging weapon systems relevant and able to meet required readiness levels is depleting modernization accounts and reducing the department’s flexibility to invest in new weapons, a challenge that grows in the face of forecast threat capabilities. Operating costs per flying hour for Air Force legacy systems are shown in figure 3. It illustrates that operation and maintenance costs typically increase as weapon systems age.
It also shows the relatively high operating costs for the F-117A, a factor in the decision to retire that fleet early. Some officials believe that operating costs for new systems will be lower than for the legacy systems they replace, but others challenge that notion, citing such factors as the higher technology content, stealth characteristics, and private sector support arrangements. Since legacy programs typically receive less funding than requested, program managers must prioritize and fund first those modifications that are absolutely necessary—ones that are related to safety of flight or that will cause the aircraft to be grounded. As a result, there is a large pent-up demand of unfunded requirements that the warfighters report as necessary to meet their mission requirements. Current estimates of unfunded modernization and sustainment requirements for legacy systems total several billion dollars. The services are considering substantial service life extension programs and additional modernization enhancements for several of the legacy fleets, but many of these costs are not reflected in current programmed budgets or have yet to be estimated. Some of these issues and concerns about legacy systems are not new, but perhaps have gained more immediacy because of their interdependency with the large-scale recapitalization efforts for new systems. GAO has previously reported on the condition, program strategies, and funding for key existing DOD weapon systems, including tactical aircraft. Our 2005 report found that the military services had incomplete long-term strategies and funding plans for some systems, in that future requirements were not identified, studies were not completed, funding for maintenance and upgrades was limited, or replacement systems were delayed or not yet identified. We recommended that DOD reassess and report annually on its near- and long-term programs for key systems until replacements are fielded.
DOD partially concurred with the recommendation to reassess programs, stating that it already does this in its planning, programming, budgeting, and execution process. It did not concur that additional annual reporting of this information to Congress was necessary, stating that the annual budget submission already includes a balanced overall program within available resources. The Air Force plans to invest more than $7.1 billion from fiscal year 2007 to 2013 to modernize legacy aircraft (table 6). These investments are heavily influenced by the ability of the Air Force to complete its recapitalization strategy for the F-22A and the JSF aircraft as currently planned. Further reductions in quantities and delays in delivering these new aircraft will affect the number of legacy aircraft retained and the amount of time they must remain in service. Future investments beyond those shown, including service life extension efforts costing billions of dollars, may be required to keep legacy fleets relevant and operational longer. Officials said the time is approaching when hard decisions on retiring or extending the life of legacy aircraft must be made. The following provides an overview of key observations on the Air Force legacy systems. Additional details on these systems are in appendix IV. The Air Force will retain the A-10 “Warthog” fleet in its inventory much longer than planned because of its relevant combat capabilities—demonstrated first during Desert Storm and now in the ongoing Global War on Terror. However, because of post-Cold War plans to retire the fleet in the early 1990s, the Air Force had spent little money on major upgrades and depot maintenance for at least 10 years. As a result, the Air Force faces a large backlog of structural repairs and modifications—much of it unfunded—and will likely identify more unplanned work as older aircraft are inspected and opened up for maintenance.
Major efforts to upgrade avionics, modernize cockpit controls, and replace wings are funded and underway. Program officials identified a current unfunded requirement of $2.7 billion, including $2.1 billion for engine upgrades that some Air Force officials say are not needed. A comprehensive service life extension program (if required) could cost billions more. F-15 “Eagles” will not be replaced by F-22As as fully or as quickly as planned. For years, the Air Force’s modification efforts and funds have been concentrated on about half the fleet—the number projected as required to complement the new F-22A aircraft. With the F-22A quantities now reduced, more F-15s need to be modernized and retained for longer periods of time. Officials identified near-term unfunded requirements of $2.3 billion, and much more if life extension efforts are needed. The newest F-15E aircraft, with enhanced strike capabilities, will be retained even longer. The Air Force deferred the start-up of a major radar upgrade effort costing $2.3 billion, and program officials identified another $1.7 billion in unfunded requirements to address avionics, structural, and engine concerns, among other efforts proposed for the F-15E. Newer F-16 “Falcon” aircraft may need to remain viable and operational longer due to JSF schedule delays and deferrals. The F-16 fleet consists of several different configurations that were acquired in a long and successful evolutionary program. The Air Force has invested billions over the years in capability upgrades, engine improvements, and the structural enhancements needed to achieve the aircraft’s original life expectancy of 8,000 hours. The program office estimated $3.2 billion in unfunded requirements, including radar upgrades for the aircraft capable of suppressing enemy air defenses, the Air Force’s only platform for that mission. Significant unknowns exist about extending the aircraft’s life beyond 8,000 hours should that be necessary.
This makes any additional JSF schedule delays, deferrals, and cost growth very problematic for the overall Air Force fighter structure. The Air Force plans to retire the F-117A “Nighthawk” stealth fighter in fiscal years 2007 and 2008, stating that other, more capable assets can provide low observable, precision penetrating weapons capability. Program Budget Decision 720, dated December 2005, directed the Air Force to develop a strategy to gain congressional support for this plan. Program officials estimate that the drawdown of the fleet and the shutdown of government and contractor offices and facilities would cost approximately $283 million. No funding is currently allocated for these F-117A retirement costs, and the estimate does not include storage and maintenance of the fleet after retirement. The Navy plans to invest about $4.6 billion in its legacy tactical aircraft over the next seven years (table 7). The Navy is relying heavily on acquiring the F/A-18E/F Super Hornet and the JSF as planned to complete its recapitalization strategy. Delays in the JSF program could require additional modifications beyond those already budgeted for the F/A-18C/D and AV-8B aircraft. Work on EA-6B aircraft is dependent on the timely delivery of the EA-18G Growler, its naval replacement, and on evolving Marine Corps plans for its future electronic attack capability. The following provides an overview of key observations on the Navy and Marine Corps legacy systems. Additional details on these systems are in appendix IV. The F/A-18C/D “Hornet” fleet may be given extra life to ameliorate a fighter shortfall projected by Navy officials. Service officials are considering efforts to extend the life of the legacy aircraft until they are replaced by the JSF. A service life assessment effort to be completed in December 2007 will determine the feasibility, scope of work, and total costs of extending the life of the system.
A preliminary estimate, including the costs of the assessment, is about $2 billion, but officials said that number could very well increase substantially as the assessment progresses and cost estimates mature. Also included in that estimate is the Center Barrel Replacement to eliminate structural limitations caused by cracking in the central fuselage; this effort is about half completed and will cost about $970 million. A Naval Air Systems Command official said they could very well identify additional modifications and structural work required beyond what is funded. Further delays in the JSF could exacerbate these problems. The Navy will retire its EA-6B “Prowler” aircraft by 2013 and replace them with the new EA-18G, but the Marine Corps’s future plans are still evolving. The Navy will transition its most capable aircraft to the Marines, who will operate and maintain them until retirement. The Marine Corps had planned to retire its EA-6B fleet starting in 2015, but officials said plans could change depending on the transition of aircraft from the Navy, and the Marines may need to keep these aircraft in the inventory longer based on the JSF delivery schedule. The Marine Corps has not yet made firm plans for its future electronic attack capability and is considering employment of the JSF and other assets. The Marine Corps has requested a total of $379 million in the fiscal year 2007 Global War on Terror supplemental and the fiscal year 2008 Global War on Terror request to upgrade an additional 18 EA-6Bs with the Improved Capability III electronic attack suite and for other modernization enhancements. The Marine Corps wants to replace its entire AV-8B “Harrier” fleet with the JSF STOVL aircraft as expeditiously as possible. The Harrier—the original STOVL aircraft—is costly to maintain and has a relatively high attrition rate.
Program officials have budgeted very little future funding for Harrier modifications, but delays in JSF deliveries and possible cutbacks in quantity may require some redirection. Harriers may need to be retained in the inventory longer than expected, but officials have not determined the extent of work required, nor the potential cost. Between 1994 and 2001, the majority of AV-8Bs were remanufactured with new fuselages to add structural life and to accommodate night attack modifications and a higher performance engine. Currently, five of the day attack aircraft are being upgraded to night attack capability, and two training aircraft are being refurbished. DOD does not have a single, integrated investment plan for recapitalizing and modernizing its tactical air forces. Rather, each service independently develops its requirements and programs its resources to size and shape its individual force structure. To date, these plans have underperformed, with higher acquisition costs and smaller quantities delivered than planned, and officials from each service forecast near-term and future shortfalls in the capabilities and numbers of aircraft. Moving forward, projected plans are likely unaffordable given competing demands from future defense and nondefense budgets. Efforts to build a more joint position continue with some promise, but recent studies did not significantly affect service acquisition plans. Without a joint, integrated investment strategy for tactical aircraft that plans for and addresses requirements on a DOD enterprise-wide basis, it is difficult to evaluate the efficacy and severity of capability gaps or, alternatively, areas of redundancy. It is also difficult to fully account for and assess real and potential contributions from other current and future non-tactical systems providing similar capabilities, including bombers, missiles, and unmanned aircraft.
The national defense strategy, which comes from an enterprise level in DOD, requires the services to be able to successfully and simultaneously defend the homeland, win two overlapping major contingencies, operate in forward locations around the world to deter aggression, and handle lesser operations as needed, such as humanitarian and peacekeeping missions. Defense strategy continues to evolve, with an increased emphasis on the “long war”—the Global War on Terror—and other asymmetric operations and a reduced emphasis on major theater combat and conventional adversaries. While OSD and the Joint Staff provide oversight and may make adjustments, each military service is primarily responsible for assessing tactical aircraft requirements, sizing its force structure, developing investment plans, and programming resources to meet its individual assignments within the total national defense policy requirements. The future forces planned by the military services will be smaller than today’s force, but more capable and stealthier, according to officials (see table 8). Even so, service officials are forecasting shortfalls in force structure capabilities and numbers throughout this period. Two important factors in sizing and shaping forces are the types of forces and systems needed (capabilities) and the overall size of the force to meet operational demands (capacity). This means maintaining a force structure that not only has modern systems with advanced capabilities to meet projected threats, but also has enough assets to cover assigned targets, threats, and territories. Each service also wants to size its force structure to enable it to employ rotational plans that cycle force packages through sequential phases of active deployment, return from deployment to reconstitute, and preparation for the next deployment. The Navy sizes and shapes its tactical fighter requirements to fill 10 carrier strike forces.
Each future force would comprise 44 aircraft—24 F/A-18E/Fs and 20 carrier-capable Joint Strike Fighters—with equivalent capabilities and a mix of stealthy and non-stealthy aircraft. EA-18Gs will also be assigned to carriers to provide tactical jamming support for the strike force. Marine Corps fighter squadrons are attached to Marine expeditionary units and are sized and positioned to provide direct fire support and protection to front-line forces and reinforcements. The future Marine Corps combat air force is tied to the success of the JSF acquisition program, as officials plan to have an all-JSF force in the future. The future force will also have 40 percent fewer aircraft assigned to each infantry battalion. In 2003, the Department of the Navy began implementing a tactical air integration plan to address affordability concerns. The plan was aimed at more closely integrating Navy and Marine Corps strike fighter inventories, in effect managing tactical air assets as a common pool. The Navy projected net savings of $18.5 billion through fiscal year 2021 by reducing the number of operational legacy fighters required and, in turn, the number of new aircraft needed for recapitalization. This reduced future procurement plans by 409 JSFs and 88 F/A-18E/F aircraft. At the same time, it was recognized that integration would increase operating and maintenance costs because the smaller number of aircraft would need to be maintained at higher rates of readiness in order to meet emergency surge deployments. Actual and planned inventory levels for combined Navy and Marine Corps tactical aircraft from fiscal year 1990 through fiscal year 2025 are shown in figure 4. The Department of the Navy’s tactical aviation forces peaked in the early 1990s at about 1,800 aircraft and shrank to about 1,200 by 2006, principally through retirement of the A-6 fleet and the beginning of drawdowns of the F-14 fleet.
By 2025, the total tactical inventory is slated to decrease by another 300 aircraft, or 25 percent (refer back to table 8). The total inventory in 2025 is therefore projected to be one-half the inventory in the early 1990s. Legacy aircraft would be almost entirely replaced by the more capable new systems.

Shortfalls Forecast by Navy and Marine Corps Officials

Navy officials are projecting persistent future shortfalls in both legacy and new F/A-18 aircraft. The size of the shortfall varies depending on two key variables—the rate of procurement of the Joint Strike Fighter and service life estimates for F/A-18s. Navy and Marine Corps officials told us that buying the JSF at the currently planned rate—requiring a ramp-up to 50 aircraft per year by fiscal year 2015—will be difficult to achieve and to afford, particularly if costs continue to increase and schedules slip. According to one study, a likely scenario assumes acquiring fewer JSFs annually and achieving a modest increase in flying hour life for legacy F/A-18C/Ds; this scenario projects shortfalls starting in 2010 and peaking at 167 legacy strike fighters by 2017. Navy officials also project a shortfall of 131 F/A-18E/Fs by 2024, based on estimated usage and attrition and assuming an increase in flying hour life from 6,000 to 9,000 hours. Options to erase these shortfalls include buying more new aircraft and extending the life of legacy aircraft. Marine Corps officials project a near-term shortfall in the AV-8B fleet ranging from 8 to 14 aircraft between fiscal years 2006 and 2011. Erasing this shortfall after 2011 depends on acquiring the JSF STOVL in the numbers and time frames currently planned. According to officials, a one-year slide in the JSF schedule increases the shortfall by approximately three aircraft per year. As a result, the fleet would need to reexamine squadron structure, and additional aircraft reductions would be expected to negatively affect deployment capabilities.
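The Department of the Navy inventory figures cited above (a peak of about 1,800 aircraft in the early 1990s, about 1,200 by 2006, and a further 300-aircraft decrease planned by 2025) are internally consistent; a short check using only the rounded totals from the text:

```python
# Consistency check of the Navy/Marine Corps inventory trend cited in
# the text. These are the rounded totals from the narrative, not the
# underlying inventory data behind figure 4.
peak_early_1990s = 1800
inventory_2006 = 1200
further_decrease = 300

inventory_2025 = inventory_2006 - further_decrease  # 900 aircraft

# The planned further decrease is 25 percent of the 2006 inventory...
print(further_decrease / inventory_2006)   # 0.25
# ...and the 2025 inventory is one-half the early-1990s peak.
print(inventory_2025 / peak_early_1990s)   # 0.5
```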
The Air Force sizes its tactical air forces to meet warfighting requirements. To fill peacetime defense needs, the Air Force schedules 10 air and space expeditionary forces—the planned organizations of Air Force aircraft, personnel, and support for operations and deployments. These individual force constructs are applied against rotational national security requirements. The Air Force’s future plan for the combat aircraft force it believes affordable is termed the programming force and is shown in figure 5. This plan assumes buying the 183 F-22As deemed affordable by OSD and the current program of record for the JSF, but with a slowdown in fielding. The programming plan projects the total number of tactical aircraft decreasing by about 700 aircraft—from 2,500 currently to about 1,800 in 2025 (refer back to table 8). This plan continues the overall decline in inventory since 1990, when the Air Force fielded about 4,000 tactical aircraft. The programming force shows significant quantities of A-10 and F-15C/D/E aircraft remaining in the force in 2025, with a phased drawdown of all F-16s. The 2025 force is now projected to be roughly 60 percent new systems and 40 percent legacy systems. This is a significant shift from earlier projections, which had planned on an almost all-new force. The shift reflects the cuts in total F-22A purchases and the reduced annual buys of the JSF, with a consequent slowdown in fielding.

Shortfalls Forecast by Air Force Officials

Officials at Air Combat Command—the requirement-setting command that supports the warfighter—told us that the programming (funded) force is not sufficient to meet national security requirements at acceptable levels of risk. According to these officials, the funded program would support only 100 combat aircraft (tactical fighters and bombers) in each air and space expeditionary force compared to 150 aircraft today.
While the new systems are expected to provide improved capabilities compared to the legacy systems they replace, officials do not think the force would have sufficient capacity to cover future security needs at acceptable risk. Air Combat Command develops another force plan, known as the vision force (later reworked into a planning force by Air Force headquarters), that the requiring command believes provides the right mix and numbers to meet future needs at an acceptable level of risk. This plan would procure the full complement of JSFs and the Air Force’s stated requirement for 381 F-22As, which would allow a full operational squadron to be assigned to each of the 10 air and space expeditionary forces. Under this plan, almost all legacy aircraft would be retired by 2025, with the exception of the F-15E, the latest model in the F-15 series, which has an enhanced strike capability. This plan is not constrained by resources, and command officials estimated it would cost more than $100 billion over the funding levels currently expected through 2025. Looking forward over the next 20 years, DOD’s collective tactical aircraft recapitalization plans are likely not affordable as currently structured. Acquisition strategies and plans rest on favorable assumptions about cost and schedule and the ability to sustain funding at high levels over a considerable period of time. Historically, however, costs increase, quantities are reduced, and delivery schedules are delayed. The JSF program represents 90 percent of the remaining investment in new tactical aircraft, and its projected plans are likely unaffordable given future budget constraints and competing demands. First, plans for new systems assume little future cost growth to complete the programs and make optimistic estimates of the availability of future funding, production rates, and the on-time delivery of new aircraft to the warfighter.
While it is understandable to project that programs will execute to cost and quantity targets as planned, the prevailing and historical evidence suggests otherwise. In 1997 we reported that the historical average cost growth of major acquisition systems was at least 20 percent. Our annual assessments of weapon systems continue to show that many programs cost more, take longer to develop, and deliver fewer assets than planned. While the F/A-18E/F program has generally executed to schedule, the F-22A did not, and we believe the recent cost escalation and potential delays in production indicate that the JSF is on a similar path. Air Force and Marine Corps officials told us that the planned maximum procurement rates for the JSF will be very difficult to sustain, and there are already pressures to reduce or delay procurement before it even begins. The fiscal year 2008 budget has reduced near-term quantities, and current planning projections suggest that the Air Force will significantly reduce annual procurement quantities at the midpoint of the program and defer these aircraft to later years, extending the procurement period by 7 years. Second, the tactical aircraft plans do not consider billions in potential added costs for legacy systems. As discussed earlier in this report, substantial service life extension programs and additional modernization enhancements are under serious consideration for many of the legacy fleets. Some of these costs are not reflected in current programmed budgets or have yet to be estimated. For example, the Navy is considering options to extend the life of its F/A-18 fleets but has not yet developed comprehensive cost estimates. An initial estimate is $2 billion, but an official told us the cost will likely be much higher.
The Air Force is now planning to keep the A-10 in the inventory for a longer period of time, but the full costs to extend its life are not known, and some other potential costs, including $2.1 billion to improve the engines, are not funded. One estimate for extending the A-10's life in total was $4.4 billion. We also learned that $283 million to retire the F-117A during fiscal years 2007 and 2008 has not yet been funded, and given officials' comments about unstable divestiture schedules and changing retirement dates, other programs may also not have factored in the retirement costs of closing contractor facilities and government programs. Furthermore, as legacy aircraft remain in the operational force longer, substantial funding for additional sustainment costs and annual operating and maintenance costs will be necessary, particularly if plans to defer JSF procurements are implemented. Third, tactical aircraft plans will face increasing competition for the defense dollar from other new procurements and from continuing costs for the Global War on Terror. DOD is planning the start-up of several big-ticket items, including a new strategic tanker aircraft, a next-generation strike aircraft, unmanned aircraft, and other more transformational programs. Projected costs for ongoing military operations in the Global War on Terror will continue to put pressure on defense investment accounts and are also expected to increase the share of the total budget going to ground forces, which could decrease the share available for aviation programs. Flat or lower funding levels and future systems that can perform the same or similar tactical air missions may substantially alter the ultimate mix, timing, and rate at which combat aircraft are acquired. Fourth, questions of affordability must be viewed in the larger context of federal spending, demographic trends, and impacts on discretionary funding.
The Comptroller General testified last year on the nation's unsustainable fiscal path and its large and growing structural deficit, due primarily to known demographic trends, rising health care costs, and lower federal revenues as a percentage of the economy. Federal discretionary programs, including defense spending, will face serious budget pressures. Even so, defense programs are commanding larger budgets. Over the past 5 years, the department has doubled its planned investments in new weapon systems, from about $700 billion in 2001 to nearly $1.4 trillion in 2006. The Congressional Budget Office evaluated the long-term implications of defense plans and determined that current investment plans would require sustained funding at higher real (inflation-adjusted) levels than at any time since the mid-1980s, due to the sustained purchase of new equipment, increased costs for new capabilities, increased operations and maintenance costs for aging legacy systems, and costlier new systems. At the same time, the Congressional Budget Office notes that increased medical and operating support costs competing for the defense dollar, along with national demographic trends, will continue to put pressure on federal discretionary spending.

Figure 6 illustrates the affordability challenge. It contrasts DOD's optimistic future-funding plans with a more conservative estimate. DOD's plan (the top line in figure 6) assumes funding levels well above historical amounts. The spike in funding required starting in 2008 clearly shows the typical bow wave effect, in which weapon system budget requirements tend to move to the right (be delayed to future years) as programs fail to receive full funding or do not execute as planned. DOD's projections optimistically assume that tactical aircraft procurement will be able to significantly increase its share of defense funding, exceeding historical levels, even as many analysts project flat or falling funding levels. The lower line (the shaded portion of figure 6) assumes funding at the same level as fiscal year 2006, carried forward with annual inflationary increases. This more conservative projection is in line with historical experience. Our analysis of future-year defense plans indicates that the military services in total, and tactical aircraft procurement in particular, have received similar shares of the defense dollar over time--a finding that argues against a strategy that requires a substantial increase in order to succeed. The gap between the lines thus represents DOD plans that are likely unaffordable.

DOD continues broad efforts to improve jointness and bring a more integrated cross-service perspective to its plans and programs. There are promising, but still relatively new, efforts to enhance capabilities-based planning and portfolio management that could be used to better integrate and hone joint tactical aircraft requirements. However, recent efforts to apply jointness to tactical aircraft have not had much direct impact on service investment plans and strategies. We also note that one of the few mission capabilities that has been provided jointly--the tactical airborne electronic attack mission carried out by the EA-6B--is now expected to be replaced in the future by separate and unique aircraft for each of the services. DOD has several promising efforts to enhance jointness and bring a capabilities-based approach to defense investments. The Joint Capabilities Integration and Development System (JCIDS), portfolio management, and other initiatives are evolving mechanisms designed to bring combatant commanders' needs to the forefront and take a more joint, enterprise-wide view of requirements and funding decisions. Continuing efforts to develop joint capabilities-based assessment and planning methodologies will be essential to understand contributions to the warfighter, develop DOD-wide priorities, and craft investment strategies to mitigate shortfalls or eliminate duplication.
JCIDS is a major but relatively new initiative to shift from a service-centric focus on individual acquisition programs to a more top-down and joint view of warfighting capabilities and effects. JCIDS is intended to involve a wide range of stakeholders, including combatant commanders, in identifying capability needs and alternative solutions. JCIDS introduces new methodologies intended to foster jointness and groups warfighting needs into eight functional areas based on warfighting capabilities--such as force application, battle-space awareness, and focused logistics--that cut across the military services and defense agencies. The JCIDS process emphasizes early attention to the fiscal implications of newly identified needs, including identifying ways to pay for new capabilities by divesting the department of lower-priority or redundant capabilities. Our recent report discusses JCIDS and other steps DOD is taking to better identify and prioritize joint warfighting needs, but finds that DOD's service-centric structure and fragmented decision-making processes hinder successful implementation. Another promising and related initiative is joint capability portfolio management. The intent is to manage groups of like capabilities across the enterprise to improve interoperability, minimize capability redundancies and gaps, and maximize capability effectiveness. This would help build budgets around sets of capabilities instead of traditional military accounts. The idea is to take a more joint look at what capabilities combatant commanders and warfighters need, as opposed to the current, more service-centric way in which the services independently buy and field the capabilities they deem important. By shifting the focus from service-specific programs to joint capabilities, DOD should be better positioned to understand the implications of investment and resource trade-offs among competing priorities.
In September 2006, DOD management selected four test cases for experimentation with the joint capability portfolio management concept. Depending on the outcome, tactical aviation would appear to be an excellent candidate for portfolio management by cross-decking similar capabilities in each service. Although the implementation of these portfolio management initiatives seems to have the potential for improving interoperability and minimizing capability redundancies and gaps, DOD still has a long way to go before the effectiveness of this capabilities-based planning and management effort can be determined. The Air Force is also implementing a new "associate wing" concept that is similar in its aims to the Navy-Marine Corps integration effort. Associate wings would pair active and reserve component units to share the same aircraft and facilities while retaining separate chains of command. Rather than each unit's operating and maintaining its own wing, the two would operate and maintain one wing in common. While still very new, the expected outcomes are reduced inventories, reduced operating costs, and fewer future replacements needed.

Despite the Quadrennial Defense Review (QDR) and other studies, there are many unanswered questions about whether the services can achieve overarching goals for modernizing aging tactical aircraft fleets. In testimony on the results of the department's 2006 QDR, the Secretary of Defense stated that continued U.S. air dominance depends on a recapitalized fleet. Surprisingly, however, DOD's 2006 QDR report, issued in February 2006, did not present a coherent joint investment strategy for tactical aircraft systems that addressed needs, capability gaps, alternatives, and affordability. The Joint Strike Fighter, the largest aircraft acquisition program, was not mentioned, and the F-22A was mentioned only in relation to multiyear contracting.
The QDR report did include some nonprescriptive direction for joint air capabilities, emphasizing systems with greater range and persistence, larger and more flexible payloads, and the ability to penetrate and sustain operations in denied areas. In 2005 testimony, we suggested that the QDR would provide an opportunity for DOD to assess its tactical aircraft recapitalization plans and weigh options for accomplishing its specific and overarching goals. By not specifically addressing these issues, DOD missed that opportunity. With the limited information contained in the QDR report, many questions remain unanswered about the future of DOD's tactical aircraft modernization efforts. In addition, DOD conducted a joint air dominance study that looked at current acquisition plans and capabilities. While it validated the need for three JSF variants, the study did not receive wide support from the services. Air Force officials said they submitted their own recommendations, which were not adopted. Another consultant study, directed by the Deputy Secretary of Defense and intended to replicate the Navy-Marine Corps integration effort on a DOD-wide basis, also appears not to have had much direct impact on altering service acquisition plans going forward.

In conducting military operations, U.S. and allied aircraft can be at great risk from enemy air defenses, such as surface-to-air missile systems. The airborne electronic attack mission employs specialized aircraft to suppress, destroy, or temporarily degrade enemy radars and communications and is a critical enabler of successful tactical air operations. Because these specialized aircraft protect aircraft of all services in hostile airspace, the electronic attack mission crosses individual service lines. DOD considers airborne electronic attack to be a key capability for many contingencies and predicts increasing roles and missions for aircraft with these capabilities.
Since 1995, the EA-6B has been DOD's only tactical standoff radar jammer aircraft and has provided support to all services during numerous joint and allied operations against both traditional and nontraditional threats. This capability--one of the few examples of a truly joint asset shared by the military services--is now expected to diminish, replaced by separate and unique aircraft for each of the services. Concerned about a gap in defense suppression capabilities as a consequence of the increasing modernization of enemy air defenses and the aging of the EA-6B, DOD conducted an analysis of alternatives for airborne electronic attack. The May 2002 report concluded that the EA-6B inventory would be insufficient to meet DOD's future needs and identified many potential platform combinations to address capability shortfalls. DOD adopted a system-of-systems approach in which a multitude of systems are needed to provide the required capabilities across the electronic spectrum. The report stated that before a service can begin a formal acquisition program, decisions should be made on whether one service will provide DOD's core capability and whether that capability would reside in a single platform. Subsequent to the report, the Navy, Air Force, and Marine Corps each decided to develop individual and unique electronic attack capabilities to replace the EA-6B in the standoff tactical jamming role. The Navy is developing the EA-18G but plans to procure only enough aircraft to support its carrier strike forces. The Air Force initially proposed a modified B-52 for the standoff radar jamming role. With OSD concurrence, the Air Force cancelled this program because of its high estimated costs and is now considering other options. In the near term, the Marine Corps will continue to use upgraded EA-6B aircraft but anticipates in the future using an electronic-attack-capable Joint Strike Fighter integrated with unmanned aerial systems.
An OSD-directed study is underway to validate the services' requirements. While DOD continues to tout joint capabilities, it is a concern that one area of success is being curtailed. A September 2004 memorandum of understanding between the military services and the Joint Staff stated that the Navy expeditionary EA-6B squadrons will decommission between fiscal years 2009 and 2012, to be replaced by indigenous Navy, Air Force, and Marine Corps electronic attack capabilities. DOD continues to assess requirements and options.

Tactical air recapitalization and modernization is a costly and very challenging enterprise, requiring a delicate and dynamic balancing of funding, fielding schedules, and retirement plans between new system acquisitions and legacy aircraft to ensure that current and future forces can meet national security requirements at reasonable levels of risk. New tactical aircraft programs, for the most part, have not adequately employed evolutionary, knowledge-based acquisition strategies--resulting in escalating costs that undercut DOD's buying power, reduce aircraft purchases, and delay delivering needed capabilities to the warfighter. Because funding needs and plans for new and legacy aircraft programs are interdependent, cost, schedule, or performance problems experienced in acquiring new systems cause perturbations in modernization costs and retirement schedules throughout the operational fleets. Dependent largely on the future course of the Joint Strike Fighter, legacy programs are placed in reactive modes with uncertain and changeable future requirements, unstable retirement plans, and potential unfunded requirements in the billions of dollars. While the services strive to reduce warfighting risks by fielding new systems and limiting investment in legacy systems, they are faced with increased prices and schedule risks for new aircraft while maintaining aging, capability-limited legacy aircraft.
In the past, we have recommended that the department use an evolutionary acquisition approach to develop weapon system programs, coupled with a process that ensures at the start of development that requirements have been reduced to match mature technologies, a feasible design, and a reasonable expectation of available funding. While the department's acquisition policy has included such practices, DOD has not fully embraced their use as it executes current acquisition programs. Despite DOD's repeated declaration that recapitalizing its aging tactical aircraft fleet is a top priority, the department does not have a single, comprehensive, and integrated investment plan with which to craft joint priorities, identify critical capability gaps, and allocate scarce funds. Instead, planning has been done separately by the services. Each military service independently plans and resources individual programs that, collectively, are likely unaffordable and that make it difficult to identify and quantify DOD-wide capability gaps or duplication. DOD needs to bring overall tactical aircraft investments into line with more realistic, long-term projections of overall defense funding and the amount of procurement funding expected to be available for aircraft purchases, and then establish and adhere to a plan that is militarily justified and can be executed within that amount. Efforts to improve joint capabilities-based planning and to manage tactical air assets as a portfolio should be encouraged. In order to recapitalize and sustain capable and sufficient tactical air forces that reflect what is needed and affordable from a joint service perspective and that can be fielded with high confidence of executing as planned, we are making two recommendations to the Secretary of Defense.
The Secretary should

- take decisive actions to shorten cycle times in delivering needed combat capabilities to the warfighter, including adopting a time-certain development cycle that can deliver an increment of new capability within 5 to 6 years after the start of system design and development, and reassessing requirements for ongoing weapon system acquisition programs to identify ways to reduce requirements and speed up delivery of initial capabilities; and

- develop an integrated enterprise-level investment strategy that (1) is based on a joint assessment of warfighting needs and a full set of potential and viable alternative solutions, considering not only new acquisitions but also modifications to legacy aircraft to achieve this balance within realistic and affordable budget projections for DOD; (2) strikes a balance between maintaining near-term readiness and addressing long-term needs; and (3) considers the contributions of bombers, long-range strike aircraft, unmanned aircraft, missiles, and other weapons, both currently in the inventory and planned, that can be employed to attack the same types of targets as tactical aircraft.

DOD concurred with both recommendations in written comments on a draft of this report. These comments appear in appendix II. DOD also provided technical comments that we incorporated in the final report as appropriate. Regarding our first recommendation, that DOD take decisive actions to shorten cycle times in developing and delivering weapon systems, DOD stated that this is consistent with a major initiative of the Under Secretary of Defense for Acquisition, Technology, and Logistics intended to put military capability into the hands of the warfighters faster and more affordably.
The department is also pursuing other efforts supporting such actions, including acquisition personnel pay incentives, acquisition policy changes, focused research and engineering investments in technology, and revised, earlier in-process reviews of requirements and proposed solutions by OSD and the Joint Staff. At the same time, however, DOD stated that aircraft development is a highly complex engineering challenge and that it would be unreasonable to uniformly apply a 6-year cycle time to complex programs like the JSF. We think that it is precisely because of this complexity that programs like the JSF could stand to benefit most from adopting a more evolutionary acquisition process that develops and evolves weapon systems through small, time-phased development increments. DOD's history of substantial cost growth and extended development times for major weapon system acquisitions was among the factors driving recent policy changes to require a more knowledge-based, evolutionary process with time-phased development increments--key recommendations also in the Defense Acquisition Performance Assessment report. We note that the JSF's predecessor, the F-16 fighter program, delivered an initial increment of capability to the warfighter within about 4 years after development began and then successfully delivered 2,200 aircraft with incremental improvements, as technology became available, over a span of about 30 years. We believe this alternative, less risky, and more evolutionary approach is feasible and still available to the JSF as it seeks to develop multiple variants to recapitalize aging tactical fleets involving three services and international partners. Regarding our recommendation that DOD develop an integrated and affordable enterprise-level investment strategy for tactical aviation, DOD concurred but stated it already had elements of such a strategy.
Officials cited key decisions to invest in fifth-generation systems such as the JSF and F-22, prudent life extension programs for selected legacy aircraft, the Joint Air Dominance study conducted during the 2006 QDR, and new processes--the Joint Capabilities Integration and Development System and portfolio management--as bringing integrated, capabilities-based approaches to formulating a tactical aircraft investment strategy. We agree that the department is making strides toward an integrated, enterprise-wide investment strategy, but key processes are still in their beginning stages, and annual budget decisions are still driven primarily on a service-centric, weapon system-specific basis. The new joint capability portfolio management initiative is a reaction to the current environment in which the services independently budget, buy, and field capabilities. It has the potential to bring a joint warfighter, cross-service view and disciplined budgeting over sets of mission area capabilities, but test cases for experimenting with and proving the concept are just beginning. The 2006 QDR had the potential to, but did not, present a coherent joint investment strategy that addressed needs, capability gaps, alternatives, and affordability. These are critical, but now largely missing, elements of the comprehensive and integrated investment strategy we are recommending.

We are sending copies of this report to the Secretary of Defense, the Secretary of the Air Force, the Secretary of the Navy, the Commandant of the Marine Corps, and the Director, Office of Management and Budget. Copies will also be made available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Please contact me at (202) 512-4841 if you or your staff have any questions concerning this report. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report.
Major contributors to this report are listed in appendix V.

To determine current risks and future plans for DOD's new tactical aircraft acquisition programs, we evaluated plans, budgets, delivery schedules, and results to date on the JSF, F-22A, F/A-18E/F, and EA-18G. We compared cost, schedule, and performance data to prior estimates to identify significant changes and their causes. We discussed concerns and emerging issues with officials from the program offices, the requiring commands, and service headquarters. To limit impacts on the services and leverage our work, we drew extensively upon prior and ongoing GAO engagements on the JSF, F-22A, and EA-18G. To determine impacts on legacy systems and retirement schedules, we reviewed work content and funding requirements for ongoing and projected modernization and sustainment projects for tactical aircraft. We discussed future plans for legacy systems and retirement schedules, and the degree to which they have been affected by cost, schedule, and performance outcomes for new acquisition systems. We compiled lists of unfunded requirements and estimates of costs for service life extension programs. To determine the extent to which DOD has developed an integrated investment plan for future tactical aircraft, we analyzed Air Force, Navy, and Marine Corps plans and processes for establishing force and capability requirements, the factors used to size and shape future force structure to meet national security requirements, and how capability gaps or redundancies are addressed. We reviewed OSD and Joint Staff responsibilities and processes for exercising program management and oversight of service programs and new initiatives intended to improve enterprise planning and look for integrated DOD-wide solutions.
In performing our work, we obtained information and interviewed officials from the F-22A System Program Office, Wright-Patterson Air Force Base, Ohio; the F/A-18 System Program Office, Patuxent River, Maryland; program offices for Air Force legacy systems, Wright-Patterson Air Force Base, Ohio; program offices for Navy and Marine Corps legacy systems, Patuxent River, Maryland; Air Combat Command, Langley Air Force Base, Virginia; Naval Air Systems Command, Patuxent River, Maryland; and Navy, Marine Corps, and Air Force headquarters offices, OSD, and Joint Chiefs of Staff offices, Washington, D.C. We performed our work from June 2006 through March 2007 in accordance with generally accepted government auditing standards.

This appendix provides more details on new and legacy tactical aircraft to expand upon summary information provided in the body of this report. We include a brief description of each aircraft's mission and program status and our observations on program execution and outcomes. Where applicable, we also highlight recent GAO work on some systems. The appendix also includes a funding table for each aircraft that consolidates the budget requests in the Fiscal Year 2008 Defense Budget, the Fiscal Year 2007 Global War on Terrorism Supplemental, and the Fiscal Year 2008 Global War on Terror request. The budget information in these tables is expressed in current (then-year) dollars, and the totals may not add exactly because of rounding. The fiscal year 2007 funding shown in these tables has been appropriated by Congress, except for the supplemental requests.

The F-22A is the Air Force's next-generation air superiority fighter and incorporates a stealthy and highly maneuverable airframe, advanced integrated avionics, and a supercruise engine. It will replace or complement the F-15 as the Air Force's primary air-to-air fighter and was originally intended to counter threats posed by the Soviet Union.
The Air Force has decided to add more robust air-to-ground and intelligence-gathering capabilities not previously envisioned at program start but now considered necessary to increase the aircraft's utility. Demonstration and validation began in October 1986 and system development in June 1991. Low-rate initial production was approved in August 2001 and full-rate production in March 2005. The first production aircraft was delivered in June 2003 and, as of October 2006, 78 aircraft had been delivered to the operational forces. The program of record is to acquire a total of 183 aircraft at a total cost of $62.6 billion. The Air Force plans to complete procurement in 2010 under a multiyear contract. Initial operational capability was declared in December 2005. In its December 2006 annual report, DOD's Director of Operational Test and Evaluation determined that the F-22A is operationally effective in the air-to-air mission role and in the air-to-ground mission against fixed targets using the Joint Direct Attack Munition. The aircraft is not yet operationally suitable due to reliability and maintainability deficiencies. Operational users report that the aircraft has performed excellently in military exercises against representative threats and represents a large advantage over the F-15. The Air Force is implementing a modernization and reliability improvement program and plans to invest another $6.3 billion to develop and integrate more robust ground attack, intelligence-gathering, and other new capabilities. Formally established in 2003, the F-22A's modernization program is currently planned as three increments of increasing capability to be developed and delivered over time, from fiscal year 2007 to 2013. Additional modernization is expected, but the content and costs have not been determined or included in projected budgets beyond 2013. The Air Force's current stated need is for 381 F-22As.
However, because of past cost overruns and current budget constraints, OSD states that 183 aircraft are all that is needed and affordable. This leaves a 198-aircraft gap with the Air Force's stated need. We have reported on F-22A issues for many years and have recommended that a new and executable business case be prepared that more accurately and realistically supports the current program of record and that resolves the capability gap between what the Air Force requires and what DOD can afford. During the more than 20 years the aircraft has been in development, the conditions underpinning the original business case substantively changed--threat and employment plans changed, costs increased, the development period doubled, and new mission requirements were added. Without a new, relevant business case on the appropriate number of F-22As for our national defense, it is uncertain whether additional investments in the modernization program are advisable. The Air Force is working with the contractor to fix structural deficiencies on the F-22A. Fatigue testing identified cracks near the horizontal tail section of the aircraft. The Air Force is planning modifications to strengthen the structure to achieve the 8,000-hour service life. The Air Force estimates the costs to modify 72 F-22As will be approximately $124 million. These modifications will not be fully implemented until 2010. At the start of modernization, all three critical technologies essential to achieving capability requirements were considered mature by best practice standards. Since that time, however, the program has added three more critical technologies, all of which are immature. Immature and untested technologies significantly increase the risk of poor cost and schedule outcomes as the program pushes forward.

The JSF program's goals are to develop and field an affordable, highly common family of stealthy, next-generation strike fighter aircraft for the Navy, Air Force, Marine Corps, and U.S. allies.
The carrier-suitable variant will provide the Navy a multirole, stealthy strike aircraft to complement the F/A-18E/F. The conventional take-off and landing variant will primarily be an air-to-ground replacement for the Air Force's F-16 and A-10 aircraft, and will complement the F-22A. The short take-off and vertical landing (STOVL) variant will be a multirole strike fighter to replace the Marine Corps' F/A-18 and AV-8B aircraft. The JSF program is DOD's most costly aircraft acquisition program. DOD estimates that the total cost to develop and procure its fleet of aircraft will be $276 billion, with total costs to maintain and operate the JSF adding another $347 billion over its life cycle. It is also DOD's largest cooperative development program. Eight partner countries are providing funding for system development and demonstration: Australia, Canada, Denmark, Italy, the Netherlands, Norway, Turkey, and the United Kingdom. Concept demonstration began in November 1996. The program entered system development and demonstration in October 2001 and is expected to run through fiscal year 2013. Manufacture and assembly of test aircraft are continuing, and first flight of the Air Force's variant occurred in December 2006. Overall, the cost estimate to develop the JSF has increased from $34.4 billion in 2001 to $44.5 billion in 2005—about 29 percent. Procurement costs have increased from $196.6 billion in 2001 to $231.7 billion in 2005—about 18 percent. Since program start, JSF quantities have been reduced by 530 aircraft. Current estimated program acquisition unit costs are about $112 million, a 38 percent increase since 2001. We recently issued our third annual report on the JSF acquisition. The development team has achieved first flight and has overcome major design problems found earlier in development.
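The cost-growth percentages cited above follow directly from the reported dollar figures. The sketch below recomputes them; the dollar amounts (in billions of then-year dollars) are taken from the text, and the helper function `pct_increase` is ours, introduced only for illustration:

```python
# Recompute the JSF cost-growth percentages cited in the text.
# Dollar figures (in billions) are the 2001 and 2005 estimates reported above.

def pct_increase(old, new):
    """Percentage increase from old to new, rounded to the nearest whole percent."""
    return round((new - old) / old * 100)

development_growth = pct_increase(34.4, 44.5)    # development cost estimate
procurement_growth = pct_increase(196.6, 231.7)  # procurement cost estimate

print(f"Development cost growth: about {development_growth} percent")   # about 29 percent
print(f"Procurement cost growth: about {procurement_growth} percent")  # about 18 percent
```

The same computation applied to the unit-cost figures in the text yields the reported 38 percent acquisition unit cost increase.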
However, the current acquisition strategy still reflects very significant risk that both development and procurement costs will increase and that aircraft will take longer to deliver to the warfighter than currently planned. Even as the JSF program enters the midpoint of its development, it continues to encounter significant cost overruns and schedule delays. As a result of the program reporting a Nunn-McCurdy unit cost breach, a new baseline was established in 2004 with additional costs of $19.4 billion; since then, estimated costs to complete the acquisition have increased another $31.6 billion. OSD cost analysts are concerned about worsening cost performance and believe the cost to complete the program will further escalate. The program has also experienced delays in several key events, including the start of the flight test program, delivery of the first production representative development aircraft, and testing of critical mission systems. Our past reports have found that the acquisition program is not following a knowledge-based evolutionary approach, which places it at risk of continued poor program outcomes. The degree of concurrency between development and production in the JSF's acquisition strategy carries significant risk of cost and schedule overruns and of late delivery of promised capabilities to the warfighter. For example, at the time of the low-rate initial production decision, only one aircraft will have flown; less than 1 percent of the flight test program will have been completed; and none of the three variants will have a production representative prototype built. The 7-year flight test program, comprising more than 11,000 hours of testing, began only in December 2006. It will not be until 2011 that a fully capable, integrated JSF is scheduled to begin flight testing. By that time, DOD expects to have committed to buy 103 production aircraft for $20 billion.
Therefore, almost all critical flight testing remains to confirm the aircraft will indeed deliver the required performance. Manufacturing and technical problems can delay the completion of the flight test program, necessitate design changes, increase the number of flight test hours needed to verify the system will work as intended, and affect when capabilities are delivered to the warfighter. DOD appears to be taking some actions to lessen funding risk—the ability to sustain funding in times of austere budgets or against competing priorities. DOD's plan in 2006 assumed extremely high annual funding rates averaging $14 billion between 2012 and 2023. This is an extremely large annual funding commitment that carries a correspondingly high level of funding risk as the program moves forward and must annually compete with other programs for the defense dollar. Due to affordability pressures, DOD is beginning to reduce procurement budgets and annual quantities. The recently released fiscal year 2008 defense budget shows declining procurement quantities for the first years of production. To meet future constrained acquisition budgets, Air Force and Navy officials and planning documents suggest a decrease in maximum annual buy quantities from the 160 shown in the current program of record to about 115 per year, a 28 percent decrease. While this will reduce annual funding requirements, it will also stretch the procurement program at least seven years, to 2034, assuming buy quantities are deferred rather than eliminated. The F/A-18E/F Super Hornet program was approved as a major modification in the F-18 series in May 1992. It is a twin-engine, single- and two-seat, multi-mission tactical aircraft designed to perform fighter escort, interdiction, fleet air defense, and close air support missions. The F/A-18E/F is replacing the F/A-18A/B/C, has improved range and payload, and is less detectable.
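The 28 percent figure above is simple arithmetic on the annual buy quantities; a minimal check, using only the quantities cited in the text:

```python
# Effect of reducing the JSF maximum annual buy quantity, per the figures above.
old_rate = 160  # maximum annual buy in the current program of record
new_rate = 115  # approximate maximum annual buy suggested by officials

decrease_pct = round((old_rate - new_rate) / old_rate * 100)
print(f"Annual buy quantity decrease: about {decrease_pct} percent")  # about 28 percent

# Deferring rather than eliminating aircraft means the same total quantity
# must be bought at the lower rate, which is why the program stretches
# out by years rather than shrinking in size.
```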
In addition to the procurement quantity of 462 E/F aircraft, the Navy is also procuring 84 to 90 airframes for the EA-18G program (total acquisition up to 552 aircraft). Development began in 1992, procurement in 1996, and initial operational capability was declared in September 2001. Through fiscal year 2006, the Navy has taken delivery of 272 aircraft and has 210 aircraft on a 5-year multiyear contract. The Navy has received an unsolicited draft proposal for a third multiyear contract that would complete the planned program. Navy officials believe this could reduce unit costs, but told us that, to be effective, the contract would need a quantity higher than the 70 aircraft remaining to be bought. This would seemingly require an increase in Navy buys or the addition of potential foreign military sales. Super Hornet aircraft had flown over 340,000 hours by the end of December 2006 and have been employed in combat operations. The Navy originally planned to buy 1,000 aircraft, but the 1997 Quadrennial Defense Review reduced the quantity to 548, expecting a quicker transition to the JSF, with provisions for additional procurement if the JSF is delayed. In 2003, the quantity was further reduced to 462 when a study showed closer integration of Navy and Marine Corps aviation fleets would provide greater efficiency for common assets. The F/A-18E/F acquisition program is mature and has had relatively good procurement cost and schedule outcomes. One substantive reason for good outcomes is the low-risk, evolutionary acquisition strategy adopted. The E/F variant is part of the F/A-18 family of aircraft that has gradually upgraded capabilities since delivery of the original F-18 in the late 1970s. It has substantial commonality with its predecessor C/D models and leveraged previous technologies.
For example, the initial release of the E/F models incorporated the avionics suite from the C/D models, with provisions for upgrades to occur subsequent to the basic air vehicle development. Planned upgrades to the F/A-18E/F continue to incrementally add capabilities. Current production is phasing in block upgrades, including the active electronically scanned array radar, advanced crew station, network-centric operation, and time-critical strike modifications. Navy program officials reported that, for the past three years, full-rate production aircraft have been consistently delivered up to 3 months ahead of schedule; that the program is mature; and that its current costs remain well defined and within targets. While platform production and fielding have been successful, the December 2006 report of the Director of Operational Test and Evaluation identified ongoing tests and deficiencies in several of the aircraft's major systems, including radar, defensive countermeasures, and weapons. The report states it is paramount that all systems interoperate properly in order to allow for optimal operational effectiveness and suitability. The program has reported two Nunn-McCurdy (10 U.S.C. 2433) breaches in unit cost since 1999, but these are attributable more to external factors than to system development, production, or management problems. The first breach occurred in 1999 when the procurement quantity was significantly reduced by the QDR. The second breach occurred in 2005 when the quantity was again reduced. Also, the OSD Comptroller decided to break out program reporting for the EA-18G aircraft separately from the E/F models. In doing so, common support costs for both programs were budgeted in the E/F program. Prior to this review, we last reported on the E/F program specifically in our 2003 annual weapon systems assessment.
At that time, program officials noted that the aircraft demonstrated two to three times the quality of the F/A-18C/D and had provided measurable improvements to squadron readiness. In addition, all F/A-18E/F preplanned upgrades continued to track to their program schedules. Program officials also stated that the active electronically scanned array radar program continued to execute as planned, and the program received the first engineering and manufacturing development unit in 2003. The EA-18G is the replacement for the Navy's EA-6B Prowler and will provide carrier strike forces with electronic attack and tactical jamming capabilities to defeat enemy air defenses and to protect strike fighters and the carrier group. Derived from the combat-proven F/A-18F aircraft, the EA-18G incorporates advanced airborne electronic attack avionics for the suppression of enemy air defenses, including accurate emitter targeting for employment of onboard weapons such as the High-Speed Anti-Radiation Missile. The two-seat EA-18G airframe is about 90 percent common with the F/A-18F airframe and is procured under the same multiyear contract. The two models diverge at a point in the production line, and airframes destined to be Growlers receive the electronic attack subsystems. System development and demonstration was about 70 percent complete by October 2006. Two test articles were delivered in 2006 and first flight was in August 2006. The low-rate initial production decision is scheduled for late April 2007 and initial operational capability is planned for the last quarter of 2009. The Navy is proposing to reduce the total quantity of EA-18Gs from 90 to 84. The reduction is a result of re-evaluating inventory requirements in association with the Navy's fiscal year 2008 budget and the application of tiered readiness, as well as a reduction of four aircraft from the first low-rate production buy. The Navy expects to receive its first EA-18G in 2009.
We reported in 2006 on the EA-18G's acquisition schedule for integrating the electronic attack subsystems. Our analysis showed that the program was not fully following the knowledge-based approach espoused in best practices and DOD's acquisition guidance, thus increasing the risk of cost growth, schedule delays, and performance problems. None of its five critical technologies were fully mature when system development started, and, at the time of our review, flight testing had not begun. The Navy proposed buying one-third of the total quantity as low-rate initial production aircraft based on limited demonstrated functionality. We recommended DOD consider outfitting additional EA-6Bs with the improved electronic suite for an interim capability, which would allow the restructuring of EA-18G production plans to begin procurement after full functionality was demonstrated. This year, our follow-on review, conducted as part of our annual assessment of major weapon systems, determined that progress has been made but that three of the five critical technologies are still not fully mature by best practice standards, even though production is slated to start in 2007. Flight testing is underway and, until full functionality is demonstrated, there are risks of redesign and retrofit. Fifty-six aircraft are already on the F-18 multiyear contract, most procured as low-rate initial production aircraft based on limited demonstrated functionality. A fully functioning Growler, one that meets or exceeds the upgraded EA-6B capability, will not complete operational testing until January 2009, 20 months after production starts and after more than one-third of the total fleet has already been bought. Navy officials agree that the EA-18G's schedule is aggressive, but disagreed with our overall assessment of the EA-18G. Officials reported that the program has been stable since its schedule was developed in 2003 and is meeting or exceeding all cost, schedule, and performance parameters.
Furthermore, officials stated that some technologies are evolutionary upgrades of systems previously tested on the EA-6B aircraft with demonstrated effectiveness. We note, however, that these technologies are in new environments with form and fit challenges, including space constraints, which could impact performance and ultimate design. The December 2006 annual report from the Director of Operational Test and Evaluation stated that the schedule remains aggressive with plans to fully assess risk areas to achieve initial operational capability in fiscal year 2009. The Director reported that the primary risks include the integration of multiple components of the electronic attack system onto the F/A-18E/F platform and the operator workload for the two-man crew in missions currently performed by the four-person EA-6B aircraft. The A-10 was the first Air Force aircraft specially designed for close air support of ground forces. It is a simple, effective, and survivable twin-engine jet used against all ground targets, including tanks. Officials cite exceptional combat results during Desert Storm and the Global War on Terror. Some aircraft are specially equipped for airborne forward air control. Because of the A-10's relevant combat capabilities—demonstrated first during Desert Storm and recently in the Global War on Terror—the Air Force now plans to keep it in the inventory longer than anticipated. How long, and with what upgrades, also depends on whether the JSF aircraft are delivered on schedule. The Air Force is pursuing several major modifications to upgrade systems and structures on the A-10 fleet. A major re-winging effort is planned for 2007 through 2016 that will replace the "thin skin" wings on 242 aircraft at an estimated cost of $1.3 billion. This effort will help to extend the A-10's service life to 16,000 hours. The Precision Engagement program modernizes cockpit controls and upgrades avionics and weapons.
All 356 aircraft in the force are slated to receive the Precision Engagement suite. Total cost to complete the modification is estimated to be $420 million. Significant investments are underway, and others are planned or proposed, to modernize 356 A-10s and to extend service life from 8,000 to 16,000 flying hours in order to achieve the goal of keeping the aircraft in service until 2025 or later. However, because of post-Cold War plans to retire the aircraft starting in the early 1990s, the A-10 fleet received no money for major modifications or programmed depot maintenance during the 1990s. As a result, the Air Force is now faced with a very large backlog of maintenance, structural repairs, and extensive modifications to modernize the A-10 fleet and keep it viable. Officials have begun major upgrades to modernize the cockpit and major subsystems and to replace the wings on most of the fleet. Officials are also finding that, as older aircraft are inspected and opened up for modification, additional and more costly structural and sustainment work is being identified beyond initial plans. Even with the higher priority accorded the aircraft, program officials identify at least another $2.7 billion in unfunded requirements. Chief among these is an engine upgrade program estimated at $2.1 billion. It is intended to provide the A-10 with significantly improved engine capabilities. However, the proposal was deferred by the requiring command because of limited funding and higher warfighter priorities. The Air Force's Fleet Viability Board, which assesses aging aircraft fleets and recommends to the Secretary and Chief of Staff of the Air Force whether aircraft should be retired or continued in service, recently determined that the A-10 is still viable and validated many of the modifications and repairs already underway. The Board recommended funding this engine upgrade in order to extend the A-10's service life until 2030.
The Board's assessment identified mission limitations due to insufficient thrust to maximize survivability in the current threat environment with existing engines. Although agreeing that the engine upgrade would be desirable if funds were available, the requiring command continues to defer this program as a lower priority. We note that the Air Force has requested development funding of $230 million for the engine upgrade program in the 2008 supplemental request. The F-15A/B/C/D Eagle is a single- and two-seat, twin-engine, all-weather tactical fighter designed to gain and maintain air supremacy over the battlefield. The F-15E Strike Eagle is a two-seat, dual-role fighter designed to perform air-to-air and air-to-ground missions. An array of avionics and electronics systems gives the F-15E the capability to strike targets at low altitude, day or night, and in all weather. The Air Force has a number of ongoing improvement efforts for the F-15 fleet, including a helmet mounted cueing system, a new identification friend-or-foe system, various computer upgrades, and new radar for the F-15E. The Joint Helmet Mounted Cueing System is planned for several DOD systems and provides pilots the capability to aim weapons and sensors by looking at the intended target. The new friend-or-foe identification system will solve obsolescence issues, add capability, and be upgradeable for the future. Computer upgrades also resolve obsolescence issues, enhance on-board computers, and improve avionics performance. The F-15E model will receive the improved active electronically scanned array radar. For years, modernization efforts and funding for the F-15C/D aircraft had been concentrated on about half the fleet—178 aircraft of its total inventory of 391. This was the number of aircraft the Air Force projected were needed to provide sufficient force structure to meet defense requirements and to complement the F-22A.
That projected number was predicated upon the Air Force receiving its full F-22A stated requirement of 381 aircraft. However, due to affordability, the Air Force now faces a 198-aircraft shortfall in the quantity of F-22As it is slated to receive. As a result, officials expect that more F-15C/Ds will need to be modernized and retained longer than planned. Originally planned for retirement by 2015, substantial numbers of F-15C/D aircraft now need to be kept operational to 2025 and perhaps beyond. A multi-staged improvement program for the 178 aircraft, including recent upgrades of the engines and radar, is mostly complete. Officials identified near-term unfunded requirements on these aircraft totaling $2.3 billion, including new radars and countermeasure sets. In addition, potential service life extension efforts on the fleet and backlogged unfunded requirements to modernize aircraft beyond the 178 may be needed, but the full costs have not been identified. The Air Force also plans to keep 224 F-15Es in service beyond 2025. These are the newest F-15s, with enhanced strike capabilities. The major upcoming upgrade effort on the F-15E is a radar modernization program to add active electronically scanned array radar. The Air Force has delayed funding for this effort, estimated to cost $2.3 billion, and now plans to start procurement in 2010. Program officials identified unfunded requirements totaling about $1.7 billion, including upgraded radar warning receivers, a helmet mounted cueing system, and long-term sustainment efforts to address electrical, structural, and power plant concerns to keep the aircraft viable for another 25 or more years. The F-16 Fighting Falcon is a single-engine, multi-role fighter with full air-to-air and air-to-ground combat capability. It provides a relatively low-cost, high-performance weapon system for the United States and allied nations. The F-16 currently comprises more than half of the Air Force's fighter force.
The fleet includes several different configurations or blocks. The newest blocks incorporate the high-speed anti-radiation missile targeting system, making the F-16 the Air Force's only platform specifically for the suppression of enemy air defenses. The Air Force is not currently purchasing any new F-16s, but the contractor is still producing them for foreign sale. Production is slated to continue past 2009 to accommodate recent sales. If the Air Force were to buy new aircraft, officials estimated that it would cost $380 million for development and about $50 million per aircraft procured. The Air Force has a number of ongoing improvement efforts for the F-16, including structural airframe modifications, avionics and capability upgrades, an engine service life extension program, and new engines for some F-16 models. Falcon STAR is an effort to modify the airframe to allow the F-16 to reach the original 8,000 hours estimated for its flight life. Due to increased workload and weight that exceed the original specifications of the aircraft, the F-16 must be structurally modified to compensate for the increases. A number of common avionics and capability upgrades are necessary to provide increased processor speed and memory, add color displays, and incorporate the Joint Helmet Mounted Cueing System. The F110 engine service life extension program addresses safety, reliability, and maintainability concerns, and new engines for the Block 42 aircraft will provide needed thrust improvements. With over 1,300 aircraft, the F-16 fleet comprises more than one-half of the Air Force's fighter and attack forces. The fleet includes several different configurations that were acquired and upgraded in evolutionary fashion over a considerable period of time. Reduced annual buy quantities on the JSF and deferred deliveries to the warfighter mean that F-16s slated to be replaced by the JSF and retired will need to remain operable and relevant for additional years.
The Air Force is already investing several billion dollars to keep the fleet operable, improve capabilities, and sustain it to meet its original expected service life; a preliminary unfunded cost estimate to increase the life expectancy of the newer fighters is $4.5 billion. Without improvements, almost 90 percent of the fleet would exceed design limits on engines by 2010. High usage, increased stresses, and more weight than planned threaten to cut life expectancy in half. Significant unknowns exist about extending the life beyond 8,000 hours should that be necessary. This makes any additional JSF schedule delays, deferrals, and cost growth very problematic for the overall Air Force fighter structure. If it becomes necessary to enable the newest F-16 aircraft to reach a 10,000 flying hour life, a program official estimated an additional cost of $2.2 billion for structural enhancements. The program office also identified another $3.2 billion in unfunded requirements, including radar upgrades to aircraft capable of suppressing enemy air defenses. The oldest F-16s are to be retired over the next few years, and the Air Force has halted modifications and funding for these aircraft. The F-117A Nighthawk is the world's first operational aircraft designed to exploit low observable stealth technology. This precision strike aircraft penetrates high-threat airspace and uses laser-guided weapons against critical targets. As part of its transformation plans, the Air Force proposed retiring the F-117A aircraft in 2007 and 2008, stating that other, more capable assets can provide low observable, precision penetrating weapons capability. Program Budget Decision 720, dated December 2005, directed the Air Force to develop a strategy to gain congressional support for this plan. Congress has agreed, with certain limitations, mandating that the Air Force retire F-117As in "pristine" storage in case the aircraft would need to be recalled into service.
Program officials estimate that the drawdown of the fleet and the shutdown of government and contractor offices and facilities would cost approximately $283 million. However, no funding is currently allocated for these F-117A retirement costs. This cost does not include long-term storage and maintenance of the fleet after such a retirement. The F/A-18A/B/C/D is an all-weather fighter and attack aircraft also known as the Hornet. It is a single- and two-seat, twin-engine, multi-mission fighter/attack aircraft that can operate from either aircraft carriers or land bases. The F/A-18 fills a variety of roles: air superiority, fighter escort, suppression of enemy air defenses, reconnaissance, forward air control, close and deep air support, and day and night strike missions. The major ongoing modification effort is the Center Barrel Replacement to eliminate structural limitations caused by cracking in the central fuselage. This effort is expected to cost about $970 million. During scheduled inspections of the aircraft, the Navy also identified cracks in the wing structure in about 40 percent of the aircraft. These could cause safety of flight issues in the future but are not thought to be serious enough at this time to ground the aircraft or to require immediate repair. The F/A-18s are the backbone of the naval tactical aircraft fleet, but are quickly running out of service life. The Navy plans to soon retire the A and B models, and the Marine Corps plans to transition entirely to the JSF for its future strike force. The Navy's modernization efforts are focused on the remaining 421 F/A-18C/D aircraft. The Navy has an ongoing assessment of the service life of this aircraft that is expected to be completed in December 2007.
At this time, the need for and extent of future modifications are not clear, but a Naval Air Systems Command official said the assessment could very well identify additional modifications and structural work required beyond what is funded. Further delays in the JSF could exacerbate funding shortfalls to sustain and modernize the operational fleet. While the F/A-18C/D legacy aircraft are currently meeting both the Navy's and the Marine Corps' force structure requirements and readiness levels, inventory reductions through the Navy-Marine Corps tactical aircraft integration plan, JSF delays, and better defined structural limits of the F/A-18C/D have created a shortfall, starting in 2011, in the number of aircraft that Navy officials project will be needed to support war-fighting plans. One option the Navy is considering would be the purchase of additional F/A-18E/F models to resolve this shortage. Another option under consideration is extending the life of its F/A-18C/D fleet to mitigate projected shortfalls. The full cost of the life extension program is not known at this time. The service life assessment effort to be completed in December 2007 will determine the feasibility, scope of work, and total costs for extending the life of the system. The current estimate for extending service life, including the costs of the assessment, is about $2 billion, but officials said that number could very well increase substantially as the assessment progresses and cost estimates mature. Concerned over the looming gap in the Navy's inventory, in May 2006, the Senate Committee on Armed Services recommended that the Navy consider buying more F/A-18E/Fs to mitigate any possible shortfall in aircraft until JSF aircraft are delivered. The primary mission of the EA-6B Prowler is the suppression of enemy air defenses in support of strike aircraft and ground troops by interrupting enemy electronic activity and obtaining tactical electronic intelligence within the combat area.
The Prowler is a long-range, all-weather aircraft with advanced electronic countermeasures capability, and it enhances the combat survivability of strike force aircraft and weapons by denying, delaying, and degrading the acquisition of friendly forces by enemy air defense systems. Both the Navy and the Marine Corps maintain Prowler assets. In 1995, the EA-6B was selected to become the sole tactical radar support jammer for all services after the Air Force decided to retire its fleet of EF-111 aircraft. This decision resulted in increased use of the EA-6B, as the Prowler has provided airborne electronic attack capability during numerous joint and allied operations since 1995. The Navy plans to start retiring its EA-6Bs in 2008 and replace them with the EA-18G as its core airborne electronic attack component. The Marine Corps had expected to retire its EA-6B assets in 2015, but that could change as future plans for their replacement are still evolving. Three significant upgrades to the EA-6B are the Improved Capability electronic suite modification (ICAP III), which provides the EA-6B with greater jamming capability; an upgrade to the aircraft's current electronic pods, which improves frequency band capability; and replacement of the wing center sections of the entire fleet and outer wing panel replacement on portions of the fleet. The ICAP III modification includes the addition of software to allow the EA-6B to automatically pinpoint enemy signals and better receive and utilize data. Aircraft not receiving ICAP III are having their current electronic attack systems upgraded. Funding to replace the wing center sections was added by Congress. To date, 114 wings have been procured and 100 have been installed on aircraft. In addition, 47 EA-6Bs need outer wing panel replacement; Navy officials said that the first four pairs have already been delivered, and procurement will be ramped up to 18 sets per year, with deliveries through 2008.
In 2006, we reported that, as a result of DOD's decision to move to an electronic attack system of systems, the EA-6B would be able to meet the defense suppression needs of the Navy until 2017 and those of the Marine Corps until 2025 if the aircraft were fitted with the ICAP III electronic suite upgrade. Because the EA-18G's five critical technologies were not fully mature and posed a costly risk for design changes, we recommended that DOD consider outfitting additional EA-6Bs with the ICAP III suite, which would allow the Navy to slow EA-18G low-rate production until its technologies become fully mature and functionality is demonstrated. The Navy and Marine Corps operate the EA-6B, which provides electronic attack support DOD-wide at this time. The EA-6B has been upgraded over time to increase its reactive jamming capability. The most important ongoing effort for the EA-6B is the ICAP III electronic suite modification, which provides more rapid emitter detection, selective reactive jamming, and expanded coverage. The Navy has two squadrons currently deployed with ICAP III and plans to equip a total of 15 of its EA-6Bs with the ICAP III suite. The Navy plans to begin decommissioning the EA-6B in 2008 and to retire all of its aircraft by 2013, replacing them with the new EA-18G that will provide electronic attack support to its carrier strike forces. The Navy will start transferring aircraft to the Marine Corps in fiscal year 2010 and complete transfers in 2013 with delivery of the ICAP III aircraft. The Marine Corps planned to retire its EA-6Bs by 2015, but officials said plans could change depending on the transfer schedule and that they may need to keep these aircraft in the inventory longer depending on the JSF delivery schedule. The Marine Corps has not yet made firm plans for its future electronic attack capability and is considering employment of the JSF and unmanned aircraft systems.
We note that the Marine Corps has requested a total of $379 million in the fiscal year 2007 and 2008 global war on terror requests to upgrade an additional 18 EA-6Bs with the ICAP III suite and for other modernization enhancements. The AV-8B Harrier II is a short take-off and vertical landing (STOVL) jet aircraft that deploys from naval ships, advanced bases, and expeditionary airfields. Its mission is to attack and destroy surface targets and escort friendly aircraft, day or night, under all weather conditions during expeditionary, joint, or combined operations. The Harrier is responsible for conducting close air support; armed reconnaissance and air interdiction; and offensive and defensive anti-air warfare, including combat air patrol, armed escort missions, and offensive missions against enemy ground-to-air defenses. The first Harrier squadron is expected to be replaced by the JSF starting in fiscal year 2011. The AV-8B, a more powerful and longer-range model than its predecessor, the AV-8A, was introduced in 1985. The AV-8Bs were originally designed as day-attack-only aircraft, but some were later upgraded to add night attack and radar capabilities. The night attack and radar upgrades enhance the pilot's ability to locate and destroy targets under various weather conditions and at night. Some of the AV-8Bs received an upgrade to enhance night attack with improved multimode radar in 1991-1992. Between 1994 and 2001, the majority of AV-8Bs were remanufactured with new fuselages to add structural life to the airframe and to accommodate the new radar upgrade.
Currently there are several ongoing efforts to add capabilities and improve sustainment for the AV-8B until it is replaced by the JSF, including remanufacturing 5 older day attack aircraft to receive the night attack capability and refurbishing 2 training aircraft; using a more accurate method to track the useful life of the aircraft; and continuing efforts to improve sustainment through a readiness management plan for the airframes and an engine life management plan. The AV-8B was originally designed to last for 6,000 flying hours. This estimate was based on engineering fatigue projections assuming a 20-year service life, flying 300 hours per year, on very rigorous mission profiles. However, the aircraft have typically not been flown in such stressful flight envelopes, and the Marines estimate they will be able to exceed the original 6,000-hour service life and maintain an additional 66 aircraft in service through 2015. In addition, the Marine Corps plans a set of modifications, largely unfunded, that would add important capabilities by 2012 or later to enable the Harriers to be more effective in future threat environments. The AV-8 aircraft was DOD's first STOVL system. The aircraft is costly to maintain and has a relatively high attrition rate. The Marine Corps has 134 AV-8Bs in its current fleet and plans to replace them all with STOVL JSFs by 2025. The new fuselages increased the estimated service life of the AV-8Bs from 6,000 to 9,000 flight hours. Further, the AV-8Bs have not been used as rigorously as the mission profiles used to project their useful life, and officials believe that the fleet can remain in inventory well beyond the expected delivery dates of the JSF, if necessary. Ongoing and planned modernization efforts are minimal.
The Marines are upgrading five AV-8Bs that did not get previous upgrades so that they will now have the night attack capability, and refurbishing two training aircraft. In fiscal year 2007, the Marine Corps began repairs on four aircraft damaged during combat operations using supplemental funding. As another step to mitigate potential slips in JSF production, officials are also increasing the amount of depot-level maintenance on the AV-8B fleet to ensure sufficient numbers are available and capable. The Harrier is scheduled to remain in service until at least 2021, but its retirement is dependent upon the delivery of the JSF. Principal contributors to this report were Michael Hazard, Assistant Director; Bruce Fairbairn; Marvin Bonner; Erin Clouse; Matthew Lea; Sara Margraf; Robert Miller; and Karen Sloan. Defense Acquisitions: Assessments of Selected Weapon Programs. GAO-07-406SP. Washington, D.C.: March 30, 2007. Best Practices: An Integrated Portfolio Management Approach to Weapon System Investments Could Improve DOD's Acquisition Outcomes. GAO-07-388. Washington, D.C.: March 30, 2007. Defense Acquisitions: Analysis of Costs for the Joint Strike Fighter Engine Program. GAO-07-656T. Washington, D.C.: March 22, 2007. Joint Strike Fighter: Progress Made and Challenges Remain. GAO-07-360. Washington, D.C.: March 15, 2007. Tactical Aircraft: DOD Should Present a New F-22A Business Case before Making Further Investments. GAO-06-455R. Washington, D.C.: June 30, 2006. Electronic Warfare: Option of Upgrading Additional EA-6Bs Could Reduce Risk in Development of EA-18G. GAO-06-446. Washington, D.C.: April 26, 2006. Defense Acquisitions: Actions Needed to Get Better Results on Weapon Systems Investments. GAO-06-585T. Washington, D.C.: April 5, 2006. Tactical Aircraft: Recapitalization Goals Are Not Supported by Knowledge-Based F-22A and JSF Business Cases. GAO-06-487T. Washington, D.C.: March 16, 2006.
Systems Acquisition: Major Weapon Systems Continue to Experience Cost and Schedule Problems under DOD's Revised Policy. GAO-06-368. Washington, D.C.: April 13, 2006. Defense Acquisitions: Assessments of Selected Major Weapon Programs. GAO-06-391. Washington, D.C.: March 31, 2006. Joint Strike Fighter: DOD Plans to Enter Production before Testing Demonstrates Acceptable Performance. GAO-06-356. Washington, D.C.: March 15, 2006. Defense Acquisitions: Business Case and Business Arrangements Key for Future Combat System's Success. GAO-06-478T. Washington, D.C.: March 1, 2006. Defense Acquisitions: DOD Management Approach and Processes Not Well-Suited to Support Development of Global Information Grid. GAO-06-211. Washington, D.C.: January 30, 2006. Defense Acquisitions: DOD Has Paid Billions in Award and Incentive Fees Regardless of Acquisition Outcomes. GAO-06-66. Washington, D.C.: December 19, 2005. DOD Acquisition Outcomes: A Case for Change. GAO-06-257T. Washington, D.C.: November 15, 2005. Defense Acquisitions: Progress and Challenges Facing the DD(X) Surface Combatant Program. GAO-05-924T. Washington, D.C.: July 19, 2005. Defense Acquisitions: Incentives and Pressures That Drive Problems Affecting Satellite and Related Acquisitions. GAO-05-570R. Washington, D.C.: June 23, 2005. Defense Acquisitions: Resolving Development Risks in the Army's Networked Communications Capabilities is Key to Fielding Future Force. GAO-05-669. Washington, D.C.: June 15, 2005. Tactical Aircraft: F/A-22 and JSF Acquisition Plans and Implications for Tactical Aircraft Modernization. GAO-05-519T. Washington, D.C.: April 6, 2005. Defense Acquisitions: Assessments of Selected Major Weapon Programs. GAO-05-301. Washington, D.C.: March 31, 2005. Defense Acquisitions: Changes in E-10A Acquisition Strategy Needed Before Development Starts. GAO-05-273. Washington, D.C.: March 15, 2005. Tactical Aircraft: Air Force Still Needs Business Case to Support F/A-22 Quantities and Increased Capabilities. GAO-05-304.
Washington, D.C.: March 15, 2005. Tactical Aircraft: Opportunity to Reduce Risks in the Joint Strike Fighter Program with Different Acquisition Strategy. GAO-05-271. Washington, D.C.: March 15, 2005. Tactical Aircraft: Status of F/A-22 and JSF Acquisition Programs and Implications for Tactical Aircraft Modernization. GAO-05-390T. Washington, D.C.: March 3, 2005.
The Department of Defense (DOD) plans to invest $109 billion in its tactical air forces between 2007 and 2013. Long term, DOD plans to replace aging legacy aircraft with fewer, more expensive but more capable and stealthy aircraft. Recapitalizing and modernizing tactical air forces within today's constrained budget environment is a formidable challenge. DOD has already incurred substantial cost and schedule overruns in its acquisition of new systems, and further delays could require billions of dollars in additional investments to keep legacy aircraft capable and sustainable. Because of the large investments and risk, GAO was asked to review investment planning for tactical aircraft. This report describes the current status of DOD's new tactical aircraft acquisition programs; identifies current impacts on legacy aircraft modernization programs and retirement schedules; and assesses DOD's overall investment plan for tactical aircraft. DOD's efforts to recapitalize and modernize its tactical air forces have been blunted by cost and schedule overruns in its new tactical aircraft acquisition programs: the Joint Strike Fighter (JSF), the Air Force F-22A, and the Navy F/A-18E/F. Collectively, these programs are expected to cost about $400 billion--with about three-fourths still to be invested. The JSF program, which is expected to make up the largest percentage of the new fleet, has more than 90 percent of its investments still in the future. Increased costs and extended development times have reduced DOD's buying power, and DOD now expects to replace legacy aircraft with about one-third fewer new aircraft compared to original plans at each program's inception. The outcomes of these acquisition programs directly impact existing tactical aircraft systems. Until new systems are acquired in sufficient quantities to replace legacy fleets, legacy systems must be sustained and kept operationally relevant. 
Continual schedule slips and reduced buys of new aircraft--particularly in the F-22A and JSF programs--make it difficult for program managers to allocate funds for modifying legacy aircraft to meet new requirements or to set retirement dates for legacy aircraft. Lengthening the life of legacy systems also impacts DOD's new tactical aircraft acquisition programs. DOD has become increasingly concerned that the high cost of keeping aging weapon systems relevant and able to meet required readiness levels is a growing challenge in the face of forecast threats and reduces the department's flexibility to invest in new weapons. DOD's tactical aircraft investments are driven by the services' separate acquisition planning. Moving forward, these plans are likely unexecutable given competing demands from future defense and nondefense budgets. The EA-6B--providing tactical radar jamming capabilities for all services and one of the few examples of a joint asset--is also expected to be replaced by separate and unique aircraft for each of the services. Without a joint, DOD-wide strategy for tactical aircraft investments, it is difficult to identify potential areas where efficiencies might be achieved or where capability gaps might occur in DOD's tactical aircraft acquisitions.
Among other impacts, climate change could threaten coastal areas with rising sea levels, alter agricultural productivity, and increase the intensity and frequency of severe weather events such as floods, drought, and hurricanes that have cost the nation tens of billions of dollars in damages over the past decade. For example, Congress provided around $60 billion in budget authority for disaster assistance after Superstorm Sandy. These impacts pose significant financial risks, but the federal government is not well positioned to address this fiscal exposure, partly because of the complex nature of the issue. Given these challenges and the nation’s fiscal condition, in February 2013, we added Limiting the Federal Government’s Fiscal Exposure by Better Managing Climate Change Risks to our list of high-risk areas. Climate-related impacts will result in increased fiscal exposures for the federal government from many areas, including, but not limited to, its role as (1) the insurer of property and crops vulnerable to climate impacts, (2) the provider of aid in response to disasters, (3) the owner or operator of extensive infrastructure such as defense facilities and federal property vulnerable to climate impacts, and (4) the provider of data and technical assistance to state and local governments responsible for managing the impacts of climate change on their activities. The financial risks from two important federal insurance programs—the National Flood Insurance Program (NFIP) administered by the Federal Emergency Management Agency (FEMA) and the Federal Crop Insurance Corporation (FCIC) administered by the United States Department of Agriculture (USDA)—create a significant fiscal exposure. In 2012, the NFIP had property coverage of over $1.2 trillion and the FCIC had crop coverage of almost $120 billion. NFIP has been on our High Risk List since March 2006 because of concerns about its long-term financial solvency and related operational issues.
While Congress and FEMA intended to finance NFIP with premiums collected from policyholders and not with tax dollars, the program was, by design, not intended to pay for itself. As of December 2013, FEMA’s debt from flood insurance payments totaled about $24 billion—up from $17.8 billion before Superstorm Sandy—and FEMA had not repaid any principal on the loan since 2010. Further, the federal government’s crop insurance costs have increased in recent years for a variety of reasons, more than doubling from $3.4 billion in fiscal year 2001 to $7.6 billion in fiscal year 2012. In March 2007, we reported that both of these programs’ exposure to weather-related losses had grown substantially, and that FEMA and USDA had done little to develop the information necessary to understand their long-term exposure resulting from climate change (GAO, Climate Change: Financial Risks to Federal and Private Insurers in Coming Decades Are Potentially Significant, GAO-07-285 (Washington, D.C.: Mar. 16, 2007)). We recommended that the Secretaries of Agriculture and Homeland Security analyze the potential long-term fiscal implications of climate change on federal insurance programs and report their findings to Congress. The agencies agreed with the recommendation and contracted with experts to study their programs’ long-term exposure from climate change. Both agencies have incorporated the findings of the reports into their climate change adaptation plans—as directed by instructions and guidance implementing Executive Order 13514 on Federal Leadership in Environmental, Energy, and Economic Performance. We are currently examining how these programs account for climate change in their activities. The Biggert-Waters Flood Insurance Reform Act of 2012 requires FEMA to consider topography, coastal erosion areas, changing lake levels, future changes in sea levels, and intensity of hurricanes in updating its flood maps.
The Biggert-Waters Act also reauthorized NFIP through 2017 and made other significant changes to the program, including removing subsidized premium rates for certain properties, eliminating the grandfathering of prior premium rates when a property is remapped, and requiring FEMA to create a reserve fund. While these changes may help put NFIP on a path to financial solvency, their ultimate effect is not yet known. In addition, the program faces challenges in making the changes. For example, implementation of certain changes was delayed by provisions in the Consolidated Appropriations Act of 2014, and S. 1926, which passed the Senate on January 30, 2014, would delay the implementation of certain rate increases contained in the Biggert-Waters Act. As we have previously reported, such delays to rate increases may help address affordability concerns, but they would likely continue to increase NFIP’s long-term burden on taxpayers. In the event of a major disaster, federal funding for response and recovery comes from the Disaster Relief Fund managed by FEMA, and disaster aid programs of other participating federal agencies. The federal government does not fully budget for these costs, thus creating a large fiscal exposure. We reported, in September 2012, that disaster declarations have increased over recent decades to a record of 98 in fiscal year 2011 compared with 65 in 2004. Over that period, FEMA obligated over $80 billion in federal assistance for disasters. We also found that FEMA has had difficulty implementing long-standing plans to assess national preparedness capabilities and that FEMA’s indicator for determining whether to recommend that a jurisdiction receive disaster assistance does not accurately reflect the ability of state and local governments to respond to disasters. 
Had FEMA adjusted its indicator to reflect changes in personal income and inflation, 44 percent and 25 percent fewer disaster declarations, respectively, would have met the threshold for public assistance during fiscal years 2004 through 2011. In September 2012, we recommended, among other things, that FEMA develop a methodology to more accurately assess a jurisdiction’s capability to respond to and recover from a disaster without federal assistance. FEMA concurred with this recommendation. The federal government owns and operates hundreds of thousands of buildings and facilities that a changing climate could affect. For example, in its 2010 Quadrennial Defense Review, the Department of Defense (DOD) recognized the risk to its facilities posed by climate change, noting that the department must assess potential impacts and adapt as required. We plan to report later this year on DOD’s management of climate change risks at over 500,000 defense facilities. In addition, the federal government manages about 650 million acres––nearly 30 percent of the land in the United States––for a variety of purposes, such as recreation, grazing, timber, and fish and wildlife. In 2007, we recommended that the Secretaries of Agriculture, Commerce, and the Interior develop guidance for their resource managers that explains how they expect to address the effects of climate change, and the three departments generally agreed with this recommendation. However, as we showed in our May 2013 report, resource managers still struggled to incorporate climate-related information into their day-to-day activities, despite the creation of strategic policy documents and high-level agency guidance. The federal government invests billions of dollars annually in infrastructure projects that state and local governments prioritize and supervise. In total, the United States has about 4 million miles of roads and 30,000 wastewater treatment and collection facilities.
According to a 2010 Congressional Budget Office report, total public spending on transportation and water infrastructure exceeds $300 billion annually, with roughly 25 percent of this amount coming from the federal government and the rest coming from state and local governments. These projects have large up-front capital investments and long lead times that require decisions about addressing climate change before its potential effects are discernable. The federal government plays a limited role in project-level planning for transportation and wastewater infrastructure, and state and local efforts to consider climate change in infrastructure planning have occurred primarily on a limited, ad hoc basis. Infrastructure is typically designed to withstand and operate within historical climate patterns. However, according to NRC, as the climate changes and historical patterns—in particular, those related to extreme weather events—no longer provide reliable predictions of the future, infrastructure designs may underestimate the climate-related impacts to infrastructure over its design life, which can range as long as 50 to 100 years. These impacts can increase the operating and maintenance costs of infrastructure or decrease its life span, or both, leading to social, economic, and environmental impacts. For example, the National Oceanic and Atmospheric Administration estimates that, within 15 years, segments of Louisiana State Highway 1— the only road access to Port Fourchon, which services virtually all deep- sea oil operations in the Gulf of Mexico, or about 18 percent of the nation’s oil supply—will be inundated by tides an average of 30 times annually due to relative sea level rise. Flooding of this road effectively closes this port. Because of Port Fourchon’s significance to the oil industry at the national, state, and local levels, the U.S. 
Department of Homeland Security, in July 2011, estimated that a closure of 90 days could reduce the national gross domestic product by $7.8 billion. Figure 1 shows Louisiana State Highway 1 leading to Port Fourchon. Despite the risks posed by climate change, we found, in April 2013, that infrastructure decision makers have not systematically incorporated potential climate change impacts in planning for roads, bridges, and wastewater management systems because, among other factors, they face challenges identifying and obtaining available climate change information best suited for their projects. Even where good scientific information is available, it may not be in the actionable, practical form needed for decision makers to use in planning and designing infrastructure. Such decision makers work with traditional engineering processes, which often require very specific and discrete information. Moreover, local decision makers—who, in this case, specialize in infrastructure planning, not climate science—need assistance from experts who can help them translate available climate change information into something that is locally relevant. In our site visits to a limited number of locations where decision makers overcame these challenges— including Louisiana State Highway 1—state and local officials emphasized the role that the federal government could play in helping to increase their resilience. Any effective adaptation strategy must recognize that state and local governments are on the front lines in both responding to immediate weather-related disasters and in preparing for the potential longer-term impacts associated with climate change. We reported, in October 2009, that insufficient site-specific data—such as local temperature and precipitation projections—complicate state and local decisions to justify the current costs of adaptation efforts for potentially less certain future benefits. 
Accordingly, we recommended that the Executive Office of the President develop a strategic plan for adaptation that, among other things, identifies mechanisms to increase the capacity of federal, state, and local agencies to incorporate information about current and potential climate change impacts into government decision making (GAO, Climate Change Adaptation: Strategic Federal Planning Could Help Government Officials Make More Informed Decisions, GAO-10-113 (Washington, D.C.: Oct. 7, 2009)). USGCRP’s April 2012 strategic plan for climate change science recognizes this need by identifying enhanced information management and sharing as a key objective. In April 2013, we also recommended that federal entities designated by the Executive Office of the President work with relevant agencies to identify for decision makers the “best available” climate-related information for infrastructure planning and update this information over time, and to clarify sources of local assistance for incorporating climate-related information and analysis into infrastructure planning and communicate how such assistance will be provided over time. They have not directly responded to these recommendations, but the President’s June 2013 Climate Action Plan and November 2013 Executive Order 13653 on Preparing the United States for the Impacts of Climate Change drew attention to these issues. For example, the Executive Order directs numerous federal agencies, supported by USGCRP, to work together to develop and provide authoritative, easily accessible, usable, and timely data, information, and decision-support tools on climate preparedness and resilience. We also have work under way exploring, among other things, the risk extreme weather events and climate change pose to defense facilities, public health, agriculture, public transit systems, and federal insurance programs.
This work—within the framework of the February 2013 high-risk designation—may identify other steps the federal government could take to limit its fiscal exposure and make our communities more resilient to extreme weather events. Chairman Carper, Ranking Member Coburn, and Members of the Committee, this concludes my prepared statement. I would be pleased to answer any questions you have at this time. If you or your staff members have any questions about this testimony, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Alfredo Gomez, Director; Michael Hix, Assistant Director; and Heather Chartier, Diantha Garms, Cindy Gilbert, Richard Johnson, Joseph Dean “Pep” Thompson, and Lisa Van Arsdale made key contributions to this testimony. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
According to the United States Global Change Research Program, the costs and impacts of weather disasters resulting from floods, drought, and other events are expected to increase in significance as previously “rare” events become more common and intense. These impacts pose financial risks to the federal government. While it is not possible to link any individual weather event to climate change, these events provide insight into the potential climate-related vulnerabilities the United States faces. GAO focuses particular attention on government operations it identifies as posing a “high risk” to the American taxpayer and, in February 2013, added to its High Risk List the area Limiting the Federal Government's Fiscal Exposure by Better Managing Climate Change Risks. GAO's past work identified a variety of fiscal exposures—responsibilities, programs, and activities that may either legally commit the federal government to future spending or create the expectation for future spending in response to extreme weather events. This testimony is based on reports GAO issued from March 2007 to November 2013 that address these issues. GAO is not making new recommendations but made numerous recommendations in prior reports on these topics, which are in varying states of implementation by the Executive Office of the President and relevant federal agencies. The federal government has opportunities to limit its exposure and increase the nation's resilience to extreme weather events. Since 1980, the United States has experienced 151 weather disasters with damages exceeding $1 billion each. This testimony focuses on four areas where the government could limit its fiscal exposure. Property and crop insurance. The financial risks from two federal insurance programs—the National Flood Insurance Program administered by the Federal Emergency Management Agency (FEMA) and the Federal Crop Insurance Corporation (FCIC)—create a significant fiscal exposure.
In 2012, the NFIP had property coverage of over $1.2 trillion and the FCIC had crop coverage of almost $120 billion. As of December 2013, FEMA's debt from flood insurance payments totaled about $24 billion. For various reasons, FCIC's costs more than doubled from $3.4 billion in fiscal year 2001 to $7.6 billion in fiscal year 2012. In 2007, GAO found that the agencies responsible for these programs needed to develop information on their long-term exposure to climate change. The Biggert-Waters Flood Insurance Reform Act of 2012 requires FEMA to use information on future changes in sea levels and other factors in updating flood maps used to set insurance rates. Private insurers are also studying how to include climate change in rate setting. GAO is currently examining the extent to which private and federal insurance programs address risks from climate change. Disaster aid. The federal government does not fully budget for recovery activities after major disasters, thus creating a large fiscal exposure. GAO reported in 2012 that disaster declarations have increased to a record 98 in fiscal year 2011 compared with 65 in 2004. Over that period, FEMA obligated over $80 billion for disaster aid. GAO's past work recommended that FEMA address the federal fiscal exposure from disaster assistance. Owner and operator of infrastructure. The federal government owns and operates hundreds of thousands of facilities that a changing climate could affect. For example, in its 2010 Quadrennial Defense Review, the Department of Defense (DOD) recognized the risk to its facilities posed by climate change, noting that the department must assess the potential impacts and adapt. GAO plans to report later this year on DOD's management of climate change risks at over 500,000 defense facilities. Provider of technical assistance to state and local governments.
The federal government invests billions of dollars annually in infrastructure projects that state and local governments prioritize, such as roads and bridges. Total public spending on transportation and water infrastructure exceeds $300 billion annually, with about 25 percent coming from the federal government and the rest from state and local governments. GAO's April 2013 report on infrastructure adaptation concluded that the federal government could help state and local efforts to increase their resilience by (1) improving access to and use of available climate-related information, (2) providing officials with improved access to local assistance, and (3) helping officials consider climate change in their planning processes.
Justice is responsible for collecting criminal debt and has delegated operating responsibility to its Financial Litigation Units (FLU) within all of Justice’s U.S. Attorneys’ Offices (USAO). Justice’s Executive Office for United States Attorneys (EOUSA) provides administrative and operational support, including support required for debt collection, to the USAOs. According to Justice, the FLUs typically become involved in the criminal debt collection process after the judgment, which occurs when an offender is convicted and a judge orders the offender to pay a fine or restitution. The U.S. Courts and their probation offices may also assist in collecting moneys owed. AOUSC provides national standards and promulgates administrative and management guidance, including standards and guidance required for debt collection, to the various U.S. judicial districts. In July 2001, we reported on the growth of uncollected criminal debt through fiscal year 1999. We noted that although some of the key factors that contributed to the increasing amount of criminal debt were beyond Justice’s control, certain of Justice’s criminal debt collection processes were inadequate. Accordingly, in the 2001 report, we made 14 recommendations to Justice to improve the effectiveness and efficiency of its criminal debt collection processes. In our March 2004 report, we discussed the extent to which Justice had acted on our previous recommendations to it to improve criminal debt collection. Our follow-up work on Justice’s efforts to implement our 2001 recommendations showed that it had completed actions on 7 of the 14 recommendations, most of which were completed about 2 years after we made the recommendations, and had efforts under way to address 6 other recommendations.
We noted that because many of these recommendations largely focused on establishing policies and procedures, it is important that they be effectively implemented once they are established, and it will likely take some time for collection results to be realized from full implementation. However, efforts to implement the recommendation that we considered the most critical had not progressed—namely, for Justice to participate in a multiagency effort to develop a unified strategy for criminal debt collection. Specifically, we reported that Justice had not yet worked with other agencies, including AOUSC, OMB, and Treasury, to implement a key recommendation to work as a joint task force to develop a strategic plan that addresses managing, accounting for, and reporting criminal debt. We concluded that the long-standing problems in the collection of outstanding criminal debt—including fragmented processes and lack of coordination—continued because there is no united strategy among the major entities involved with the collection process. Our case study review, on which the results described in this report are based, focused on a nonrepresentative selection of five criminal white-collar financial fraud debts that Justice reported outstanding as of September 30, 2002, each with a judgment prior to fiscal year 2001 that assessed the offender millions of dollars of restitution. We selected debts involving offenders who were not currently in prison and for which the offenders had paid a relatively small amount of the outstanding restitution amounts as of September 30, 2002. Also, our review only involved selected cases for which we could clearly identify the lead debtor in court and Justice records. We obtained sufficient information to address our three reporting objectives; however, we were not provided all of the details pertaining to each of the five selected cases and thus cannot be assured that there was not additional relevant information.
Because Justice still considers these cases to be open law enforcement cases for collection purposes, the information Justice provided for each case was limited primarily to what was included in its debt collection file minus personal identifiers, such as the names of the offenders, their addresses, and their Social Security numbers. Therefore, we are not providing a comprehensive account of any particular case. For each selected debt, we reviewed Justice's debt collection file or files, minus all personal identifiers. We interviewed appropriate officials from Justice's EOUSA and the responsible FLUs concerning actions taken to collect the debt, obstacles to collection, and prospects for future collections. To supplement or attempt to further corroborate the information obtained from Justice for each case, we obtained and reviewed pertinent information about the selected debts and debtors from certain records made available by the courts and from public sources available through the Internet, such as property records. Also, for reporting purposes, rather than highlighting specific case studies in detail, our discussions focus on specific types of debt collection problems identified during this review, many of which we were aware of from our previous work. We did this to protect the privacy of those involved in our selected cases and in consideration of Justice's concern that the release of information on open cases could hinder the department's efforts to collect the debts. We conducted our review from November 2003 through June 2004 in accordance with U.S. generally accepted government auditing standards. We received written comments signed by the Director, Executive Office for United States Attorneys, on a draft of this report. Justice's comments are reprinted in appendix I, and technical comments received from both Justice and AOUSC have been addressed as appropriate in this report.
The court-ordered restitution amounts assessed against the offenders in the five selected criminal debt cases far exceed likely collections for the crime victims. The offenders' restitution amounts totaled about $568 million. However, according to court records, only about $40 million, or about 7 percent of the total, had been collected several years after the courts sentenced each of the offenders. The vast majority of these collections resulted from asset forfeiture actions and from payments that were made before the offenders were sent to prison or placed on probation. We found that the FLUs, which typically become involved in criminal debt collection after the debt is established at judgment, performed certain debt collection activities; however, they were not able to reduce the restitution debts significantly by identifying and liquidating additional assets of the offenders to pay the victims. Moreover, based on information available to us, the FLUs' prospects are not good for collecting additional restitution amounts from the offenders to compensate their victims to the extent initially ordered by the courts. Following the judgments, despite indications of prior wealth or possession of significant financial resources, the offenders claimed to have limited financial means to pay their restitution debts. Further, there were minimal, if any, apparent negative consequences to the offenders for not paying such debts. A major debt collection problem for the FLUs for the selected cases was that up to 13 years had passed between the offenders' criminal activities and the related judgments. By the time the FLUs became involved in trying to collect the restitution debts, the offenders' assets had been, among other things, transferred to family members or others, forfeited to the government, or involved in bankruptcy.
Justice acknowledged to us that the long intervals between criminal activity and the related judgments, and certain dispositions and circumstances involving the offenders' assets or the offenders that take place during such intervals, make collection difficult for many criminal restitution debt cases. As previously mentioned, the offenders' restitution amounts for the selected cases totaled about $568 million. Restitution amounts for individual cases ranged from over $7 million to more than $400 million. Court records show that each of the offenders, who pled guilty to engaging in criminal activity, had been high-ranking officials of companies and lending institutions or operated their own business. The crimes in these cases consisted of fraudulently manipulating company sales figures and inventories to increase stock values or to obtain loans, engaging in schemes to convert business loan proceeds for personal use, selling securities to private investors under false pretenses, and illegally sharing in loan proceeds from a federally insured financial institution. The victims of the crimes involving the offenders of our selected cases included corporate shareholders, large lending institutions, and small investors—many of whom were elderly and had been harmed financially. In addition to the court-ordered restitution, prison terms ordered by the courts for four of these offenders ranged from 1 to 5 years, followed by 3 to 5 years of supervised release. One offender received several years of probation rather than prison. As of June 2004, all of the offenders were out of prison or off probation, but three offenders were still on supervised release. As noted earlier, only about $40 million, or about 7 percent of the total restitution for the selected cases, had been paid as of June 2004, about 4 to 8 years after the courts sentenced each of the offenders.
Collections for the individual cases ranged from less than 1 percent to about 10 percent of the restitution amounts owed. About $24 million of these collections resulted from asset forfeiture actions, and over $11 million from payments that were made prior to the offenders’ sentencing. After the judgments were rendered, the FLUs performed certain debt collection activities, such as filing liens on the offenders’ real property; issuing restraining notices forbidding the transfer or disposition of assets; performing title searches; and requesting, obtaining, and reviewing financial information from the offenders. Performing such activities did not enable the FLUs to further reduce the restitution debts significantly by identifying and liquidating additional assets of the offenders. For the selected cases, based on information available to us, the FLUs are not likely to collect sufficient additional restitution amounts from the offenders to compensate their victims to the extent initially ordered by the courts. At some point prior to the judgments establishing the restitution debts, each of the offenders either reported having wealth or significant financial resources to the courts or to Justice, or there were indicators of such. Specifically, prior to sentencing, one or more of the offenders reported earning millions of dollars in annual gross income, having millions of dollars in net worth, or spending thousands of dollars per month on clothing and entertainment. In addition, court records indicate that certain of the offenders converted millions of dollars of fraudulently obtained assets for personal use, established businesses for their children, or held residential properties worth millions that were located in upscale communities. 
Despite this reported wealth or these indications of significant financial resources, following their judgments, each of the offenders reported to either the courts or Justice a modest income or net worth and claimed to have limited financial means to pay restitution debt. Further, at the time of our file reviews, three of the offenders were on supervised release and making monthly or yearly payments set by the courts that will do little to reduce the outstanding balance of their restitution debts, one offender had stopped making routine monthly payments after supervised release terminated, and one offender had negotiated a settlement with the crime victim, which was approved by Justice and the court, for far less than the initial court-ordered restitution. There were minimal, if any, apparent negative consequences to the offenders for not paying restitution to their victims as initially ordered by the courts. First, information obtained from the courts and public documents indicated that the offenders were living in reasonable comfort. For example, one offender and his immediate family owned and, at the time of our review, resided in a property worth millions of dollars; another offender owns a home worth over $1 million; and two offenders took overseas trips while on supervised release. Second, after probation or supervised release has expired, the offenders cannot be sent to prison for failure to pay their restitution debts. According to Justice, the willful failure to pay a fine is the crime of criminal default, which can result in the offender's receiving an additional fine of not more than twice the amount of the unpaid balance of the fine or $10,000, whichever is greater; being imprisoned not more than 1 year; or both. However, there is no similar crime for willful failure to pay restitution.
A court may revoke or modify the terms and conditions of probation or supervised release for an offender's failure to pay restitution. However, these remedies are of little consequence once the offender has successfully completed the term of probation or supervised release. For the selected cases, according to records provided by the courts, 5 to 13 years passed between when the offenders began to engage in the criminal activities for which they were sentenced and the date of their judgments. We identified, and the FLUs acknowledged, that by the time the courts rendered the judgments establishing the restitution debts, certain of the offenders' assets were, among other things, transferred through legal or potentially fraudulent means to a family member or others, involved in forfeiture actions, subject to bankruptcy, or moved to a foreign account. In addition, one of our selected cases involved an offender who was jointly and severally liable for the debt with another offender who had been deported. Justice stated that after criminal activity occurs, years may pass before the initial investigation of a crime, let alone the arrest, trial, and conviction of an offender. Justice also stated that the primary focus during the criminal investigation, prior to judgment, is on the discovery and prosecution of the offender's criminal acts rather than on the potential future debt recovery by the federal government. During the intervals between criminal activities and the related judgments, Justice acknowledged that dispositions and circumstances involving the offenders' assets or the offenders often occur that create major debt collection challenges for the FLUs. According to Justice, criminals with any degree of sophistication, especially those engaged in fraudulent criminal enterprises, commonly dissipate their criminal gains quickly and in an untraceable manner. Assets acquired illegally are often rapidly depleted on intangible and excess “lifestyle” expenses.
Specifically, travel, entertainment, gambling, clothes, and gifts are high on the list of means to rapidly dispose of such assets. Moreover, money stolen from others is rarely invested in easily located or exchanged assets, such as readily identifiable bank accounts, stocks or bonds, or real property. Justice emphasized that the initial efforts by criminal law enforcement investigators, federal prosecutors, and the probation office promise the greatest opportunity for meaningful recovery of illegally obtained assets. Therefore, in our view, coordination among the FLUs and other entities involved in criminal debt collection is critical. According to Justice, there is no general statutory authority for Justice to obtain pretrial restraint of assets in order to satisfy a potential criminal judgment that may result in a restitution debt. However, once such a judgment is imposed, Justice can proceed against a third party by filing a separate federal action to recover the assets or proceeds thereof. Justice emphasized that it must prove by a preponderance of the evidence that the offender fraudulently transferred assets, which is often a lengthy process. Moreover, even when a valid claim is made against a third party for a fraudulent transfer, the third party may have a “good faith” defense if the transfer was accepted in exchange for a “reasonably equivalent value.” The challenges encountered in collecting restitution debt from offenders who may have transferred assets to others through legal or potentially fraudulent means were evident in our review of selected cases. According to Justice, at least one of the offenders in our selected cases has engaged in a shell game for the purpose of shielding their assets. In addition, Justice stated that at least one of the offenders has not provided full financial disclosure, and that the FLU is currently exploring whether the offender fraudulently conveyed assets to family members and others.
Based on information in Justice and court records, certain of the offenders in the selected cases engaged in one or more of the following activities:

- Prior to the judgment, the offender and the offender's family established trusts, foundations, and corporations for their assets at about the same time they closed numerous bank and brokerage accounts.
- Over the course of several years, the offender converted for personal use hundreds of millions of dollars obtained through illegal white-collar business schemes.
- Several years prior to the judgment, the offender's minor child, who is now an adult, was given the offender's company. As of completion of our fieldwork, that company employed the offender.
- Prior to the judgment, the offender placed a multimillion-dollar residence in a trust.
- Prior to the judgment, the offender established a trust worth hundreds of thousands of dollars for the offender's child.
- The offender and the offender's family rent their expensively furnished residence, which they previously owned, from a relative.

Justice stated that forfeited assets are the property of the federal government and do not always go to crime victims. Justice can restore forfeited assets to a victim upon the victim's filing of a petition, but only in those limited cases when it is the victim's actual property that is being restored. According to Justice, the FLUs' coordination with Justice's Asset Forfeiture Unit and others at the outset of the case is invaluable in securing assets for payment of the victims' restitution when such potential exists. The importance of such coordination to securing forfeited assets for the crime victim was evident in one of our selected cases. Court records showed that about $175 million of the offender's assets that had been identified as related to the case had been forfeited; however, the FLU's records showed that only about $50 million of such assets had been forfeited.
At the time of our file review, the FLU was not certain whether any forfeited assets had been, or could be, applied toward the offender’s restitution debt. Subsequent to our visit to the FLU and our inquiries related to this matter, Justice stated that only about $24 million of the $50 million of forfeited assets in its records may be applied toward the offender’s restitution debt as a result of a petition filed by the victim. According to Justice, bankruptcy can impair the FLU’s ability to collect criminal restitution debt. When a bankruptcy proceeding is initiated before the criminal judgment, the bankruptcy estate attaches to all of the offender’s property and rights to property, which can significantly limit assets available for restitution. When a bankruptcy proceeding is initiated after the criminal judgment, the United States may file a proof of claim in the bankruptcy proceeding and may have secured status if its lien was perfected against any of the defendant’s property. However, there may be other creditors seeking payment from the offender’s estate, including often the Internal Revenue Service. These other creditors may be just as much victims of the offender as the victims named in the restitution order and may also have valid interests in payment from the estate. Moreover, bankruptcy’s automatic stay may limit the FLUs’ ability to otherwise enforce the debt. For one of the selected cases, the offender went into bankruptcy prior to the judgment. Shortly after the judgment, which was rendered over 5 years ago, the FLU issued a restraining notice to the offender, forbidding the transfer or disposition of his assets, and filed a lien on certain property. However, according to the FLU, the ongoing bankruptcy has prevented it from taking additional collection action. 
Recently, Justice stated that it had been advised by the bankruptcy trustee that for this case, most of the offender’s bankruptcy estate of several million dollars would be distributed to the victim. Justice emphasized that generally for cases in which the offender goes into bankruptcy prior to the judgment, the criminal restitution debt will only be recognized as a general unsecured debt and, therefore, most often will not be satisfied. Justice stated that money obtained illegally is often moved to offshore accounts or to debtor-haven countries. In the absence of a treaty with a foreign government or a provision of law to provide for the repatriation of money transferred to foreign accounts, acquiring such money for the liquidation of an offender’s restitution debt is difficult at best. Justice also stated that certain offenders are deported; however, they continue to be liable for the unpaid portion of their restitution debts, as current law requires that the debts stay on the books for 20 years after the period of incarceration ends or after the judgment if no incarceration is ordered. Justice acknowledged that potential collection actions are limited for offenders who have been deported. For example, liens filed in counties where the offender previously held property have little, if any, effect when offenders have moved assets and are living abroad. In addition, FLU officials cannot subpoena financial information from offenders who have been deported or obtain depositions from such offenders regarding their assets. Debt collection complications due to transfers of assets to foreign accounts and the deportation of offenders were evident in our selected cases. For one case, according to Justice, the FLU’s efforts to identify and secure assets of the offender to liquidate the restitution debt have been hampered, in part, because the offender had established, among other things, a foreign bank account for the purpose of shielding his assets. 
For another case involving two offenders who were jointly and severally liable for the restitution debt, one offender had settled his liability for the debt, with the approval of Justice and the court, by paying the victim far less than the amount initially ordered by the court. With regard to this offender, Justice stated that his reported assets and net worth were such that the FLU did not consider it reasonable to expect that additional collection efforts would produce positive results. The FLU was left with little recourse for additional collection action because the other offender in the case, who is still liable for the remainder of this debt, was deported after serving a prison term. Our March 2004 report and ongoing discussions with your office have kept you apprised of progress in implementing the recommendations included in our 2001 report. As discussed more fully in the background section of this report, Justice has made progress in establishing certain policies and procedures to improve criminal debt collection. Unfortunately, the effort we considered key to more substantive progress, namely, development of a strategic plan by all of the involved entities, had not been started. However, very recently, the Congress directed the Attorney General to develop a strategic plan with certain other federal agencies to improve criminal debt collection. Specifically, the conference report that accompanied the Consolidated Appropriations Act, 2005, Public Law No. 108-447, signed into law on December 8, 2004, included language to further the implementation of our 2001 recommendation regarding the establishment of an interagency task force for the purpose of better managing, accounting for, reporting, and collecting criminal debt.
In the conference report, the conferees directed the Attorney General to establish a task force within 90 days of enactment of the act and to include specified federal agencies, such as Treasury, OMB, and AOUSC, in the task force. Led by the Department of Justice, the task force will be responsible for developing a strategic plan for improving criminal debt collection. The strategic plan is to include specific approaches for better managing, accounting for, reporting, and collecting criminal debt. Specifically, the plan is to include steps that can be taken to better and more promptly identify all collectible criminal debt so that a meaningful allowance for uncollectible criminal debt can be reported and used for measuring debt collection performance. Also, the conferees directed the Attorney General to report to the Committees on Appropriations within 180 days of enactment of the act on the activities of the task force and the development of a strategic plan. Given such poor prospects for collection for the selected cases, as well as the overall low collection rates for criminal debt we have previously reported, it is important that Justice determine how to better maximize opportunities to make offenders' assets available to pay offenders' victims once judgments establish restitution debts. By taking advantage of all debt collection opportunities, Justice may be able to better achieve the intent of MVRA, which is to compensate crime victims to the extent of their financial loss. Justice can best accomplish this aim by implementing the recommendation we made in 2001 to work with AOUSC, OMB, and Treasury to develop a strategic plan as now also called for by the conference report accompanying the Consolidated Appropriations Act, 2005, to address managing, accounting for, and reporting criminal debt including the collectibility of such debt.
Further, our review of the five selected criminal white-collar financial fraud debts, in conjunction with the findings on our previous criminal debt collection work, strongly supports the need for Justice to take the leadership role in promptly addressing this recommendation. Effective coordination and cooperation are essential for maximizing collections, and as the federal agency primarily responsible for criminal debt collection, Justice's leadership in this effort is vital. The strategic plan should include a determination of how to best maximize opportunities to make offenders' assets available to pay the victims once judgments establish restitution debts. Until such a strategic plan is developed and effectively implemented, which could involve legislative as well as operational initiatives, the effectiveness of criminal restitution as a punitive tool may be diminished, and Justice will lack adequate assurance that offenders are not benefiting from ill-gotten gains and that innocent victims are being compensated for their losses to the fullest extent possible. To help ensure that the strategic plan called for in the conference report effectively addresses all potential opportunities for collection, we recommend that the Attorney General include in the strategic plan legislative initiatives, operational initiatives, or both that are directed toward maximizing opportunities to make offenders' assets available to pay victims once restitution debts are established by judges. To monitor progress in leading the development and implementation of the strategic plan, we also recommend that the Attorney General report annually in Justice's Accountability Report on progress toward developing and implementing a strategic plan to improve criminal debt collection. This report should include a discussion of any difficulties or impediments that significantly hinder such progress.
Overall, the comments of Justice's EOUSA on a draft of this report, which are reprinted in appendix I, are consistent with our conclusion that given such poor prospects for collection for the selected cases, as well as the overall low collection rates for criminal debt we have previously reported, it is important that Justice determine how to better maximize opportunities to make offenders' assets available to pay offenders' victims once judgments establish restitution debts. EOUSA stated that consistent with our recommendation and the conference report that accompanied the Consolidated Appropriations Act, 2005, Justice is in the process of organizing an interagency joint task force to develop a strategic plan for improving criminal debt collection. EOUSA did not specifically comment on our recommendations, including the recommendation that the Attorney General include in the strategic plan legislative initiatives, operational initiatives, or both that are directed toward maximizing opportunities to make offenders' assets available to pay victims once restitution debts are established by judges. However, EOUSA did emphasize that current statutes do not provide adequate remedies for the collection of criminal debt and cited several examples, including the lack of general statutory authority for the United States to obtain pretrial restraint of assets in order to satisfy a potential criminal judgment that may result in a restitution debt. Regarding operational initiatives, as stated in this report, because many of the recommendations we have previously made to Justice to improve criminal debt collection focused on establishing policies and procedures, it is important that the policies and procedures be effectively implemented once they are established. Moreover, any multiagency effort to develop a unified strategy for criminal debt collection will need to address operational issues.
Both EOUSA and AOUSC provided technical comments that have been addressed as appropriate in this report. As agreed with your office, unless you announce its contents earlier, we plan no further distribution of this report until 30 days after its issuance date. At that time, we will send copies to the Chairmen and Ranking Minority Members of the Senate Committee on Homeland Security and Governmental Affairs; the Subcommittee on Financial Management, the Budget, and International Security, Senate Committee on Homeland Security and Governmental Affairs; and the Subcommittee on Government Efficiency and Financial Management, House Committee on Government Reform. We will also provide copies to the Attorney General, the Director of the Administrative Office of the U.S. Courts, the Director of the Office of Management and Budget, and the Secretary of the Treasury. Copies will be made available to others upon request. The report will also be available at no charge on GAO’s Web site, at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512-3406 or [email protected] or Kenneth R. Rupar, Assistant Director, at (214) 777-5714 or [email protected]. Staff acknowledgments are provided in appendix II. The following are GAO’s comments on the Department of Justice’s letter dated January 13, 2005. 1. As discussed in this report, only about $40 million, or about 7 percent, of the $568 million restitution for these five selected cases had been paid as of June 2004, and collections for these individual cases ranged from less than 1 percent to about 10 percent of the restitution amounts owed. Prospects are not good for collecting additional restitution to fully compensate the crime victims for the selected cases in our study.
Regardless of whether these offenders currently have, or once had, wealth equal to the restitution amounts, the disparity between restitution owed to the crime victims for the financial losses they incurred as a result of criminal activity and amounts paid to the victims by the offenders makes it necessary for Justice to take advantage of all debt collection opportunities to better achieve the intent of MVRA, which is to compensate crime victims to the extent of their financial loss. 2. EOUSA stated that the USAOs had collected over $4 billion on behalf of victims of crime over the last 5 years. However, as stated in this report, the low collection rate (about 7 percent of the ordered restitution) for the selected cases is consistent with the overall collection rates for criminal debt that we have previously reported. In 2004, we reported that according to Justice’s unaudited records, collections relative to outstanding criminal debt averaged about 4 percent for fiscal years 2000, 2001, and 2002 (GAO-04-338). In 2001, we reported that criminal debt collection averaged about 7 percent for fiscal years 1995 through 1999 (GAO-01-664). Richard T. Cambosos, Michael D. Hansen, Andrew A. O’Connell, Ramon J. Rodriguez, Linda K. Sanders, and Matthew F. Valenta made key contributions to this report. The Government Accountability Office, the audit, evaluation, and investigative arm of Congress, exists to support Congress in meeting its constitutional responsibilities and to help improve the performance and accountability of the federal government for the American people. GAO examines the use of public funds; evaluates federal programs and policies; and provides analyses, recommendations, and other assistance to help Congress make informed oversight, policy, and funding decisions. GAO’s commitment to good government is reflected in its core values of accountability, integrity, and reliability.
The fastest and easiest way to obtain copies of GAO documents at no cost is through GAO’s Web site (www.gao.gov). Each weekday, GAO posts newly released reports, testimony, and correspondence on its Web site. To have GAO e-mail you a list of newly posted products every afternoon, go to www.gao.gov and select “Subscribe to Updates.”
In the wake of a recent wave of corporate scandals, Senator Byron L. Dorgan noted that the American taxpayers have a right to expect that those who have committed corporate fraud and other criminal wrongdoing will be punished, and that the federal government will make every effort to recover assets held by the offenders. Recognizing that GAO previously reported on deficiencies in the Department of Justice's (Justice) criminal debt collection processes (GAO-01-664), Senator Dorgan asked GAO to review selected criminal white-collar financial fraud cases for which large restitution debts have been established but little has been collected. Specifically, GAO was asked to determine (1) the status of Justice's efforts to collect on the outstanding debt, (2) the prospects for future collections, and (3) whether specific problems have affected Justice's ability to collect the debt. The court-ordered restitution for the five selected white-collar financial fraud criminal debt cases GAO reviewed far exceeded amounts likely to be collected and paid to the victims. These offenders, who had either been high-ranking officials of companies or operated their own business, pled guilty to crimes for which the courts ordered restitution totaling about $568 million to victims. As of the completion of GAO's fieldwork, which was up to 8 years after the offenders' sentencing, court records showed that amounts collected for the victims in these cases totaled only about $40 million, or about 7 percent of the ordered restitution. At some point prior to the judgments establishing the restitution debts, each of the five offenders either reported having wealth or significant financial resources to the courts or to Justice, or there were indicators of such. However, following the judgments, the offenders claimed that they were not financially able to pay full restitution to their victims. 
Justice's Financial Litigation Units (FLU) that were responsible for collection performed certain activities to collect the debts after the judgments, but the debts had not been significantly reduced as a result of the FLUs' identification and liquidation of additional offender assets. The FLUs' prospects for collecting additional restitution in these cases are poor. A major problem hindering the FLUs' ability to collect restitution debt in the selected cases was the long interval between the criminal offense and the judgment. Court records show that 5 to 13 years passed between the time the offenders began to engage in the criminal activity for which they were sentenced and the date of their judgments. In each of the selected cases, by the time the court rendered the judgment establishing the restitution debt, certain of the offenders' assets had been, among other things, transferred to family members or others, involved in forfeiture actions, subject to bankruptcy, or moved to a foreign account. In addition, one of the selected cases involved an offender who was jointly and severally liable for the debt with another offender who had been deported. Justice acknowledged that such dispositions and circumstances are not uncommon and create major debt collection challenges for the FLUs. Moreover, there were minimal, if any, apparent negative consequences to these offenders for not paying their restitution debts. Recently, to further the implementation of a related recommendation GAO made in 2001, the Congress directed the Attorney General to develop a strategic plan with certain other federal agencies to improve criminal debt collection.
Given the significant upward trend in outstanding criminal debt and the difficulty experienced by Justice in collecting criminal restitution debt, it is important that Justice include in such a plan legislative initiatives, operational initiatives, or both to enhance the federal government's capacity to collect restitution for victims of financial crimes.